Community-Contributed Sessions

CCS.1: Polarimetric SAR Advanced Methods for optimum information extraction
T/S.4: SAR Imaging and Processing Techniques — PolSAR and PolInSAR
Polarimetry is about to enter a new operational era with the upcoming polarimetric satellite SAR missions (ALOS-4, NISAR, and Tandem-L) equipped with digital beamforming. ALOS-4, planned for launch in 2023, will permit polarimetric SAR imaging at 3 m resolution over a 100 km swath. This is a significant advance over the existing polarimetric satellite SAR missions (RADARSAT-2, ALOS-2, and TerraSAR-X), which are limited to a 50 km swath. With the approach of this new era of operational use of polarimetric SAR in support of key applications, it is important to reconsider the state of the art in the methodology and tools currently adopted for polarimetric information extraction. This special session will allow us to learn more about the most advanced tools recently developed for optimum polarimetric information extraction, such as target scattering decomposition, speckle filtering, image classification, and polarimetric SAR modeling. The session, which will gather leading-edge guest scientists, will also give the opportunity to discuss the gaps that must be filled to fully exploit the polarimetric information provided by satellite and airborne SAR in support of key applications.
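As a concrete illustration of the target scattering decompositions referred to above, a minimal numpy sketch of the classical Pauli decomposition is given below; the function and array names are illustrative assumptions, not the session's prescribed method.

```python
import numpy as np

def pauli_components(s_hh, s_hv, s_vv):
    """Pauli decomposition of a quad-pol scattering matrix.

    s_hh, s_hv, s_vv : complex arrays holding the scattering-matrix elements.
    |k1|^2, |k2|^2, |k3|^2 are commonly displayed as an RGB composite and are
    associated with odd-bounce, double-bounce, and volume scattering.
    """
    k1 = (s_hh + s_vv) / np.sqrt(2)
    k2 = (s_hh - s_vv) / np.sqrt(2)
    k3 = np.sqrt(2) * s_hv
    return k1, k2, k3
```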

If accepted, this session would be the 14th session organized by Dr. R. Touzi on advanced polarimetric methods in the context of IGARSS since IGARSS 2008. The eleven IGARSS sessions he organized and co-chaired with Dr. J.S. Lee from 2008 to 2019 (just before the COVID-19 pandemic) were highly successful, with 70 to 100 in-person attendees at each session.

Dr. R. Touzi, the Chairman, was elevated to IEEE Fellow in 2015 for his outstanding contributions to the design and calibration of polarimetric satellite SAR missions and to the development of advanced methods and their validation through applications. He was the recipient of the 1999 Geoscience and Remote Sensing Transactions Prize Paper Award for the study on “Coherence Estimation for SAR Imagery”.

Prof. E. Pottier, the Co-chairman, was elevated to IEEE Fellow in 2011 with the citation “for contributions to polarimetric synthetic aperture radar.” He was the recipient of the 2007 IEEE GRS-S Letters Prize Paper Award and the 2007 IEEE GRS-S Education Award in recognition of his significant educational contributions to geoscience and remote sensing. In 2009, Prof. Pottier co-authored with Dr. Jong-Sen Lee the book Polarimetric Radar Imaging: From Basics to Applications (CRC Press, Taylor & Francis, 2009), which has become a standard reference in polarimetry.

CCS.2: DL paradigm for SAR Image Restoration: current methodologies and future trends
T/D.8: Data Analysis — Feature Extraction and Reduction
Deep Learning (DL) has become one of the main tools for many image processing tasks and, in particular, for image restoration and enhancement. The ability to automatically extract features from the data allows state-of-the-art performance to be achieved in many fields, such as denoising and super-resolution. At the same time, SAR sensors have become one of the main tools for Earth observation, providing images in any meteorological condition, with ever finer resolution and shorter revisit times.
Despite the continuous technological advancement, the provided images usually require a restoration step as a fundamental pre-processing operation to improve their interpretation and facilitate further tasks and applications in several remote sensing domains, such as scene classification, segmentation, 3D reconstruction, and target detection.
Indeed, the acquisition system, the image formation process, and the coherent nature of SAR sensors make the interpretation of SAR data a challenging task: the presence of noise, phase wrapping phenomena, and geometric distortions all impair the performance of the above-mentioned applications.
Following the robust performance of DL-based methods in natural-image restoration, researchers have recently invested considerable effort in defining DL-based methods that exploit this potential in the SAR domain as well. As a result, many DL-based methods for SAR pre-processing have been proposed in recent years. Indeed, the versatility, efficiency, and wide applicability of DL methods fit well with the need for fast and precise processing of SAR images for continuous Earth observation.
The results obtained so far are impressive, but many issues remain open for many applications: the lack of real ground truth, a comprehensive assessment of results on real data, investigation of architectures, merging domain knowledge with the DL paradigm, and supervised versus unsupervised training.
Beyond presenting the current state of the art, this session addresses the open issues related to the application of DL in the SAR domain: generation of training sets, speckle simulation and modelling, new network architectures, representations suited to SAR images, extensions to polarimetric and interferometric data, and a better understanding of the results produced by the networks. The aim is to stimulate an open discussion on the presented results and on what is worth investigating in the near future.
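In connection with the speckle simulation and modelling theme above, the fully developed multiplicative speckle model is often used to build synthetic training pairs; a minimal numpy sketch follows (the gamma model and the number of looks are the usual textbook assumptions, not a statement about any particular paper).

```python
import numpy as np

def add_speckle(intensity, looks=1, seed=0):
    """Simulate fully developed multiplicative speckle on an intensity image.

    intensity : noise-free (or despeckled) intensity image
    looks     : equivalent number of looks L; the speckle is Gamma(L, 1/L)
                distributed with unit mean, so the image mean is preserved.
    """
    rng = np.random.default_rng(seed)
    speckle = rng.gamma(shape=looks, scale=1.0 / looks, size=intensity.shape)
    return intensity * speckle
```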
CCS.3: Representation learning in remote sensing
T/D.8: Data Analysis — Feature Extraction and Reduction
Over the last decade, machine learning and deep learning paradigms have experienced an astonishing development in Earth observation, e.g., in land cover classification, forest and disaster mapping. However, most of the deep learning-based methods developed for remote sensing are supervised. A major pitfall of deep learning based supervised techniques is their high dependence on a large and well-representative corpus of labelled data. It is expensive and time-consuming to obtain such labels in Earth observation. Thanks to the Copernicus program by the European Space Agency, a massive amount of unlabeled Earth observation data is currently available. Supervised methods do not effectively exploit this abundant pool of unlabeled data.

In this session, we will gather the community looking into new ways of decreasing the need for label information in deep learning models: approaches rooted in weakly supervised, self-supervised, and meta learning will be discussed by leaders in the field, debating the adequacy of current strategies as well as the way forward for robust, multitask, and adaptive models for Earth observation.
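As one example of the self-supervised approaches mentioned above, a minimal PyTorch sketch of a SimCLR-style contrastive (NT-Xent) loss between two augmented views of the same image batch is given below; it is illustrative only and not tied to any specific speaker's method.

```python
import torch
import torch.nn.functional as F

def nt_xent_loss(z1, z2, temperature=0.5):
    """Contrastive loss between embeddings of two augmented views, each (N, D)."""
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)      # (2N, D), unit norm
    sim = z @ z.t() / temperature                            # scaled cosine similarities
    n = z1.shape[0]
    mask = torch.eye(2 * n, dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(mask, float("-inf"))               # drop self-similarities
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)]).to(z.device)
    return F.cross_entropy(sim, targets)                     # positives are the paired views
```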
CCS.4: 3D mapping with remote sensing data for Digital Twins, disaster management, and building information
T/D.10: Data Analysis — 3D mapping
3D mapping is a fundamental process for creating three-dimensional digital representations of reality that are required for both local modeling (e.g., of water/air circulation) and global monitoring (e.g., ocean altimetry). Many of these processes involve remote sensing techniques and data. This session addresses the current challenges that arise in the context of 3D mapping from RS collections and processing workflows: How do you cost-effectively create a high-quality, reliable, sufficiently up-to-date and accurate dataset on a regional and global scale? What role does AI play in this context? What is the role of 3D mapping in building and operating Digital Twins of the urban environment? How do we integrate LiDAR, SAR, optical imagery, and in-situ sensing together with AI to produce models that are both photo-realistic and functionally object-oriented? How can we use 3D environments in models for simulation and prediction? How can we exchange and combine information about 3D objects within and between models? What role does remote sensing play for infrastructure/urban planning, governance, proactive maintenance and monitoring?
CCS.5: Algorithms and data analysis of multi-frequency SAR data for scientific downstream applications
T/D.14: Data Analysis — Change Detection and Temporal Analysis
Current satellite Synthetic Aperture Radar (SAR) missions provide an unprecedentedly wide spectrum of observations of the Earth’s surface ranging from X- to L-band. Therefore, the Geoscience and Remote Sensing Society (GRSS) community is increasingly provided with: (i) continuity of observations with respect to previous SAR missions; (ii) opportunities to task collection of spatially and temporally co-located datasets in different bands. 
The challenge is now to develop processing algorithms and data analysis methods that can make the best of this multi-frequency observation capability, in order to address a multitude of applications. These applications include but are not limited to: retrieval of geophysical parameters; land cover classification; interferometric (InSAR) analysis of ground deformation and structural health monitoring; and generation of value-added products useful to end-users and stakeholders, for example for civil protection, disaster risk reduction, and resilience building.
At IGARSS 2022, this challenge was investigated in depth in light of the most recent algorithm developments achieved by the scientific community at the mid-term review of the “Multi-mission and multi-frequency SAR programme” launched by the Italian Space Agency (ASI) in 2021. Confirming that the processing and analysis of multi-frequency SAR data are topical research themes for the Geoscience and Remote Sensing community, the 2022 session attracted 12 high-quality papers and consistent attendance by the delegates.
One of the key outcomes was that the community of public research bodies and industry – also in the framework of international partnerships – has designed, developed and tested innovative methods, techniques and algorithms for exploitation of multi-mission/multi-frequency SAR data, with credible perspectives of engineering and pre-operational development, thus being able to contribute to the improvement of socio-economic benefits of end-users. This is what, in the space agencies’ jargon, is referred to as “scientific downstream”, i.e. applications enabled by mature and validated algorithms developed upon robust scientific methodology and experiments.
The present session therefore aims to build upon this evidence and showcase the improvements and further progress achieved by the community after one year, at the end of ASI's “Multi-mission and multi-frequency SAR programme”. Moreover, the session aims to let the research consortia discuss their methods with the larger international community, in order to assess how their processing routines, prototype products, and platforms are positioned with regard to the current state of the art.
To this end, as a Community Contributed Session, the session will couple solicited talks with open-call contributions submitted by scientists of the whole Geoscience and Remote Sensing community working with multi-frequency SAR data. The expectation is for scientific and technical papers focused on novel processing algorithms and routines for X-, C- and L-band SAR data (including but not limited to COSMO-SkyMed First and Second Generation, Sentinel-1, SAOCOM) and methods for data fusion and integration (e.g., for applications based on change detection, time series analysis, classification, parameter retrieval). Papers presenting the generation and demonstration of value-added products are particularly welcome.
CCS.6: How can multi-temporal InSAR coherence benefit your application? Reports from recent findings in various applications
T/D.14: Data Analysis — Change Detection and Temporal Analysis
Sentinel-1 provides a stable and consistent acquisition scheme with a constant sampling rate (6 or 12 days) and nearly-zero spatial baseline. This has made it possible to investigate the extraction of information from time series of repeat-pass interferometric coherence, both at global scale and over long time records. Coherence enlarges the observation space of radar measurements beyond intensity (radiometry) and polarimetry. This extra channel provides information complementary to the other two dimensions of the data.
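For readers less familiar with this observable, repeat-pass coherence is usually estimated with a local boxcar window over two co-registered single-look complex images; a minimal numpy/scipy sketch is shown below (the window size and the small regularization constant are illustrative choices).

```python
import numpy as np
from scipy.signal import convolve2d

def coherence(s1, s2, win=5):
    """Boxcar estimate of repeat-pass coherence magnitude from two SLC images.

    s1, s2 : complex 2-D arrays of co-registered single-look complex pixels
    win    : side length of the square estimation window (illustrative choice)
    """
    k = np.ones((win, win))
    num = convolve2d(s1 * np.conj(s2), k, mode="same")
    p1 = convolve2d(np.abs(s1) ** 2, k, mode="same")
    p2 = convolve2d(np.abs(s2) ** 2, k, mode="same")
    return np.abs(num) / np.sqrt(p1 * p2 + 1e-12)
```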

Coherence has proven useful for land cover classification, crop-type mapping, and retrieval of biophysical parameters, such as crop vegetation indices, and forest tree height and biomass. 

Planned and future sensors (NISAR, ROSE-L, S1-D, etc.) will also be characterised by consistent acquisition schemes with constant revisit times and nearly-zero baselines. Therefore, in this session we review the findings already available and look ahead to the potential of coherence time series from future systems.
CCS.7: Searching for Activities from Heterogeneous EO Satellite Sensors
T/D.14: Data Analysis — Change Detection and Temporal Analysis
Over the past two decades, the pace of innovation in the space remote sensing industry has accelerated.  These innovations are driving interest in exploring new capabilities that leverage the increasing diversity and volume of satellite imagery.   A new challenge for the remote sensing community is to understand how to exploit these satellite sensors with their varying imaging capabilities for a particular application.

The goal of this session is to explore recent advances in one such challenge: the development of tools and techniques for automated broad-area search (BAS) to detect, monitor, and characterize the progression of anthropogenic activities using time-series spectral imagery from multiple space-based sensors.  In contrast to traditional problems of object detection or pixel classification, this workshop will cover recent advances in the exploitation of heterogeneous satellite sensors to detect and characterize a pre-defined spatio-temporal activity over large regions.  

Specifically, the workshop will cover 
1.	Release of a new satellite image dataset from multiple sensors with annotations of heavy and light construction sites.
2.	Approaches for annotating activities that are changing in space and time. Instead of using pixel masks to denote the locations of objects, new methods are required to annotate the spatial and temporal extents of activities on the ground.
3.	New metrics for analyzing the performance of activity detection and characterization algorithms
4.	AI solutions that apply contextual reasoning over changes in background, atmospheric conditions and viewing geometries to detect man-made activities.  Use of physics-based, statistical, and modern deep learning approaches for broad area search of anthropogenic activities will be presented.
5.	AI solutions for monitoring the progression of activities using heterogeneous satellite imagery.  Signal processing and deep learning methods for time series analysis will be presented for activity classification and prediction.
CCS.8: Rare target detection in spectral imagery
T/D.15: Data Analysis — Hyperspectral Data Processing and Analysis
Having acquired remote sensing imagery of a scene, one often wants to determine whether targets of interest can be found in that imagery. The targets may be specific, such as gas plumes from a leaking storage tank or military tanks from an invading army or disease-infected trees in a large forest. Alternatively, one may simply seek anomalies or anomalous changes whose properties are later determined by follow-up investigations.

This session will explore models and methodologies for detecting rare targets in multispectral or hyperspectral imagery. Rare targets are small compared to the image itself, possibly even smaller than a single pixel, and appear sparsely if at all in any given image. One hallmark of algorithms for rare targets is the need to operate in the low false alarm rate regime. Another is that there are typically too few actual targets in actual images to provide a statistically adequate sample for direct training. Successful algorithms may have to incorporate both physical and data-driven modeling of the target and background.
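As a point of reference for the detection problem described above, one classical baseline is the adaptive matched filter, which scores each pixel against a known target spectrum under a Gaussian background model; the numpy sketch below is purely illustrative (variable names, the pseudo-inverse, and the Gaussian assumption are ours, not the speakers').

```python
import numpy as np

def amf_scores(pixels, target, background):
    """Adaptive matched filter scores for rare-target detection.

    pixels     : (N, B) spectra under test
    target     : (B,)  known target spectrum
    background : (M, B) target-free spectra used to estimate clutter statistics
    """
    mu = background.mean(axis=0)
    cov_inv = np.linalg.pinv(np.cov(background, rowvar=False))  # pinv guards ill-conditioning
    s = target - mu
    x = pixels - mu
    return (x @ cov_inv @ s) ** 2 / (s @ cov_inv @ s)  # threshold in the low false-alarm regime
```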

Although the efficacy of a given algorithm on a given problem can be encapsulated in a single ROC curve, the topic lends itself to broader questions. One is how best to manage the trade-off between signal processing and machine learning, particularly as deep convolutional neural networks become more efficient and powerful. Another is the relative importance of modeling the non-Gaussian background, characterizing the spectral variability of the targets, and appropriately dealing with unknown nuisance variables such as target size or strength. It is anticipated that the session will lead to fruitful discussions aimed at tackling these and other relevant themes.

In this regard, the session will bring together experts in the field to discuss how theoretical solutions can be tailored to real-world applications, to explore new emerging approaches and technologies, and to outline future directions.

The session will start by exploring next-generation low size, weight, and power (SWaP) hyperspectral sensors for rare target detection applications. Then the detection of remarkably weak, subtle, and difficult targets will be addressed by dealing with greenhouse gas point sources and occluded objects. The following three contributions will step into the arena of advanced hyperspectral data modeling by focusing on alternative representations for hyperspectral data and providing insights into complex background modeling. The ‘back-up’ contribution addresses anisotropic distributions for modeling non-target pixels. The back-up is listed in the unlikely event that one of our main speakers is ultimately unable to present. (If we do have a full session, the back-up contribution will be submitted to IGARSS as a general paper and can be assigned by the organizers to the appropriate session.)
CCS.9: Recent Advances in Hyperspectral Image Processing: Methodology and Application
T/D.15: Data Analysis — Hyperspectral Data Processing and Analysis
Remote sensing technology is an important technical means for human beings to perceive the world, and hyperspectral remote sensing has become a mainstream topic of current research. Hyperspectral image classification (HSIC) is a pixel-level classification task, mainly used for the fine extraction and recognition of ground object information. HSIC is the basis for subsequent practical applications of hyperspectral images and has very important research significance; it is widely used in digital precision agriculture, environmental monitoring, national defense and military strategy, and other fields. With the rapid development of artificial intelligence technology, many new hyperspectral image classification methods and algorithms have been proposed. Moreover, rapid advances in these methods have also promoted the application of associated algorithms and techniques to problems in many related fields. This session aims to report and cover the latest advances and trends in hyperspectral image classification. Papers on both theoretical methods and applied techniques, as well as contributions applying new advanced methodologies to relevant remote sensing scenarios, are welcome.
CCS.10: Ushering in a new era of spaceborne hyperspectral sensors to propel a gigantic leap in Earth remote sensing
T/D.15: Data Analysis — Hyperspectral Data Processing and Analysis
We are entering a new era of hyperspectral remote sensing with data acquisition from several spaceborne sensors. These include data from sensors already in orbit, such as: 1. the German Aerospace Center's (Deutsches Zentrum für Luft- und Raumfahrt, DLR) Earth Sensing Imaging Spectrometer (DESIS) onboard the International Space Station (ISS), 2. the Italian Space Agency (ASI) PRISMA (Hyperspectral Precursor of the Application Mission), and 3. DLR's Environmental Mapping and Analysis Program (EnMAP). Further, Planet Labs PBC recently announced the launch of two hyperspectral sensors, called Tanager, in 2023. NASA is planning the hyperspectral Surface Biology and Geology (SBG) mission. In addition, over 70,000 hyperspectral images of the planet acquired by NASA's Earth Observing-1 (EO-1) Hyperion are freely available to anyone from the U.S. Geological Survey's data archives. These sensors acquire data in 200-plus hyperspectral narrowbands (HNBs) with 2.55 to 12 nm bandwidths, in either the 400-1000 nm or 400-2500 nm spectral range, with SBG also acquiring data in the thermal range. HNBs provide data as “spectral signatures”, in stark contrast to the “few data points along the spectrum” provided by multispectral broadbands (MBBs) such as the Landsat satellite series.
Our overarching goal in this community contributed session is to study the world's leading agricultural crops using HNB data, compare them with MBB data, and see where and how we can make advances in crop type classification, crop health and stress studies, and in quantifying crop biophysical and biochemical parameters. The session will address:
1. Approaches to hyperspectral data analysis including issues of Hughes’s phenomenon and data redundancy, 
2. Strategies of analyzing hyperspectral data either through optimal bands or full spectrum, 
3. Building global hyperspectral imaging spectral library of crops (GHISA), 
4. Developing hyperspectral vegetation indices (HVIs) (see the illustrative sketch after this list), and
5. Establishing the performance of HNBs and HVIs in study of world crops.
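As a simple illustration of what a narrowband HVI computation looks like, the sketch below picks the HNBs nearest two example wavelengths from a reflectance cube; the chosen band centers and variable names are assumptions for illustration only.

```python
import numpy as np

def narrowband_vi(cube, wavelengths, red_nm=680.0, nir_nm=800.0):
    """Example narrowband vegetation index from a hyperspectral reflectance cube.

    cube        : (rows, cols, bands) reflectance array
    wavelengths : (bands,) band-center wavelengths in nanometres
    """
    red = cube[..., np.argmin(np.abs(wavelengths - red_nm))]
    nir = cube[..., np.argmin(np.abs(wavelengths - nir_nm))]
    return (nir - red) / (nir + red + 1e-6)
```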
CCS.11: Coexistence of Communication and Passive Sensing Technologies
T/D.16: Data Analysis — RFI Detection and Mitigation
The dramatic growth in use of wireless technologies has greatly benefitted society in many areas. However, the proliferation of new application technologies, such as the Internet of Things (IoT), Unmanned Aircraft Systems (UAS), radars for transportation and motion sensing, as well as new infrastructure technologies such as 5G wireless, has brought forth new challenges that must be addressed in light of the demand on the wireless spectrum placed by such applications. The electromagnetic (EM) spectrum is limited and must be appropriately shared among all wireless systems and applications, including both active and passive uses. The increasing demand for spectrum is largely driven by commercial services such as mobile broadband wireless access. At the same time, passive uses of the spectrum including radio astronomy service (RAS) and atmospheric and geospace science under the earth exploration-satellite service (EESS) as well as critical but non-commercial active uses such as weather radar and the Global Positioning System (GPS) need to be preserved. These services are bound by physical constraints that prevent any relocation of the spectrum, and hence the operations of these services have to be protected.


Effective spectrum utilization while ensuring coexistence, especially with passive uses, is critically important and will require substantial innovation in both wireless and sensing technologies. Passive remote sensing services are indispensable in modern society. One important remote sensing application for Earth science and climate studies is soil moisture monitoring, which provides crucial information for agricultural management, forecasting severe weather, floods and droughts, and climate modeling and prediction. Unfortunately, the growth of active wireless systems often increases the radio frequency interference (RFI) experienced by passive systems. At best, RFI may introduce bias in the passive system's measurements; at worst, it may render the passive system completely useless. The goal of this session is to bring communication and remote sensing researchers together to present and discuss recent research results in the coexistence of communication and remote sensing systems, with a special focus on passive systems. RFI detection and mitigation techniques, spectrum sharing, learning-based coexistence approaches, and testbed development are some of the topics considered for the session.
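One widely cited example of the RFI detection techniques mentioned above is the kurtosis test used in microwave radiometry, which flags deviations from the Gaussian statistics of thermal noise; the numpy sketch below is only illustrative (block length and threshold are arbitrary, not operational settings).

```python
import numpy as np

def kurtosis_rfi_flags(samples, block=1024, threshold=0.3):
    """Flag sample blocks whose kurtosis departs from the Gaussian value of 3.

    samples : 1-D array of real-valued pre-detection radiometer samples
    Thermal noise is Gaussian (kurtosis = 3); pulsed or CW interference pushes
    the kurtosis away from 3, so flagged blocks are suspected to contain RFI.
    """
    n_blocks = samples.size // block
    x = samples[: n_blocks * block].reshape(n_blocks, block)
    m = x.mean(axis=1, keepdims=True)
    m2 = ((x - m) ** 2).mean(axis=1)
    m4 = ((x - m) ** 4).mean(axis=1)
    kurt = m4 / (m2 ** 2 + 1e-30)
    return np.abs(kurt - 3.0) > threshold
```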

CCS.12: Radio Frequency Interference in microwave remote sensors
T/D.16: Data Analysis — RFI Detection and Mitigation
Radio Frequency Interference (RFI) has had an increasingly detrimental impact on microwave remote sensing. This happens even in frequency bands exclusively allocated to remote sensing, due to illegal transmitters or out-of-band emissions from adjacent services whose operating power leaks into the remote sensing band. The current demand for spectrum from commercial services, in particular from the International Mobile Telecommunications service, is further increasing the pressure on the frequency bands allocated to the Earth Exploration-Satellite Service (EESS).

Among those, the passive services are the ones at most risk of observing interference due to the critical sensitivity of their sensors. The microwave radiometers used by the EESS passive service receive emissions radiated by natural sources. These emissions are at very low levels compared to those generally handled by other radio communications services; therefore, the passive sensors are generally very susceptible to RFI originated from man-made emitters on the ground, on aircraft, or other spaceborne systems. 

Active instruments are also affected by RFI and need to develop strategies to mitigate the effect of interference. Radar remote sensing instruments have played an increasingly important role in understanding Earth and its dynamic processes. Their operational frequencies are predetermined based on the physical properties that are to be retrieved. Consequently, their ability to operate at alternative frequency bands is limited. However, the commercial user demand for additional radio frequency spectrum has been increasing which has resulted in unfavorable sharing conditions for active remote sensing. 

Remote sensing sensors are increasingly experiencing problems related to interference caused by other services, and this observed trend is expected to continue in the future. However, it must be recognized that mitigating the effect of interference cannot completely eliminate its impact on sensor measurements and the conclusions drawn from those measurements. RFI observed in both passive and active sensors causes information loss, leads to wrong conclusions from undetected corrupted data, and/or reduces the accuracy of the measurements. In some cases, the presence of RFI even jeopardizes the objectives of the mission. RFI is one of the biggest threats to remote sensing missions and needs proper attention in all current and future missions.

The session will present the latest developments in interference detection, geo-location, and mitigation techniques for both passive and active remote sensors, and will report on the status of current and upcoming missions with regard to dealing with Radio Frequency Interference.
CCS.13: Advances in Multimodal Remote Sensing Image Processing and Interpretation
T/D.17: Data Analysis — Data Fusion
Recent advances in sensor and aircraft technologies allow us to acquire huge amounts of remote sensing data. Diverse information on Earth's surface can be derived from these multi-resolution and multimodal data, providing a much more comprehensive interpretation for Earth observation, e.g., spectral information from multi- and hyperspectral images can help to reveal the material composition, elevation information from LiDAR data helps to estimate the height of the observed objects, synthetic aperture radar (SAR) data can measure dielectric properties and the surface roughness, panchromatic data are instead focused on spatial features of the acquired landscape, and so forth.
 
State-of-the-art works have shown that the fusion of these multi-resolution and multimodal images provides better performance than using a single image source alone. However, challenges remain when applying these data to some applications (classification, target detection, geological mapping, etc.). For example, classical issues are related to the misalignment of multimodal images, the presence of clouds or shadows (in particular when optical data are involved), and spectral/spatial differences hampering the subsequent fusion of these data.
 
This invited session will focus on multi-resolution and multimodal image processing and interpretation, such as multimodal image alignment, restoration, sharpening of multi-spectral and hyperspectral images (e.g., pansharpening, hyperspectral pansharpening, and hypersharpening), use of machine learning approaches devoted to several tasks (e.g., feature extraction and classification) exploiting the multimodality of the data, and so forth. We will discuss the latest methods/techniques for multi-resolution and multimodal image processing, as well as how this can benefit our interpretation.
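As one of the simplest sharpening baselines of the kind mentioned above, a Brovey-style pansharpening step is sketched below; it assumes the multispectral image has already been co-registered and resampled to the panchromatic grid, and is meant only as an illustration, not as the session's method of choice.

```python
import numpy as np

def brovey_pansharpen(ms, pan, eps=1e-6):
    """Brovey-style component-substitution pansharpening.

    ms  : (rows, cols, bands) multispectral image resampled to the pan grid
    pan : (rows, cols) higher-resolution panchromatic image
    Each band is rescaled so that the band sum matches the pan intensity.
    """
    intensity = ms.sum(axis=2) + eps
    return ms * (pan / intensity)[..., None]
```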
CCS.14: Data Fusion: The AI Era
T/D.17: Data Analysis — Data Fusion
The continuous success of remote sensing in various applications, including cartography, the production of semantic maps, monitoring, hazard management, and many more, has led to the development and deployment of modern sensors that provide information about various aspects of the state of the Earth. These sensors provide data of multiple modalities, ranging from different types of imagery (e.g., optical, hyperspectral, synthetic aperture radar), over geometric measurements (e.g., LiDAR), to other types of data such as positioning information via GNSS. While industry and research focused on only one such sensor in the early days of remote sensing, modern approaches aim to fuse multiple data sources, as they often provide complementary information. Nowadays, even more data sources are available, such as crowd-sourced photographs, oblique images, and data from social networks, which opens a new category of approaches to the most challenging tasks in Earth monitoring and understanding.

Despite the availability and benefits of multimodal data (which includes multi-sensor, but also multi-frequency and multi-temporal data), the fusion of information from these sources is non-trivial and an ongoing hot research topic. Modern challenges such as big data, as well as modern approaches such as deep learning, have only started to be properly addressed. Data fusion in general, and image fusion in particular, are therefore very contemporary research areas with an extensive need to exchange ideas, discuss open problems, introduce new datasets, and propose new solutions.

The Image Analysis and Data Fusion Technical Committee (IADF-TC) of the Geoscience and Remote Sensing Society focuses exactly on these challenges and aims at fostering connections between people and resources, educating students and professionals, and promoting best practices in data fusion applications. One of the activities of the IADF-TC is the organization of a special session held annually during IGARSS, gathering cutting-edge contributions and covering various issues related to data fusion, such as pan-sharpening, decision fusion, multi-modal data fusion, data assimilation, multi-temporal data analysis, ensemble methods, etc. In addition to these rather traditional topics, modern approaches, including the topics designated as being of special interest for IGARSS 2019 (in particular “Big data and machine learning”), are frequently discussed as well.

The proposed session has a long and successful history within IGARSS, having been held consecutively for more than ten years. Thus, it is a very well established session which has the full attention of senior researchers as well as young scientists. Furthermore, it addresses topics which are of increasing importance within remote sensing and geoscience and which are of interest to a very interdisciplinary audience based on both methods and applications.
CCS.15: IEEE GRSS Data Fusion Contest - Track 1: Building Detection and Roof Type Classification
T/D.17: Data Analysis — Data Fusion
The Image Analysis and Data Fusion Technical Committee (IADF TC) yearly organizes the IEEE GRSS Data Fusion Contest, an open competition based on freely provided multi-source data and aiming at stimulating the development of novel and effective fusion and analysis methodologies for information extraction from remote sensing imagery. Similar to the last few years, the 2023 edition of the Contest is planned to include a competition based on the performance of developed approaches on the data released. The developed algorithms will be ranked based on their performance accuracy and the winners will be awarded during IGARSS 2023, Pasadena, USA.

Following the same approach as in the 2016-2022 editions of the DFC and IGARSS, the IADF TC is proposing the present special session, which aims at a timely presentation of the most effective and novel contributions resulting from the competition. A session is proposed in which the best-ranking submitted papers will be presented, and two slots will be used by the Contest organizers to summarize the outcome of the competition. According to the schedule of the Contest, the session is currently proposed without explicitly mentioning speakers and tentative titles, which will be filled in after the competition is completed. It is worth noting that the corresponding papers would not go through the regular paper submission process but would be reviewed directly, in full paper format, by the Award Committee of the Contest. This process will ensure thorough quality control and consistency with both the timeline of the Contest and the final paper submission deadline of IGARSS 2023.
CCS.16: IEEE GRSS Data Fusion Contest - Track 2: Multi-Task Learning of Joint Building Extraction and Height Estimation
T/D.17: Data Analysis — Data Fusion
The Image Analysis and Data Fusion Technical Committee (IADF TC) yearly organizes the IEEE GRSS Data Fusion Contest, an open competition based on freely provided multi-source data and aiming at stimulating the development of novel and effective fusion and analysis methodologies for information extraction from remote sensing imagery. Similar to the last few years, the 2023 edition of the Contest is planned to include a competition based on the performance of developed approaches on the data released. The developed algorithms will be ranked based on their performance accuracy and the winners will be awarded during IGARSS 2023, Pasadena, USA.

Following the same approach as in the 2016-2022 editions of the DFC and IGARSS, the IADF TC is proposing the present special session, which aims at a timely presentation of the most effective and novel contributions resulting from the competition. A session is proposed in which the best-ranking submitted papers will be presented, and two slots will be used by the Contest organizers to summarize the outcome of the competition. According to the schedule of the Contest, the session is currently proposed without explicitly mentioning speakers and tentative titles, which will be filled in after the competition is completed. It is worth noting that the corresponding papers would not go through the regular paper submission process but would be reviewed directly, in full paper format, by the Award Committee of the Contest. This process will ensure thorough quality control and consistency with both the timeline of the Contest and the final paper submission deadline of IGARSS 2023.
CCS.17: Modern Approaches to Multi-Dimensional Data Fusion
T/D.17: Data Analysis — Data Fusion
Studies on Earth processes are benefiting from the rapid development of new remote sensing methodologies. Data obtained from a multitude of techniques, such as Synthetic Aperture Radar (SAR) and optical images from different types of sensors, can be collected and conveniently analyzed to create research opportunities. As more data becomes available, new processing approaches are required to combine (merge) diverse datasets, improve spatial image resolution, analyze sparse observations, and study temporal changes. 

The main challenges for data fusion algorithms are the nonlinear dependencies among remotely sensed variables, the effects of geophysical parameters, and necessary assumptions such as coherent reflections. Additionally, verification datasets, provided mostly by in-situ sensors, present inherent spatial irregularity, given that there are constraints on where they can be placed to generate measurements. For the aforementioned reasons, new data fusion algorithms that allow the integration of diverse datasets from multiple space-borne instruments, terrain attributes, overlapping imagery, and irregular in-situ data are necessary.

New data fusion algorithms should aim to satisfy the spatial and temporal resolution requirements of novel remote sensing applications. For instance, multitemporal analysis of remote sensing images acquired over the same geographical area but at different times will enable scientists to identify heterogeneous changes on the Earth's surface. This type of temporal analysis will be possible using data fusion algorithms that take into consideration satellite observations and their nonlinear relations with dynamic land parameters. Excluding the temporal variations of land parameters could lead to ambiguity in retrievals and, therefore, to land geophysical products with larger errors.

For spatial-data optimization, fusion methods that can increase the resolution of images while incorporating auxiliary terrain information are needed; these merging practices can improve sensitivity to topography, surface roughness, and vegetation changes. Furthermore, merging approaches that can complement geostatistical methods could be explored to incorporate auxiliary information from multiple diverse products while being robust to outlier deviations.

Data fusion techniques have aided researchers in the remote sensing field to improve the understanding of Earth processes that cannot be modeled by simple relationships among sensed parameters. Although conventional signal processing and geostatistical methods have been used to study various datasets, new fusion algorithms are needed to combine irregular datasets that present different spatio-temporal resolutions. Modern fusion methods should aim to improve spatial and temporal analysis while incorporating relationships between sensed variables and terrain information.
CCS.18: Multi Sensor Data Fusion with AI for precision Land cover Information Retrieval
T/D.17: Data Analysis — Data Fusion
Nowadays, huge multi-sensor data sets are available. Sensors such as optical, microwave, and thermal often provide complementary information that is quite useful for land cover monitoring. Once such multi-sensor data are acquired, it is important to process them properly to obtain the required information; data fusion techniques (pixel-based, information-based, etc.) can help here. The development of a land cover information retrieval system with a good image fusion paradigm aims at fusing information (primarily processed satellite/drone imagery or video sequences) for maximum extraction of useful features and information. Hence, data fusion is the process of integrating multiple data sources to produce more consistent, accurate, and functional information than that provided by any individual data source. The type of fusion, along with its implementation, way of execution, and choice of technique, determines the accuracy and robustness of the application being implemented. Challenges arising during fusion, such as temporal irregularities and spatial and spectral imbalance, should be taken into consideration. Accurate and precise detection of a camouflaged target is based primarily on spectral imaging techniques, while preserving the spectral and spatial properties of the imagery. Artificial Intelligence (AI) can be aptly described as “machines that can perform tasks which are characteristic of human intelligence.” The availability of huge data sets, scattered information, and big data, along with increased processing agility, has led to the use of AI for better results in pattern recognition (PR).
The ability to retrieve target information depends on the type of sensor, the multi-sensor fusion methodology, the retrieval technique/algorithm, and the data processing paradigm. The main motivation of an information retrieval system is primarily the retrieval of sub-pixel information, where challenges arise due to issues such as background complexity; targets with nearly the same spectral signature as the surrounding ground targets and background; pre-existing illumination and noise issues; spectral variability; errors such as prediction errors, false detection rates, and data redundancy; the presence of abrupt motion; the level of training needed in ANNs; and sub-pixel and mixed-pixel information. Using imagery at different sensor levels and fusing it is encouraged because of increased interpretation capabilities, better resolution, the availability of abundant features such as distinct textures with lower noise in the fused image while preserving the spatial resolution, improved classification with better accuracy, and overall feature enhancement. Thus, the need of the hour is the development of a rational, strategic, and feasible land cover information retrieval technique that identifies sub-pixel/mixed-pixel information accurately and functions robustly on different sensor data sets, which will be helpful for precision monitoring of crops, land cover, area estimation, etc.
Topics covered include, but are not limited to, the following:
1.	AI-based data fusion for sub-pixel and mixed-pixel information retrieval
2.	Land cover monitoring with data fusion
3.	Crop classification with AI-based data fusion, and crop health retrieval
4.	Application of feature extraction/template matching with AI for crop monitoring with multi-sensor data
5.	Sub-pixel mixing models and use of AI for land cover identification (a minimal unmixing sketch follows this list)
6.	Application of coarse-resolution data with fusion techniques for soil moisture and crop health retrieval
7.	A Decision Support System (DSS) for various agricultural activities, such as pest control planning, irrigation planning, or disease mitigation, realised with multi-sensor data and the application of AI
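To make the sub-pixel/mixed-pixel theme concrete, the sketch below performs linear spectral unmixing of a single pixel with non-negative abundances using scipy; the endmember matrix, normalisation step, and variable names are illustrative assumptions, not a prescribed workflow.

```python
import numpy as np
from scipy.optimize import nnls

def unmix_pixel(spectrum, endmembers):
    """Linear mixed-pixel unmixing with non-negative abundances.

    spectrum   : (B,) observed pixel reflectance
    endmembers : (B, K) spectra of K candidate land cover classes
    Abundances are renormalised to sum to one purely for illustration.
    """
    abundances, _ = nnls(endmembers, spectrum)
    total = abundances.sum()
    return abundances / total if total > 0 else abundances
```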
CCS.19: Multisensor data fusion and geospatial data intelligence
T/D.17: Data Analysis — Data Fusion
Various Earth observation platforms (e.g., satellites, aircraft, and drones) and types of sensors (e.g., LiDAR, optical, SAR, hyperspectral), along with in-situ sensors, have increased noticeably during the last decade. As a result, massive amounts of rich data are systematically produced. This increase in the number and heterogeneity of Earth observation data sources presents both opportunities and challenges. Opportunities result from the refinement of sensors and sensing methods, the ability to capture increasingly granular data via those improved sensors and methods, and the improved data storage, manipulation, analysis, and interpretive capacity of modern computational hardware and software, cloud computing, and the IoT. Challenges result from the difficulty of accurately fusing data so that a coherent, comprehensive, fully detailed, and integrated understanding of multi-sensor data can be obtained. This session brings together state-of-the-art research and development on AI algorithms, techniques, and software tools to address the above-mentioned challenges and to better exploit spatial and temporal features and dependencies in RS data. The focus of the session will be on data-driven models (such as CNNs, GANs, RNNs, etc.) that are highly adaptable to changes in large remote sensing datasets and can find unexpected patterns in multi-source and multi-temporal data.
CCS.20: Advanced Data Management Technologies for Remote Sensing Data
T/A.19: AI and Big Data — Data Management Systems and Computing Platforms in Remote Sensing
In recent years, the big evolution in remote sensing data, due to huge advances in both satellite and instrument technologies, has resulted in a 58.6 PB data archive with more than 2 billion data products distributed (and continuing to grow quickly). This evolution exposes the Earth Science community to more research opportunities as well as huge challenges, including but not limited to the management of petabytes of data, cloud data management, cloud computing, data discovery, data access control, and the extraction of insight and knowledge from vast data stores. The aims of this session on Advanced Data Management Technologies for Remote Sensing Data are to promote the effective use of appropriate data management technologies and to increase the value of remote sensing data. Submissions to this session should describe consolidated requirements, novel research topics, use cases of emerging technologies, and application trends. Topics may include, but are not limited to:

- Spatio-temporal database solutions for Big Remote Sensing Data
- Graph databases, knowledge graph, graph visualization, web semantics tools/technologies for managing and discovering data and metadata in Remote Sensing
- Novel database solutions for Analysis-Ready Data (ARD)
- Using Big Data management and analysis tools for Remote Sensing data management and analysis
- (Near-)Real-time intelligence from Remote Sensing platforms

The session also welcomes submissions showcasing use cases that describe adaptations and enhancements of data management technologies for use in real-world remote-sensing applications.
CCS.21: Cloud-based platform environments for Earth Observation: enabling Science & Applications
T/A.19: AI and Big Data — Data Management Systems and Computing Platforms in Remote Sensing
Cloud-based environments for the processing and analytics of Earth Observation data have become an essential solution for dealing with the unprecedented data volumes available today.

Over the past two decades, many such cloud-based platforms for EO have been developed. They differ, for example, in the ways data is stored and accessed, in how users interface with the platforms, and in the analytic capabilities they enable. At the same time, some technology components are emerging (e.g., STAC, Xarray, Dask, openEO) that have been taken up in several solutions and have seen widespread community adoption.
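To illustrate one of the community components mentioned above, the sketch below queries a STAC catalog with pystac-client; the catalog URL, collection name, bounding box, and dates are illustrative assumptions rather than an endorsement of any particular platform.

```python
from pystac_client import Client

# Open a public STAC API endpoint and search for Sentinel-2 scenes over an
# example area and month; every parameter here is an illustrative placeholder.
client = Client.open("https://earth-search.aws.element84.com/v1")
search = client.search(
    collections=["sentinel-2-l2a"],
    bbox=[-118.4, 34.0, -118.0, 34.3],
    datetime="2023-06-01/2023-06-30",
    max_items=10,
)
for item in search.items():
    print(item.id, item.datetime)
```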

This session showcases a selection of EO platform environments from Europe and the US. It will highlight different concepts in terms of architectures, user communities and enabled analytics. 

CCS.22: Continental to global mapping applications enabled by cloud-based platform environments.
T/A.19: AI and Big Data — Data Management Systems and Computing Platforms in Remote Sensing
Cloud-based platform environments for Earth Observation (EO) enable the scaling of value-adding algorithms and applications over large geographic extents. Over the past decade, several global and continental higher-level products have been produced from EO data with high spatial resolution (e.g., global forest changes, global cropland extents). Such large-scale products can be of considerable value to the Earth Science community. However, the required workflow implementation is challenging from a scientific and technological perspective. Moreover, quality assurance and validation can prove to be extremely challenging. This session will highlight a few recent achievements in terms of continental-scale applications and mapping efforts and the cloud-based environments and architectures that have enabled them.
CCS.23: Earth Observation Dashboard
T/A.19: AI and Big Data — Data Management Systems and Computing Platforms in Remote Sensing
In May 2020, NASA, ESA, and JAXA initiated a collaborative effort aiming at the establishment of the COVID-19 Earth Observation Dashboard and later, in March 2021, extended its scope to global environmental change.

Noting the increasing use of the joint Dashboard and continuous user requests for more information, NASA, ESA, and JAXA will continue through June 2024 to advance their joint work on the global understanding of the environment as it changes with human activities. This decision continues the collaboration on the analysis of the three agencies' datasets and the open sharing of data, indicators, analytical tools, and stories sustained by our scientific knowledge and expertise, to provide a precise, objective, and comprehensive view of our planet as an easy-to-use resource for the public, scientists, decision-makers, and people around the world.
The dashboard provides an easy-to-use resource for all kinds of users, from scientists to decision-makers, including people not familiar with satellites. Based on accurate remote sensing observations, it showcases examples of global environmental change across 7 themes: Atmosphere, Oceans, Biomass, Cryosphere, Agriculture, Covid-19, and Economy. The dashboard offers a precise, objective, and factual view of our planet, without any artifacts. You can explore countries and regions around the world to see how the indicators in specific locations have changed over time.

ESA, JAXA, and NASA will continue to enhance this dashboard as new data becomes available. This session explores this EO dashboard architecture, function, examples of thematic content through storytelling, and its utility amongst the broader EO and Data Science community. 
CCS.24: GRSS ESI/HDCRS - Harnessing the Power of Quantum Computing for Machine Learning
T/A.19: AI and Big Data — Data Management Systems and Computing Platforms in Remote Sensing
The last few years have seen many advances in quantum resources that can be applied to Earth Observation, especially in quantum communications and quantum sensing. Although the development of quantum computing systems has been slower and is not as mature, quantum computing holds great promise for advancing artificial intelligence, especially Machine Learning.

These advances will be essential for speeding up the processing, analysis and ingestion of data into science and human models as well as for running surrogate models that will be the core of what-if investigations in future Digital Twin systems, and therefore will support Sustainable Development Goals such as "Climate Action”, and also the understanding of high complexity planetary observations.
 
This session will explore the development of novel Quantum Machine Learning frameworks and algorithms for processing and analyzing Earth observing data. This includes quantum-assisted and quantum-inspired algorithms as well as theoretical analyses, simulations and preliminary results obtained on actual quantum computing systems.
 
Papers may address the main topics of quantum algorithm design, such as but not limited to: classical data embedding into quantum states, quantum state transformations, quantum circuit design, quantum state measurement, quantum machine learning, or neural networks. If not theoretical, the methods are expected to be implemented on actual quantum computers, such as the D-Wave quantum annealer or IBM systems, and the results should be benchmarked against their “classical” counterparts. Applications dedicated to the special interest areas of Earth Observation and Remote Sensing of Planetary and other Celestial Bodies are very much welcome.
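As a minimal sketch of the first topic above, classical data embedding into quantum states, the Qiskit snippet below angle-encodes a two-feature vector and applies one entangling gate before exact simulation; the feature values, qubit count, and gate choices are illustrative assumptions only.

```python
import numpy as np
from qiskit import QuantumCircuit
from qiskit.quantum_info import Statevector

features = np.array([0.3, 0.8])           # e.g., two normalised band reflectances

qc = QuantumCircuit(2)
qc.ry(np.pi * features[0], 0)              # encode feature 0 as a rotation angle
qc.ry(np.pi * features[1], 1)              # encode feature 1 as a rotation angle
qc.cx(0, 1)                                # entangle the two qubits

state = Statevector.from_instruction(qc)   # exact simulation of the small circuit
print(state.probabilities_dict())          # basis-state probabilities
```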

The organization of this special session is one of the activities promoted by the IEEE GRSS High Performance and Disruptive Computing in Remote Sensing (HDCRS) working group. HDCRS is part of the IEEE GRSS Earth Science Informatics (ESI) Technical Committee. More information can be found at https://www.hdc-rs.com/.
CCS.25: GRSS ESI/HDCRS - Quantum computing next generation HPC
T/A.19: AI and Big Data — Data Management Systems and Computing Platforms in Remote Sensing
Recent High Performance Computing (HPC) architectures and parallel programming models have been influenced by the rapid advancement of Artificial Intelligence (AI) and hardware accelerators (e.g., GPUs, FPGAs). The classical workloads that run on HPC systems (e.g., numerical methods based on physical laws) are becoming more heterogeneous, a trend currently amplified by the rapid development of novel quantum technologies, algorithms, and applications. The use of quantum computers requires new scalable software engineering techniques to be integrated with “classical” HPC at the hardware and software levels. Quantum programming languages control specialized physical devices; coding and code verification form a new field tightly connected to the nature of the data and application. The design of quantum circuits is supported by “classical” programming, simulators, or Python, and results are obtained using either simulators that run on the user's own device or quantum software development kits. These tools need adaptation to the Geoscience and Remote Sensing Society (GRSS) domains and the particular quantum complexity classes involved. The session calls for specific topics at the convergence of HPC and Quantum Computing (QC) impacting applications in the GRSS areas. GRSS applications are challenged by big data from Earth observation, geological, environmental, or other related heterogeneous data. To date, state-of-the-art methods are implemented in data centers dedicated to remote sensing, climate, or ocean observations.
CCS.26: GRSS ESI/HDCRS - Scalable Parallel Computing for Remote Sensing
T/A.19: AI and Big Data — Data Management Systems and Computing Platforms in Remote Sensing
Recent advances in remote sensors with higher spectral, spatial, and temporal resolutions have significantly increased data volumes, which poses a challenge for processing and analysing the resulting massive data in a timely fashion to support practical applications. Meanwhile, the development of computationally demanding algorithms and methods (e.g., Machine Learning (ML) and Deep Learning (DL) techniques) demands parallel implementations with high scalability. Therefore, parallel and scalable computing architectures and algorithms have become indispensable tools for dealing with the challenges posed by geoscience and remote sensing applications. In recent years, high-performance and distributed computing have advanced rapidly in terms of hardware architectures and software. For instance, the popular graphics processing unit (GPU) has evolved into a highly parallel many-core processor with tremendous computing power and high memory bandwidth. Moreover, recent High Performance Computing (HPC) architectures and parallel programming have been influenced by the rapid advancement of DL and hardware accelerators such as modern GPUs. This community session on scalable parallel computing for remote sensing aims at gathering a collection of papers in the most advanced and trending areas interested in exploiting new high-performance and distributed computing technologies and algorithms to expedite the processing and analysis of big remote sensing data.
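As a small illustration of the scalable processing pattern discussed above, the Dask sketch below computes per-band means over a chunked image stack, letting the scheduler spread the work across local cores or a distributed cluster; the array shape and chunk sizes are illustrative assumptions.

```python
import dask.array as da

# Lazily build a chunked (bands, rows, cols) stack; nothing is computed yet.
stack = da.random.random((16, 4096, 4096), chunks=(1, 1024, 1024))
band_means = stack.mean(axis=(1, 2))   # builds a task graph over the chunks
print(band_means.compute())            # triggers parallel execution
```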
CCS.27: Large-scale earth observation data management and feature extraction from remote sensing images
T/A.19: AI and Big Data — Data Management Systems and Computing Platforms in Remote Sensing
We have now entered the era of high-resolution remote sensing, which can be combined with the capacity of satellites to perform numerous daily revisits and cover the entire world. The ever-increasing use of multispectral, hyperspectral, and all-weather sensors has exponentially increased the complexity of remotely sensed images for users of downstream applications. The unprecedented expansion of the scope of remote sensing applications poses difficulties in collecting, analysing, and interpreting the data in order to derive relevant conclusions. High Performance Computing (HPC), which is based on clusters of computers, is still widely used but also presents a number of obstacles. Standard big data approaches differ significantly when applied to Earth observation data because of its vast geographical spread, increased dimensionality, and associated metadata. In the not too distant future, the requirement for real-time processing and temporal analysis will make robust data management and computing systems a necessity.

Big data is a vast collection of data sets whose volume and velocity are at a scale far greater than the most advanced systems can handle, revolutionizing the way problems are solved. Big data in the context of remote sensing refers not only to the volume and velocity of the data, which exceed the capacity for storage and computation, but also to the diversity and complexity of the remote sensing data. As of 2020, approximately 900 Earth observation satellites had been sent into orbit, about 200 of which carry multispectral and multitemporal sensors, and this number is anticipated to rise in the next decade. Accessing and making use of remote sensing data is made much more difficult by the wide variety of sensors and data types it contains and by the wide variety of application domains in which it is used.

The goal of this session is to highlight the difficulties and potential benefits associated with remote sensing big data management systems and computing platforms. Participants will get an overview of massive EO data management platforms such as Sentinel Hub, Open Data Cube, Google Earth Engine, openEO, and a few other alternatives. The scientific community has clearly benefited from such computing platforms in the recent past; whether industry, and through it end users, will benefit to the same extent remains debatable. A discussion from the end-user perspective will give an overview of the scope, advantages, and limitations of the various platforms. The final part of the session will discuss cloud-native approaches to remote sensing data management at scale combined with computing facilities. It will remain difficult to build a platform that completely satisfies the capability requirements of all application domains.
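To make the notion of such computing platforms concrete, the minimal sketch below uses the public Google Earth Engine Python API to build a cloud-side median composite and an NDVI layer without downloading any imagery; the collection identifier, dates, and location are only examples, and an authenticated account is assumed.

    import ee

    ee.Initialize()  # assumes an authenticated Earth Engine account

    # Sentinel-2 surface reflectance over an example location and period
    point = ee.Geometry.Point(13.4, 52.5)
    collection = (ee.ImageCollection('COPERNICUS/S2_SR')
                  .filterBounds(point)
                  .filterDate('2022-06-01', '2022-09-01'))

    # Server-side median composite and NDVI; no pixels are transferred to the user
    composite = collection.median()
    ndvi = composite.normalizedDifference(['B8', 'B4']).rename('NDVI')

    # Only a small summary statistic is pulled back to the client
    mean_ndvi = ndvi.reduceRegion(ee.Reducer.mean(), point.buffer(1000), 10)
    print(mean_ndvi.getInfo())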
Show/Hide Description
CCS.28: Open Science platforms and the role of remote sensing
T/A.19: AI and Big Data — Data Management Systems and Computing Platforms in Remote Sensing
Across a wide spectrum of organizations, Open Science is recognized as an important catalyst for innovation. Based on core principles that include accessibility, reproducibility, inclusiveness, and transparency, Open Science enabled by technology and sound practices creates the conditions for better, faster scientific research results as well as higher acceptance and impact on society. Focusing on the disciplines of Geosciences and Remote Sensing, Open Science is presently found at varying degrees of maturity and applicability. In the general realm of Earth Observation, it has so far been implemented mostly at a project level, in the context of tensions between transparency, privacy, and proprietary interests. However, dedicated Open Science initiatives and technology developments are beginning to emerge that address these tensions and provide ways forward.
Addressing global challenges and answering complex research questions about the Earth System rely on scientific communities working across disciplinary and institutional boundaries, supported by effective access to inter-agency, cross-community Earth Observation Science data, knowledge, and computing infrastructures. Open Science can bring enormous value to these endeavors by offering interoperable systems working across domains with heterogeneous data that “make the scientific process as transparent (or open) as possible by making all elements of a claimed discovery readily accessible, enabling results to be repeated and validated” [NASA Earth]. 
Earthquakes, droughts, floods, forest fires: The earth's surface is shaped by an interaction of forces that is impressively revealed especially during such events. Understanding these complex interactions and the implications for people living today and in the future requires an interdisciplinary approach across existing institutional and administrative boundaries. Understanding the role of  geospatial processes in these interactions requires the integration of Earth Observation results with data, information, and approaches from other communities. The collaboration of the Earth Observation community with social scientists, statisticians, and economists provides the basis for understanding geospheric processes in the context of a growing world population and a globally organized economy. 
This session will explore the latest developments in the field of Open Science. It will discuss the particular role of remote sensing and its integration into a multi-stakeholder environment. 
Show/Hide Description
CCS.29: Integration of remote sensing with in-situ data at different scales
T/A.20: AI and Big Data — IoT in Geoscience and Remote Sensing
In any knowledge application domain such as urban digital twin modeling, climate monitoring, agricultural production prediction, or environmental impact assessment, a key challenge is to develop a shared data infrastructure that integrates remote sensing data with IoT and reporting data to support critical decisions and actions. Of particular interest in this context is the role of scale. How can observations from the small-scale satellite realm be transferred to larger scales, such as those required when considering cities or parts of them? This session will analyze the challenges of integrating remote sensing and in-situ data that are collected at different scales to support applications at other specific scales. It will investigate aspects such as the transferability of results from one location to another, Analysis Ready Data and Decision Ready Information products that result from IoT-based in-situ measurements and remote sensing data, and the general production of knowledge from multi-sensor data fusion. The session will investigate to what extent it is possible to form a “single information space” in a remote sensing and in-situ data realm that is heterogeneous in space, time, and observed phenomena.
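As one small, hypothetical illustration of the scale problem described above, the sketch below aggregates scattered in-situ point measurements onto the coarser grid of a satellite product so that the two can be compared cell by cell; the variable names, coordinates, and grid spacing are all assumptions.

    import numpy as np
    import pandas as pd

    # Hypothetical in-situ sensor readings (e.g., IoT soil-moisture probes)
    insitu = pd.DataFrame({
        'lon': np.random.uniform(10.0, 10.5, 200),
        'lat': np.random.uniform(50.0, 50.5, 200),
        'soil_moisture': np.random.uniform(0.1, 0.4, 200),
    })

    cell_size = 0.05  # assumed satellite grid spacing in degrees

    # Assign each point to the coarse grid cell it falls into
    insitu['col'] = ((insitu['lon'] - 10.0) // cell_size).astype(int)
    insitu['row'] = ((insitu['lat'] - 50.0) // cell_size).astype(int)

    # Upscale: mean and count of in-situ values per satellite grid cell
    upscaled = (insitu.groupby(['row', 'col'])['soil_moisture']
                .agg(['mean', 'count'])
                .reset_index())
    print(upscaled.head())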
Show/Hide Description
CCS.30: Advances in Spatio-Temporal Datacube Management, Visualization, Analytics and AI (IEEE ESI Committee session)
T/A.22: AI and Big Data — Data Analytics and AI Techniques in Remote Sensing
Datacubes are an accepted cornerstone for providing spatio-temporal Big Earth Data in a more analysis-ready and user-friendly way.
Homogenization of pixels and voxels into topical multi-dimensional datacubes reduces the zillions of files to a smaller set of cubes, enables seamless navigation through space-time, and generally represents a more human-centric data paradigm. Standards (such as the OGC coverage data and service standards and ISO 19123-x) are in place or converging, and large-scale services are known to work effectively on multi-Petabyte, planetary-scale federations. Machine Learning is now teaming up with datacubes to offer new insights.
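A minimal, hypothetical example of the "seamless navigation through space-time" that a datacube offers is sketched below with xarray: a single labelled cube is sliced by coordinate values rather than by file names. The cube contents, variable name, and extents are placeholders standing in for a real coverage service.

    import numpy as np
    import pandas as pd
    import xarray as xr

    # Hypothetical 3-D datacube (time, lat, lon) standing in for a real coverage
    time = pd.date_range('2022-01-01', periods=36, freq='10D')
    lat = np.arange(45.0, 46.0, 0.1)
    lon = np.arange(7.0, 8.0, 0.1)
    cube = xr.DataArray(
        np.random.rand(len(time), len(lat), len(lon)),
        coords={'time': time, 'lat': lat, 'lon': lon},
        dims=('time', 'lat', 'lon'),
        name='ndvi',
    )

    # Navigate by coordinate values, not by files:
    # a spatio-temporal subset and a per-pixel temporal mean in two lines
    subset = cube.sel(time=slice('2022-03-01', '2022-06-30'),
                      lat=slice(45.2, 45.6), lon=slice(7.2, 7.6))
    temporal_mean = subset.mean(dim='time')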

So far, so good - but at the same time there remain substantial research questions. The nature of both time and height/depth coordinate reference systems is not yet understood to the extent necessary for the services; multi-temporal ML has not been addressed sufficiently; and services still require too much technical knowledge which should be hidden behind the curtain of services - just to name a few open issues.

In this session we collect the state of the art and present advances as well as open questions, thereby stimulating discussion and pointing (not only young) researchers to promising research fields. All session organisers and speakers bring long-standing Big Earth Data experience from academia, industry and public services.
Show/Hide Description
CCS.31: AI-based Methods for EO data compression
T/A.22: AI and Big Data — Data Analytics and AI Techniques in Remote Sensing
Earth Observation (EO) sensors are acquiring Big Data volumes at very high data rates (e.g., the Copernicus missions produce 12 TB of data per day). In particular, next generation SAR systems will offer a quantum leap in performance using large bandwidths and digital beam forming techniques in combination with multiple acquisition channels. These innovative spaceborne radar techniques have been introduced to overcome the limitations imposed by classical SAR imaging for the acquisition of wide swaths and, at the same time, of finer resolutions, and they are currently being widely applied in studies, technology developments and even mission concepts conducted at various space agencies and industries. Such significant developments in terms of system capabilities are clearly associated with the generation of large volumes of data to be gathered in a shorter time interval, which, in turn, implies harder requirements for the onboard memory and downlink capacity of the system. Similar considerations can be drawn with respect to optical sensors, such as multispectral and hyperspectral ones, which nowadays provide large amounts of images at high resolution. Therefore, the proper quantization/compression of the acquired data prior to downlink to the ground is of utmost importance, as it determines, on the one hand, the amount of onboard data and, on the other hand, directly affects the quality of the generated EO products.

EO data show unique features posing important challenges and opportunities, such as learning the data models for optimal compression in order to preserve data quality and to avoid artefacts hindering further analysis. For instance, a more efficient data representation can be achieved by searching for the best quantizer and by ad-hoc tuning of the inner quantization parameters, based on the peculiarities of the imaged scene (e.g., in radar imaging these are characterized by the reflectivity, polarization and incidence angle, but also by the specific system architecture, which may offer opportunities for efficient data quantization; multispectral data, by contrast, are characterized by the land cover or the presence of clouds). Additionally, onboard preprocessing of the acquired data into a sparse domain (e.g., range compression in the case of SAR data) can also lead to a more compact data representation.
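The idea of "searching for the best quantizer" can be illustrated, in a deliberately simplified form, by the sketch below: a uniform quantizer is swept over candidate bit depths and clipping thresholds, and the setting maximising the signal-to-quantization-noise ratio (SQNR) on a block of synthetic samples is retained. This is only a toy stand-in for the learned, scene-adaptive schemes the session targets.

    import numpy as np

    rng = np.random.default_rng(0)
    block = rng.normal(0.0, 1.0, 4096)  # synthetic stand-in for a block of raw samples

    def uniform_quantize(x, n_bits, clip):
        """Clip to [-clip, clip] and quantize with 2**n_bits uniform levels."""
        levels = 2 ** n_bits
        step = 2 * clip / levels
        xc = np.clip(x, -clip, clip - 1e-12)
        return (np.floor((xc + clip) / step) + 0.5) * step - clip

    best = None
    for n_bits in (2, 3, 4):
        for clip in (1.5, 2.0, 2.5, 3.0):
            xq = uniform_quantize(block, n_bits, clip)
            sqnr_db = 10 * np.log10(np.mean(block**2) / np.mean((block - xq)**2))
            if best is None or sqnr_db > best[0]:
                best = (sqnr_db, n_bits, clip)

    print('best SQNR %.1f dB with %d bits, clip %.1f' % best)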

Artificial Intelligence (AI) represents one of the most promising approaches in the remote sensing community, enabling scalable exploration of big data and bringing new insights on information retrieval solutions. In the past three decades the EO data compression field has progressed slowly, but the recent advances in AI are now opening the perspective of a paradigm change in data compression. AI algorithms and onboard processing could be exploited to generate or discover novel and more compact data representations. Thus, with this session we would like to bring to the field new methodologies for both lossless and lossy compression of remote sensing data. Several data compression topics are welcomed in the session, including (but not limited to): Kolmogorov complexity-based algorithms, source coding with side information, neural data compression, compression of correlated sources, integrated classification and compression, semantic coding, big data compression and application-oriented compression.
Show/Hide Description
CCS.32: Applied Deep Learning Applications in Remote Sensing Data Analysis
T/A.22: AI and Big Data — Data Analytics and AI Techniques in Remote Sensing
Applications of remote sensing data have a great impact on climate change, environmental, commerce and energy strategies. Machine learning technologies have made great achievements in remote sensing data processing in recent years. In the meanwhile, the continuously increasing capability of sensors greatly helps in acquiring high-resolution remote sensing images, spectrally, spatially and temporally. Nowadays, deep learning techniques have received unprecedented popularity, and researchers are very interested in discussing whether such technologies can be practically applied to remote sensing applications. In this regard, processing large remote sensing data effectively and efficiently is a challenging task. Therefore, this session aims to present emerging technologies related to deep learning and artificial intelligence that may have a significant impact on remote sensing applications.
Show/Hide Description
CCS.33: Big Data and AI Technologies for Earth System Digital Twins
T/A.22: AI and Big Data — Data Analytics and AI Techniques in Remote Sensing
Digital Twins of Earth systems are being developed that will help us understand the complex interconnections among Earth systems and their impacts. A digital twin is a “digital replica” that can tell us about the current state of Earth systems, forecast how they will evolve in the future, and conduct hypothetical “what-if” scenarios to understand their impact under different assumptions. Earth System digital twins can also consider information or digital twins about human systems, such as power, cities, agriculture and infrastructure to better understand impacts and interactions.

Big Data and AI technologies will play a significant role in realizing these capabilities. Digital twins of interconnected Earth systems involve complex models with huge data volumes and a wide variety of data sources. These systems need to run efficiently at scale, fuse and assimilate a wide variety of data, and support analysis and visualization to understand the interactions and impacts.
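One way such complex models can be run efficiently at scale is through machine-learning surrogates. The toy sketch below, purely illustrative, fits a small regressor to emulate an invented "expensive" forward model so that many what-if scenarios can be screened cheaply; the forward model and parameter ranges are assumptions, not any real Earth-system simulator.

    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    rng = np.random.default_rng(1)

    def expensive_forward_model(params):
        """Invented stand-in for a costly Earth-system simulation."""
        x1, x2 = params[:, 0], params[:, 1]
        return np.sin(3 * x1) + 0.5 * x2**2 + 0.1 * rng.normal(size=len(x1))

    # A limited budget of full-model runs provides the training set
    train_params = rng.uniform(-1, 1, size=(300, 2))
    train_outputs = expensive_forward_model(train_params)

    surrogate = RandomForestRegressor(n_estimators=200).fit(train_params, train_outputs)

    # The surrogate can now screen many 'what-if' scenarios almost for free
    scenarios = rng.uniform(-1, 1, size=(100000, 2))
    predicted_impacts = surrogate.predict(scenarios)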

This session will explore AI and Big Data technologies for Earth system Digital Twins, including:
•	Big data technologies for running digital twins efficiently at scale
•	ML surrogate modeling for Earth System Digital Twins
•	AI/ML technologies for prediction and what-if projection from Digital Twins
•	Data fusion and assimilation of heterogenous data at multiple scales
•	Digital twin visualizations to understand interacting Earth systems and their impacts
•	Earth system digital twins enabled by AI/ML and Big Data technologies
Show/Hide Description
CCS.34: Data- and Model-Federation for Large-Scale Earth Observation Analytics
T/A.22: AI and Big Data — Data Analytics and AI Techniques in Remote Sensing
Geospatial data easily accumulates to terabyte-scale volumes per day. Managing such amounts of information is expensive, and the data are typically spread across different locations and hosted by various organisations.
Most state-of-the-art algorithms that generate scientific and business insights, however, require all data to be co-located. 
“Data gravity” characterises this challenge of physically moving data into a single High Performance Computing center for fusion. 
To overcome this constraint on large geospatial data, there are two distinct alternatives: (1) Distributed data must be represented to and managed for the algorithms in a unified manner, i.e. through data federation; or (2) Novel algorithms train on distributed data at “the edge”, locally, only  exchanging small amounts of model (meta-)parameters between locations, i.e. implementing model federation.
Specifically, the latter avoids the costly and time-consuming transfer of raw data from one location to another. This holds the promise of creating time-critical insights in a sustainable manner, even on a potentially heterogeneous network of resource-constrained nodes, such as a network of orbiting satellites.
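A minimal sketch of the model-federation idea (not any specific framework) is given below: each node trains on its own local data, and only the small model parameter vectors, never the raw observations, are exchanged and averaged. The data, model, and learning-rate settings are synthetic placeholders.

    import numpy as np

    rng = np.random.default_rng(2)

    def local_training(weights, local_X, local_y, lr=0.1, epochs=50):
        """Plain least-squares gradient descent on one node's private data."""
        w = weights.copy()
        for _ in range(epochs):
            grad = 2 * local_X.T @ (local_X @ w - local_y) / len(local_y)
            w -= lr * grad
        return w

    # Three nodes, each holding data that never leaves its location
    true_w = np.array([1.0, -2.0, 0.5, 0.0, 3.0])
    nodes = []
    for _ in range(3):
        X = rng.normal(size=(200, 5))
        y = X @ true_w + 0.1 * rng.normal(size=200)
        nodes.append((X, y))

    global_w = np.zeros(5)
    for communication_round in range(10):
        # Only the (meta-)parameters travel over the network
        local_updates = [local_training(global_w, X, y) for X, y in nodes]
        global_w = np.mean(local_updates, axis=0)  # federated averaging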

In this session we will discuss schemes and technologies that facilitate unified views of data and models at the user interface, while underneath, federation and distributed algorithms and data technologies manage:

•	data storage and transport, 
•	data-access schemes, 
•	privacy and authentication, 
•	model training and execution (inference)
•	model distribution and unification

Motivation:	
Societal challenges arising from natural resource management and increasingly severe natural hazards require Earth Observation and Geoscience data to be made available for scalable analytics. While machine-learning benchmark datasets can be handled by current cloud data centers for experimentation at scale, the amount of data needed in Geoscience overwhelms existing schemes and infrastructures. We invite the large-scale Earth observation analytics community to join forces in discussing, conceptualising, and implementing open-science, open-source distributed compute schemes. Federated infrastructures are key to building scalable machine learning models for a series of global challenges ahead.
Show/Hide Description
CCS.35: Datasets and Benchmarking in Remote Sensing
T/A.22: AI and Big Data — Data Analytics and AI Techniques in Remote Sensing
As in most research fields, deep learning has significantly impacted remote sensing (RS). However, developing well-generalizing deep learning techniques for extracting geoinformation from remote sensing imagery has traditionally depended on the availability of large-scale, carefully curated training data. One of the significant difficulties in this context is that RS image annotation is typically a much harder task than the interpretation of everyday photography, usually because of the overhead perspective of remote sensing imagery and its reliance on various imaging technologies and regions of the electromagnetic spectrum.
Thanks to the efforts of the RS community, many specialized datasets for remote sensing are being published. These datasets cover the entire range of relevant remote sensing sensors and address various purposes, including object detection, scene categorization, and semantic segmentation of land cover. On the other hand, it becomes increasingly difficult to maintain a thorough overview of the available datasets and to decide which dataset will be helpful for a given task. Additionally, the majority of datasets still seem somewhat academic, and it is still unclear which datasets have already influenced the transfer of remote sensing technologies into operational settings.
As in the invited sessions on Datasets and Benchmarking at previous IGARSS conferences, the purpose of this invited CCS is to shed light on the current state of remote sensing datasets. Additionally, it will serve as a discussion forum for issues such as benchmarking, comparability, usability, and user-friendliness. It solicits reports on ongoing projects and in-depth analyses of existing datasets, in addition to papers on the publication of novel datasets.
Show/Hide Description
CCS.36: Explainable AI for Geohazard Mapping and Modelling using Geodata
T/A.22: AI and Big Data — Data Analytics and AI Techniques in Remote Sensing
Data-driven approaches to geohazards (landslides, debris flows, earthquakes, droughts, floods, and glacier studies) have gained practical interest. Given the relative advantages of physics-based techniques and AI models, the geoscience community is increasingly considering the synergy of the two paradigms as an appealing study area. Despite the rapid increase in the usage of AI techniques in recent years, it has been emphasised that their application should be treated cautiously and go hand in hand with the creation of best practices for their application.
AI has been used to map and model geohazards accurately.
Increasing model complexity has frequently been used to attain better accuracy. The deep learning paradigm, which is at the core of the majority of cutting-edge machine learning systems, serves as a prime example. In order to perform tasks like detection or classification, hierarchical data representations must be automatically discovered, learned, and extracted by machines. Inherently, the ability of these complex systems to explain their inner workings and operations is reduced due to the hierarchy of increasing complexity, the utilisation of large amounts of data, and the training and development of such sophisticated systems. As a result, their judgments' justifications are difficult to comprehend, making it difficult to interpret their predictions. 
EXplainable Artificial Intelligence (XAI) is a field focused on understanding and interpreting the behaviour of AI systems. Explainable artificial intelligence still has many unexplored aspects in the domains of remote sensing and geoscience.
This session focuses on the emerging application of explainable AI to geohazard mapping and modelling.
Show/Hide Description
CCS.37: Geophysical data analysis and inversion in the era of artificial intelligence
T/A.22: AI and Big Data — Data Analytics and AI Techniques in Remote Sensing
Geophysical sensing methods (e.g., seismic, electromagnetic, gravity) are important indirect tools for imaging the interior of the Earth. Recently, artificial intelligence (AI) techniques have gained interest in geophysical data analysis and inversion, automating many steps that require enormous human effort in conventional frameworks. The advancement of computational resources and the automation of geophysical inversion procedures through machine learning allow us to process large volumes of data at different scales and provide fast and reliable data processing and imaging of the subsurface. In addition, new sensing techniques like drone-based surveys, nodal geophones, and distributed acoustic sensing enable unprecedentedly dense sampling in geophysical acquisition, which increases the data volume exponentially. The emergence of high-density geophysical observations and fast-expanding data volumes calls for efficient processing and analysis techniques from across domains. More importantly, in the era of AI, the integration between geophysical methods and other remote sensing techniques must become seamless with the help of AI algorithms and platforms. This session aims to provide an avenue for cross-discipline engineers, scientists, educators, and entrepreneurs to share and communicate the latest developments in the broad topics of geophysical data sensing, processing, and inversion. In addition to the four confirmed invited talks, we welcome contributions on the advanced usage of AI in geosciences and engineering applications, including, but not limited to:
1) Machine learning-based signal processing of dense passive and distributed acoustic sensing data.
2) Physics-guided machine learning methods for inverting underground structures and lithologic properties.
3) Physics-based modeling methods for improving and accelerating large-scale geophysical inversion.
4) Training data generation techniques that improve the generalization ability of machine learning inversion models.
5) Machine learning-based solutions for engineering applications such as drilling optimization and field development. 
6) Machine learning algorithms for solving geophysical problems. 
Show/Hide Description
CCS.38: Large AI Models for EO
T/A.22: AI and Big Data — Data Analytics and AI Techniques in Remote Sensing
Large image-language AI models are now leading the way not only for many vision and natural language tasks but also for standard image processing tasks such as image classification. What is the impact of this ongoing revolution on Earth observation?

This session will review current research on the application of large AI models (image-only, language-only, and/or image-language) in remote sensing and discuss the potential of these models for the future.

Topics include:
-	EO applications and tasks (e.g., classification, change detection, etc.) leveraging large AI models
-	Self-supervised learning and semi-supervised learning with large pretrained models in EO
-	Few-shot and zero-shot learning with large AI models
-	Transfer learning, domain adaptation of large AI models in EO
-	Generative modelling in EO with text-aligned models (e.g., with Imagen, Dall-e, diffusion models, etc.)
-	Visual-Question Answering
-	Natural Language Processing applications in EO
-	Training large AI models for EO at scale

Show/Hide Description
CCS.39: Machine Learning for GNSS Remote Sensing
T/A.22: AI and Big Data — Data Analytics and AI Techniques in Remote Sensing
Machine learning can be described as a hierarchical mathematical/statistical architecture of subfunctions and operators that, acting loosely like a biological brain inside a computer, performs a task. Learning, or training, is technically the process of finding the optimal set of parameters, typically node weights and hyperparameters, within this architecture so that it performs the desired task. The emergence of big datasets and powerful processing units led to the creation of multi-layer neural network architectures with deep learning capability, made up of a variety of processing units including core layers and layers for convolution, pooling, recurrence, regularization, merging, activation, and attention. This modern technology has also opened a new perspective for the remote sensing community, with novel methodologies for different remote sensing and Earth system modeling tasks. However, a major drawback can be the dependence of machine learning performance on large datasets that represent the physics and the systems well.
GNSS remote sensing exploits existing signals transmitted from satellite systems originally developed for positioning and navigation services. GNSS constellations guarantee visibility of multiple satellites at every point on the globe, which leads to a desirably high spatial coverage and sampling frequency. GNSS remote sensing techniques, including GNSS Reflectometry (GNSS-R), radio occultation, and sounding approaches, require only the receiver component. The number of GNSS receivers for remote sensing in regional and global ground networks, as well as onboard small satellites and CubeSat constellations, has recently been increasing rapidly. The derived data offer great potential for "learning" and the effective development of data-driven algorithms. In addition, the theoretical models might not be well validated in different field conditions due to the novelty of the domain, and therefore machine learning can assist us in enhancing retrieval and modeling algorithms. This modern processing technique is well established in other remote sensing domains, especially for the processing of optical Earth observations; however, its potential has not yet been fully unlocked in GNSS remote sensing despite the great synergy. This session focuses on leveraging machine learning approaches to foster these innovative and interdisciplinary developments in GNSS Earth observations and system modeling. These include studies implementing machine learning methods (a minimal illustrative sketch follows the list below) for:
•	GNSS remote sensing data fusion with other sensors/techniques,
•	retrieval algorithms and knowledge extraction from the data,
•	hybrid GNSS remote sensing data assimilation,
•	explainable modeling and knowledge extraction,
•	long-term data homogenization and processing for climate studies,
•	Earth system modeling and forecasts using GNSS data,
•	and hazards monitoring and warning systems based on the data.
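As a minimal, purely synthetic sketch of the retrieval idea listed above, the fragment below trains a gradient-boosting regressor to map typical GNSS-R observables (reflectivity, trailing-edge slope, incidence angle) to soil moisture. The data generator is invented for illustration and stands in for a real collocated reference dataset.

    import numpy as np
    from sklearn.ensemble import GradientBoostingRegressor
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(3)
    n = 5000

    # Invented observables standing in for a real collocated GNSS-R training set
    reflectivity_db = rng.uniform(-25, -5, n)
    trailing_edge = rng.uniform(0.1, 1.0, n)
    incidence_deg = rng.uniform(5, 60, n)
    soil_moisture = (0.02 * (reflectivity_db + 25)
                     - 0.05 * np.cos(np.radians(incidence_deg))
                     + 0.05 * trailing_edge
                     + 0.02 * rng.normal(size=n))

    X = np.column_stack([reflectivity_db, trailing_edge, incidence_deg])
    X_train, X_test, y_train, y_test = train_test_split(X, soil_moisture, test_size=0.3)

    model = GradientBoostingRegressor().fit(X_train, y_train)
    print('R^2 on held-out samples:', round(model.score(X_test, y_test), 3))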
Show/Hide Description
CCS.40: Opening the Black Box: Explainable AI/ML in Remote Sensing Analysis
T/A.22: AI and Big Data — Data Analytics and AI Techniques in Remote Sensing
“They just work” is a common argument for the use of artificial intelligence and machine learning (AI/ML) systems for inference. However, it is not sufficient for applications where an AI system is asked to detect rare or important events associated with high-consequence decision making, which is often the case in our Geoscience and Remote Sensing community. Remote sensing end users, i.e., the humans who have to decide what to do with what their algorithms tell them, have been rightfully reluctant to adopt the “black box” class of machine learning algorithms – uninterpretable algorithms that have otherwise been shown to be effective in fields outside of remote sensing. This reluctance arises because model explainability and interpretability are critically important for an end user who must act, with confidence, on an AI system’s predictions.

Nevertheless, AI/ML techniques are becoming increasingly ubiquitous in remote sensing analysis, particularly with the ever-growing volume of remote sensing data in both the public and commercial sectors. And because many of the research results are promising, an increased emphasis on explainability is all the more urgent. Regardless of the specific application or modality within remote sensing, AI systems in our community need not only to be predictable, but they also need to be transparent in their reasoning, and they need to be interpretable.

Because remote sensing data is so closely tied to the physics of the signature generation and measurement, the challenges that we face are unique when compared to typical machine-vision AI/ML applications. For example, although a neural network may have statistically exceptional performance, a few false detections that violate well-known physical laws can cause subject-matter experts to lose all confidence in the output.

In our Community Contributed Session, we will emphasize techniques for end-to-end, traceable explainability – starting with the model training inputs, through the system architecture, and to the outputs, while considering domain information throughout. In doing so, we will open the “black box” that typically obfuscates insights into AI/ML models in remote sensing. Our session will cover a variety of applications, both in terms of image type (hyperspectral, SAR, sonar) and in terms of analysis type (classification, target detection, target identification). Because AI/ML explainability methods can apply to different domains within remote sensing, we hope that this session will demonstrate the transferability of techniques as well as enable cross-pollination of new ideas.
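As one simple, model-agnostic example of the kind of traceable explanation advocated above, the sketch below computes an occlusion-sensitivity map: patches of an input image are masked in turn, and the drop in the classifier's score indicates which regions drive the prediction. The classifier here is an arbitrary callable; no specific architecture, dataset, or session contribution is implied.

    import numpy as np

    def occlusion_sensitivity(image, score_fn, patch=8, baseline=0.0):
        """Mask patches one at a time and record the drop in the model score.

        image: 2-D array (single-band chip); score_fn: callable returning a scalar
        class score for an image; both are assumptions of this sketch.
        """
        reference = score_fn(image)
        rows, cols = image.shape
        heatmap = np.zeros((rows // patch, cols // patch))
        for i in range(0, rows - patch + 1, patch):
            for j in range(0, cols - patch + 1, patch):
                occluded = image.copy()
                occluded[i:i + patch, j:j + patch] = baseline
                heatmap[i // patch, j // patch] = reference - score_fn(occluded)
        return heatmap  # large values = regions the prediction depends on

    # Toy usage: a 'classifier' that only looks at the image centre
    toy_score = lambda img: img[28:36, 28:36].mean()
    sensitivity = occlusion_sensitivity(np.random.rand(64, 64), toy_score)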
Show/Hide Description
CCS.41: Physics Informed Artificial Intelligence for Synthetic Aperture Radar Applications
T/A.22: AI and Big Data — Data Analytics and AI Techniques in Remote Sensing
The Earth is experiencing unprecedented climatic, geomorphologic, environmental, and anthropogenic changes, which require long-term global Earth Observation (EO) sensor monitoring. Due to their ability to observe day and night and their independence from atmospheric effects, SAR sensors are the only EO technology that can guarantee global and continuous observations. SAR, unlike optical sensors, transmits pulsed signals and then receives the echoes reflected from objects. The two-dimensional SAR image is created by focusing these echoes with signal processing techniques and exhibits the scattering behaviour of the observed scene's interaction with the electromagnetic wave.

A deluge of SAR sensors has increased the availability of data for diverse SAR applications. The allure of data-driven learning originates from its ability to automatically extract abstract features from massive data volumes; hence, a considerable amount of deep learning research for SAR applications has been conducted during the past few years. Current popular methods predominantly follow the data-driven paradigm, in which a large amount of SAR image data is all that drives an intelligent network; this paradigm, however, faces the issues of insufficient data, physical inconsistency of the predictions, and lack of interpretability. To achieve explainability, physics awareness, and trustworthiness, it is crucial to explore a hybrid modeling paradigm for intelligent SAR image interpretation across various applications, in which deep learning is integrated and interacts with SAR physical models and principles.

The session calls for papers aiming to ground novel hybrid physics-AI paradigms addressing the specific challenges of SAR, i.e., physical parameter retrieval/estimation from radar data, understanding and explanation of physical EM scattering properties, target simulation or generation, image interpretation, etc. The session will encompass hybrid explainable ML or DNN methods integrating the physics knowledge of SAR with machine learning models, such as using theoretical constraints to reduce the parameter spaces of deep learning, physics-constrained parameter inversion methods, parameter retrieval based on sensor principles, or embedding forward models in the inversion process. Data of interest include interferometric SAR, polarimetric SAR, SAR tomography, microwave imaging echoes, and complex-valued images, covering a wide range of physical models. Articles could address topics such as SAR imaging systems, signal processing, 3-D reconstruction, agricultural forecasting, oceanography, sea-ice monitoring, or automatic target recognition for Earth or planetary observation.
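A highly simplified sketch of "embedding the forward model" is given below in PyTorch: the training loss combines the usual data misfit with a penalty on the residual of an assumed, differentiable forward scattering model, so that physically inconsistent predictions are discouraged. The forward model, network size, and weighting are placeholders, not a real SAR scattering model or any method proposed in this session.

    import torch
    import torch.nn as nn

    class ParameterNet(nn.Module):
        """Small network mapping SAR observables to a physical parameter."""
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(nn.Linear(4, 32), nn.ReLU(), nn.Linear(32, 1))

        def forward(self, x):
            return self.net(x)

    def forward_model(param):
        """Placeholder differentiable forward model: parameter -> observable."""
        return 0.3 * torch.tanh(param) + 0.1 * param

    net = ParameterNet()
    optimizer = torch.optim.Adam(net.parameters(), lr=1e-3)

    # Synthetic batch: observables x, reference parameters y, and measured channel
    x = torch.randn(256, 4)
    y = torch.randn(256, 1)
    measured = forward_model(y) + 0.01 * torch.randn(256, 1)

    lambda_phys = 0.5  # weight of the physics-consistency term (assumption)
    for step in range(200):
        optimizer.zero_grad()
        pred = net(x)
        data_loss = nn.functional.mse_loss(pred, y)
        physics_loss = nn.functional.mse_loss(forward_model(pred), measured)
        loss = data_loss + lambda_phys * physics_loss
        loss.backward()
        optimizer.step()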

Last year, we organized a successful special session at IGARSS 2022 titled "Physics-Aware Machine Learning for Synthetic Aperture Radar" with nine pertinent articles. We were honored to note that participants from 11 different universities and research institutes around the world presented a variety of fresh concepts and original work, advancing the SAR community along this path. We expect that our proposal for a Community Contributed Session at IGARSS 2023 will foster the ongoing advancement of AI technology for SAR applications.
Show/Hide Description
CCS.42: Remote Sensing Tools and methods for nature-based carbon sequestration measurement and verification
T/A.22: AI and Big Data — Data Analytics and AI Techniques in Remote Sensing
Nature-based carbon sequestration has a high potential to mitigate the impact of climate change because of the vast storage capacity of soil and vegetation. Regenerative agriculture, carbon farming, and reforestation are the most common methods used to extract CO2 from the atmosphere and sequester it in soil and biomass. Due to the diversity of measurement and modeling tools, there is a need for consensual methods to quantify the carbon stored. Such methods can underpin the development of a trustworthy digital platform enabling carbon markets. Remote sensing tools like satellites and aerial observations are at the forefront of observation, quantification and monitoring of the extent and distribution of biomass in forests and are also being explored as tools to quantify carbon in soil. The massive amount of optical, radar, lidar and field sampling data that has already been acquired will require new AI tools and algorithms to fuse and harmonize, and these data will need to be validated and compared with models to quantify the uncertainties. This session will present cutting-edge methods and approaches in big data and AI to quantify above- and below-ground carbon sequestration, and aims to foster collaboration and exchange of ideas on this evolving and dynamic topic.
Show/Hide Description
CCS.43: SAR Advanced Applications based on Deep Learning: urban, land and water applications
T/A.22: AI and Big Data — Data Analytics and AI Techniques in Remote Sensing
In the last forty years, plenty of algorithms and solutions have been developed to fully exploit the potential of SAR images and extract information from SAR data. SAR processing has thus become one of the most important tools for Earth Observation. Thanks to their coherent nature and their ability to work in any meteorological condition, SAR data are being used intensively for many different applications such as 3D reconstruction, feature extraction, land cover, target recognition, scatterer detection, data fusion and many others, within different domains (urban, natural, marine, etc.).
Moreover, with the improvement of the technology and the availability of new and more advanced sensors, more frequently updated data with shorter revisit times are available: algorithms able to process this great amount of data easily and rapidly are fundamental nowadays. For these reasons, the SAR processing domain is observing a shift from traditional model-based solutions to deep learning (DL) based ones.
Following the impressive performance of DL methods in different domains of signal, image and data processing, the SAR community is investing substantial research effort into their feasibility for the SAR domain.
The rapid spread of DL methods in the SAR domain has raised many questions and issues that are still open and worth investigating in depth:
- Is it possible to define a single DL method for all applications?
- How can extensive datasets for training DL methods be created or obtained?
- Supervised, self-supervised or unsupervised: many names, but what is the best choice?
- How well do DL-based methods generalize at test time?
- Is transfer learning from other domains suitable for SAR?
- How should DL methods be defined and controlled for SAR?
- Does feature extraction analysis help in understanding how the DL model works?
- How can uncertainties be captured by the network?
- Can DL be the answer to the many different sensors now available?
 
The aim of this session is to present the state of the art of DL-based methods in different SAR applications and domains, and to stimulate a discussion on what is worth investigating in the near future.
Show/Hide Description
CCS.44: Scaling GeoAI for Rapid Disaster Response and Humanitarian Applications
T/A.22: AI and Big Data — Data Analytics and AI Techniques in Remote Sensing
The research direction of leveraging Artificial Intelligence (AI) for Geosciences and Remote Sensing has opened up exciting avenues and has proven to benefit a wide spectrum of applications. Disaster response and humanitarian applications require rapid analysis of large-scale Earth Observation (EO) data to generate actionable insights. Given the tremendous volume of EO data coupled with its increasing availability, there is a need to develop scalable Geospatial Artificial Intelligence (GeoAI) solutions that specifically cater to the demands of disaster response and humanitarian applications. Scalability in this context refers to the ability to rapidly process and analyze large-scale EO data through the development of accelerated and optimized GeoAI methodologies, in addition to leveraging high-performance computing hardware. Although advancements in computing hardware have driven the development of machine learning for high-performance computing, it is not feasible to subscribe to the idea of unlimited storage and compute. Consequently, there is a dire need to develop scalable and accelerated GeoAI workflows for effective resource utilization.

This session seeks contributions focusing on novel scalable GeoAI systems targeted at disaster response and humanitarian applications. Contributions on large-scale machine learning for high-performance computing methodologies, scalable optimization strategies, and search systems for aiding humanitarian response efforts, disaster assessment, and monitoring applications are invited. Given the deep societal impact of research on scalable GeoAI for disaster response and humanitarian applications, this session will serve as a platform to bridge the gap between humanitarian practitioners, policymakers and researchers in the IEEE GRSS community, and to discuss and collaborate on the latest research trends in this focus area.
Show/Hide Description
CCS.45: At the boundaries of radio propagation and remote sensing: A tribute to Prof. Frank Silvio Marzano
T/M.23: Modeling — Electromagnetic Modeling
Prof. Frank Silvio Marzano, an IEEE Fellow since 2016 for his contributions to microwave remote sensing in meteorology and volcanology, and professor of Electromagnetic Field at the Engineering Faculty of Sapienza University of Rome, suddenly passed away on 8 May 2022. In his professional and educational roles, he addressed a wide range of research activities, from radio propagation to remote sensing, meteorology and volcanology.
The principal aim of the session is to communicate how Prof. Marzano conceived his research in geoscience and remote sensing, also inspired by his mentor Prof. Giovanni d’Auria. It starts from the knowledge of electromagnetic interaction models rooted in fundamental radio propagation studies. This provides fundamental tools to gather the geophysical observations needed to sustain and improve human lives and society, through an extension of wireless communication capability toward higher frequency bands as well as advances in monitoring the health of our planet through proper inversion of those electromagnetic models. However, very deep knowledge of the disciplines involved in the final applications is also required to make this effective, as demonstrated by the role of Prof. Marzano in meteorology and other environmental disciplines.
Invited papers can not only focus on those aspects, but also demonstrate the impact of the approach on the society and environmental studies, with particular emphasis on meteorology, clouds and precipitation and volcanology. This session is in memory of an excellent scientist and dear friend, but it is also a chance for the remote sensing community to reaffirm and further demonstrate the value of this fundamental approach in geoscience and remote sensing. 
Show/Hide Description
CCS.46: GNSS-R Modeling Part 1: Land
T/M.27: Modeling — EM Modeling for Signals of Opportunity (e.g. GNSS-R)
This 3-part session is focused on recent advances in modeling GNSS land reflections. The session will give a picture of the state of the art of current activities and their impact on data processing and understanding. This is meant to be Part 1 of 3 in a larger session about GNSS-R modeling, organized in the frame of the activities of the GNSS-R Modeling Subgroup of the Modeling In Remote Sensing (MIRS) technical committee of the IEEE GRS society.

Part 1 focuses on land GNSS-R modeling, Part 2 focuses on the study of the GNSS-R scattering, and Part 3 on inland water and inundation.  

The potential of GNSS reflectometry has mainly been investigated for the study of the ocean, resulting in new space missions, currently operating or under study. The use of GNSS signals reflected by land has also attracted significant scientific and industrial interest. The possibility of retrieving soil moisture and vegetation biomass, or of monitoring the freeze/thaw state and inundations, has been confirmed, and several research groups are currently involved in studies based on the exploitation of data produced by the NASA CyGNSS mission. Even if significant progress has been made in consolidating the physics describing the interaction between navigation signals and land bio-geophysical parameters, an accurate understanding demands further efforts.
Show/Hide Description
CCS.47: GNSS-R Modeling Part 2: Scattering Properties
T/M.27: Modeling — EM Modeling for Signals of Opportunity (e.g. GNSS-R)
Extensive research is ongoing to characterize the nature of the scattering in GNSS-R systems when land regions are illuminated, as well as to study the sensitivity of the scattered signals with respect to biogeophysical variables (such as, e.g., soil moisture and freeze thaw). Even though significant progress has been made in the last few years, many questions are still open, and the phenomenology of the scattering problem is not completely understood.

In this frame, a dedicated session can stimulate interesting discussions and new paths for future research activities, and in particular provide more insight into advanced strategies for processing data from current and future GNSS-R missions.

This session is intended to be Part 2 of 3 in a series focused on GNSS-R modeling, organized in the frame of the activities of the GNSS-R Modeling Subgroup of the Modeling In Remote Sensing (MIRS) technical committee of the GRS society. Part 1 focuses on land GNSS-R modeling, Part 2 on GNSS-R scattering, and Part 3 on inland water.
Show/Hide Description
CCS.48: GNSS-R Modeling Part 3: Inland Water
T/M.27: Modeling — EM Modeling for Signals of Opportunity (e.g. GNSS-R)
This session is intended to be Part 3 of 3 in a series focused on GNSS-R Modeling that is organized in the frame of the activities of the GNSS-R Modeling Subgroup of the Modeling In Remote Sensing (MIRS) technical committee of the IEEE GRS society. 

The ability of GNSS-R satellites to detect and map the dynamics of inland water at rapid timescales is of significant scientific and public interest, especially in view of future programs to evaluate hazards associated with flooding events in developing countries.

Recent measurements of GNSS reflections over land have shown high sensitivity to the presence of inland waters and inundations. GNSS signals reflected from lakes, rivers, wetlands, and other inland water bodies produce coherent, or partially coherent, reflections. Such measurements exhibit desirable features, such as higher SNR, improved spatial resolution (which, given the scattering phenomenology, is related to the scattering regime), and phase information. Science applications using these reflections are under study, including surface water detection and mapping as well as radar altimetry based on the carrier-phase information.
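The distinction between coherent and diffuse returns mentioned above can be made tangible with the small synthetic sketch below, which compares the phase stability (circular coherence) of complex field samples for a smooth, water-like surface and a rough one. The signal model is purely illustrative and not drawn from any mission data.

    import numpy as np

    rng = np.random.default_rng(4)
    n_samples = 1000

    # Illustrative complex field samples: a stable phasor plus diffuse speckle
    stable = 1.0 * np.exp(1j * 0.3)                      # coherent (specular) part
    speckle = (rng.normal(size=n_samples) + 1j * rng.normal(size=n_samples)) / np.sqrt(2)

    water_like = stable + 0.2 * speckle    # smooth inland water: coherent part dominates
    rough_land = 0.1 * stable + speckle    # rough terrain: diffuse scattering dominates

    def phase_coherence(z):
        """|mean of unit phasors|: near 1 for a stable phase, near 0 for random phase."""
        return np.abs(np.mean(z / np.abs(z)))

    print('water-like coherence:', round(phase_coherence(water_like), 2))
    print('rough-land coherence:', round(phase_coherence(rough_land), 2))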

This session is meant to provide the community with a frame to present and discuss the current state of the art in modeling of GNSS-R measurements over inland water. Both the derivation of analytical models and data-driven empirical investigations are expected.

The invited speakers will stimulate discussions about the progress in the field of modeling and new ideas about the use of coherent GNSS-R reflections for understanding surface water and inundation.  
Show/Hide Description
CCS.49: Microwave Remote Sensing of Snow
C.1: Cryosphere — Snow Cover
Multiple microwave remote sensing techniques have been demonstrated to have potential for quantitative global monitoring of snow properties such as snow water equivalent (SWE), albedo, and snow wetness. Currently available sensors include the active sensors Radarsat-2, Sentinel-1, TerraSAR-X and COSMO-SkyMed, and the passive sensors GMI, SSMIS, AMSR-2 and FY-3. Planned and potential satellite missions include L-band NISAR, P-band signals of opportunity (SNOOPI), TSMM radar scattering at Ku-band, and volume-scattering approaches at X- and Ku-band. These planned and proposed missions will further enhance microwave remote sensing applications in snow property monitoring. This session seeks new scientific results that can contribute to 1) improving our understanding of microwave signal modeling; 2) analyses of the response of satellite, airborne and field measurements to snow properties; 3) algorithms for the inversion of snow properties; and 4) field and airborne experiments for the validation of microwave models and inversion algorithms.
Show/Hide Description
CCS.50: Advanced observations of sea ice by satellite SAR in a changing climate
C.3: Cryosphere — Sea Ice
Sea ice in polar regions can be dramatically affected by climate change, with thinning sea ice, longer melt seasons and increased new ice formation, and it is crucial to monitor these changes.

Synthetic aperture radar (SAR) is an invaluable remote sensing instrument, offering wide spatial coverage over hundreds of kilometers together with high spatial resolution at meter scales. Moreover, SAR actively illuminates scenes at microwave frequencies regardless of weather and darkness, offering an excellent opportunity to monitor polar regions.
With the development of SAR polarimetric and interferometric techniques, many studies have demonstrated SAR as a promising tool to characterize sea ice properties through scattering mechanisms, estimate sea ice roughness, identify sea ice classes, and retrieve sea ice elevations. 

This session focuses on new techniques to monitor sea ice and the latest observations of sea ice in a changing climate using spaceborne SAR. New techniques include advanced algorithms and models to characterize sea ice using various polarizations and frequencies. The session covers the latest observations of sea ice in both Arctic and Antarctic regions, where the sea ice properties can be significantly different due to the diversity in growth conditions. The session also underlines the evolution of sea ice in recent decades and the effects of climate change on the sea ice characteristics.

The aim of this session is to bring new ideas to achieve operational and synergistic sea-ice monitoring from the multiple SAR missions in the upcoming years. Specifically, the spaceborne SAR missions such as the TSX/TDX and the future LOTUSat Missions at X-band, the Sentinel-1 and RADARSAT Constellation at C-band, and the upcoming NISAR Mission at L and S bands, provide optimal conditions for observing sea ice over extensive spatial coverage at shorter revisit time. Therefore, the session allows the community to obtain the state-of-the-art sea ice observations and to develop more advanced sea-ice monitoring methods in the era of big SAR data.

The session also provides an opportunity for scientists to discuss possible future field campaigns (e.g., icebreaker cruise measurements) coordinated with the satellite SAR data acquisitions. Such measurements can be used as reference data and would be highly valuable for developing accurate algorithms from SAR imagery. Last but not least, the session aims to show the latest sea ice observations under a changing climate to help understand the geophysical processes of sea ice and to assess the impacts of sea ice change in a changing climate.
Show/Hide Description
CCS.51: Advances in monitoring and characterizing terrestrial ecosystems using remote sensing data
L.2: Land Applications — Land Cover Dynamics
Remote sensing information has been used in a wide range of Earth science research and applications, including over land, water, the cryosphere, ecosystems, and geology. Recent advances in remote sensing, including in the thermal infrared (TIR) waveband, have facilitated the use of large volumes of satellite and climate data to conduct land change and impact assessments across large geographic areas at multiple temporal scales. These assessments can improve environmental modeling efforts and advance our understanding of the drivers and impacts of land cover and land use change on ecosystems, climate, and public health.
This session highlights advances in remote sensing of land system change including but not limited to land cover/use, urban development and heat island effect, wildfire and drought effects, coastal change, and ecosystem responses to climate change across regional and large geographical areas. We seek contributions including multi-temporal datasets, innovative analytical frameworks, and models to advance our scientific understanding of the types, trends, magnitudes, and consequences related to changes in land cover/use and application of ecosystem and climate monitoring capabilities. Research, applications, and services relying on thermal remotely sensed data, with a strong focus on innovative approaches addressing societal challenges and cross-domain research, from local to global scales will also be considered. 
Potential session topics include, but are not limited to:
•	New methods to extrapolate land surface and near surface air temperatures 
•	Land cover and land use change detection and monitoring
•	Heat stress associated with the urban heat island effect
•	Public health applications and dissemination obstacles
•	AI/ML applied to land surface temperature estimation and applications
•	Improvement in surface energy balance
•	Standardization of long term reference data records;
•	Near-real time monitoring and Time Series Analysis of thermal datasets;
•	Cross-domain applications (e.g. land/water; urban/forest);
•	Thermal and multispectral data fusion;
•	Ground validation of Land Surface Temperature, Evapotranspiration, and Emissivity datasets;
•	Applications of thermal data to land cover/land use change and geologic applications.
Show/Hide Description
CCS.52: Vegetation growth dynamics associated with water cycles
L.2: Land Applications — Land Cover Dynamics
Phenology studies the periodic events in biological life cycles and how these are influenced by seasonal and interannual variations in climate and environmental factors. Remote sensing data and products derived from satellites and near-surface phenocams are the most popular approach to monitor shifts in land surface phenology and to study the interactions between vegetation phenology and its driving forces. Shifts in land surface phenological metrics, including the onsets of spring growth and autumn senescence, reflect the response of terrestrial ecosystems to climate change and other environmental forces. Various studies have suggested that phenological shifts such as an advanced start of spring (SOS) are closely related to the warming climate trend and increasing carbon uptake across the globe. Phenology is also well known as a significant limiting factor for ecosystem functioning and ecological services; for example, an extension in growing season length would result in increased gross primary production. While the ecosystem carbon cycle has been widely investigated in relation to interannual variations in land surface phenology, the impact of shifts in vegetation phenological metrics on water cycles has not been clearly understood. Vegetation phenology and hydrology are in fact closely linked through interactions across a variety of spatial scales, from individual plant species, populations and communities to global biomes, and over different time periods. According to results from land surface process models, an advanced SOS has led to warming temperatures in the northern hemisphere, and this warming trend was found to be related mostly to water vapor transport towards polar regions. In addition, more indirect effects, such as legacy effects of phenological shifts on water cycles, have also been reported. With regard to feedbacks of vegetation phenology shifts on water cycles, research has shown that, in the northern hemisphere, summer soil drying was intensified by an earlier onset of spring. Another important study on phenology-hydrology interactions presents similar results: an earlier onset of spring greenness caused greater water stress in summer, which then reduced vegetation productivity. In recent years, the following questions have been drawing the attention of geographers and hydrologists:
1)	Are there differences in the relation between phenology and water cycle across different vegetated areas (e.g. tropical forests and alpine grasslands)?
2)	Between urban and natural regions, are there differences in the effects of phenology on the hydrological cycle?
3)	Do spring phenology and autumn phenology have different effects on the water cycles? 
4)	How have the moisture-related global climate extremes affected vegetation growth?  
5)	What’s the effect of vegetation growth on local water cycles?
Addressing these questions using remote sensing would complement current knowledge of the phenological implications for hydrology and is crucial to a better understanding of phenology feedbacks to terrestrial carbon and water cycles.
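To ground the phenological metrics discussed above, the sketch below estimates the start of season (SOS) from a synthetic annual NDVI trajectory using a common half-amplitude threshold rule; the green-up curve and dates are invented, and the rule is only one of several in use.

    import numpy as np

    # Synthetic 16-day NDVI trajectory for one pixel over one year (23 composites)
    doy = np.arange(1, 366, 16)
    ndvi = 0.2 + 0.5 / (1 + np.exp(-0.08 * (doy - 130)))   # invented green-up curve
    ndvi += np.random.default_rng(5).normal(0, 0.02, len(doy))

    # Half-amplitude threshold: SOS = first day NDVI exceeds the seasonal midpoint
    amplitude_mid = ndvi.min() + 0.5 * (ndvi.max() - ndvi.min())
    above = np.where(ndvi >= amplitude_mid)[0]
    sos_doy = doy[above[0]] if above.size else None

    print('estimated start of season: day of year', sos_doy)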
Show/Hide Description
CCS.53: Forest Dynamics and Disturbances
L.3: Land Applications — Forest and Vegetation: Application and Modelling
The effects of climate change on forests are becoming more and more evident. Such effects are present at different spatial scales and can have both short- and long-term impacts. Prolonged variations in temperature and precipitation can have impacts that become visible over years or decades, like changes in the tree line of mountain forests, in the species composition, or in the start of the growing season. Climate change is also increasing extreme events such as storms and droughts that can cause both direct (e.g., windthrow) and indirect (e.g., insect outbreaks, wildfires) forest disturbances.
In terms of remote sensing data and processing, these two types of impacts need different approaches: long term impacts should be monitored using time series archival datasets such as Landsat and MODIS, while short term impacts require both archival datasets and immediate post event acquisitions. Long archival time series should be used to identify past events, better understand their pre-event (e.g., causes) and post-event dynamics (e.g., forest recovery), while immediate post-event acquisitions are needed to quantify damages and to be reactive in putting into place mitigation strategies. Such near real-time monitoring is becoming more and more possible thanks to the medium/high spatial resolution high revisit frequency constellations such as Sentinel 2 and Planet Dove. SAR images can also have a relevant role thanks to their low sensitivity to atmospheric conditions thus making them an optimal choice for timely detections.
Such applications require the processing of a large amount of data, especially when working at large scales, thus entering the big data domain. This requires additional effort to be devoted to data management and implementation, which can be facilitated using ad-hoc cloud computing platforms such as Google Earth Engine.
Thus, in this session we expect presentations on: i) the development of new techniques to analyse long-term time series over forests; ii) the development of new techniques to detect local forest disturbances; and, more broadly, iii) the analysis of climatic effects on forest areas using remote sensing data.
It is worth noting that the presented methodologies can have an impact not only on the forestry community but also on other research areas that exploit remote sensing data. Indeed, time series analysis is applicable to a wide range of applications such as agriculture/food security and urban area monitoring. Accordingly, methods focused on the detection of trends and variations in a temporal trajectory can be adapted and applied to such applications.
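As a minimal illustration of the short-term disturbance detection discussed above (not a method proposed by this session), the sketch below flags pixels whose post-event NDVI departs strongly from a multi-year baseline using a simple z-score; all arrays are synthetic placeholders for real image stacks.

    import numpy as np

    rng = np.random.default_rng(6)

    # Synthetic stack: 5 pre-event annual summer composites and 1 post-event composite
    baseline_stack = 0.75 + 0.05 * rng.normal(size=(5, 100, 100))   # healthy forest NDVI
    post_event = 0.75 + 0.05 * rng.normal(size=(100, 100))
    post_event[40:60, 40:60] = 0.35                                  # simulated windthrow patch

    baseline_mean = baseline_stack.mean(axis=0)
    baseline_std = baseline_stack.std(axis=0) + 1e-6

    # Pixels far below their own historical baseline are flagged as disturbed
    z_score = (post_event - baseline_mean) / baseline_std
    disturbed = z_score < -3.0
    print('disturbed pixels flagged:', int(disturbed.sum()))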
Show/Hide Description
CCS.54: From Need to Product: Recent Advances in Mapping and Validation of Vegetation Biophysical Parameters at Regional to Global Extents
L.3: Land Applications — Forest and Vegetation: Application and Modelling
Over the last 20 years, systems such as the NASA Earth Observing System and the Copernicus Global Land Service have provided global coarse-resolution maps of vegetation biophysical parameters targeted at requirements from the Global Climate Observing System. Increasingly, downstream services, researchers and NGOs are looking for enhancements in spatial resolution, thematic performance and additional parameters to meet wider requirements in agriculture, sustainable development and biodiversity. This session will address four pressing questions:

1.  What are the emerging requirements for systematic vegetation mapping over large areas?
2.  What are the gaps in current observing systems to address existing and new requirements?
3.  What is required in terms of new observing systems, science and computing to meet these requirements?
4.  How can we exploit new developments in high-resolution constellations, UAVs and in-situ networks to quantify and validate new mapping systems?

This session will involve lead scientists from current global vegetation mapping programmes, new Earth Explorer systems, automated validation networks, open-source and cloud computing initiatives, and CEOS working groups on calibration and validation.

The targeted outcomes of the session are:

1.  A specification of new vegetation parameters and associated use requirements and satellite inputs as a supplement to GCOS climate variables.

2.  Priority validation requirements for CEOS working groups, including needs for product or system certification.

3.  A survey paper identifying "low-hanging fruit" from new Earth Explorers and imaging constellations, and new proximal remote sensing tools, that have high potential for improving current mapping systems (e.g., synergy of tree height maps from lidar and biochemistry from hyperspectral imagers to improve retrievals of other parameters from multispectral imagers; exploiting sensors on mobile devices, surveillance networks and vehicles to validate and improve algorithms; cloud computing to enable free and open access to algorithms and data production on demand).

Show/Hide Description
CCS.55: Machine learning for forest monitoring
L.3: Land Applications — Forest and Vegetation: Application and Modelling
Forests cover one third of the land on Earth. They play an essential role in the water and carbon cycles, increasing biodiversity and providing food and energy for both animals and human beings. Several existing and future satellite missions have been developed for forest monitoring purposes. The upcoming spaceborne P-band fully polarimetric SAR mission BIOMASS is designed to monitor the current state and dynamic change of forests. The GEDI mission, which carries a laser altimeter, is able to reconstruct 3D forest structure and support the quantification of forest carbon. Forest monitoring has attracted a lot of attention in the remote sensing community. In the IEEE Xplore Digital Library alone, the number of publications with the keyword "forest monitoring" increased from 797 in 2010-2015 to 1614 in 2015-2020. Popular research topics include deforestation detection and segmentation, forest height estimation, forest biomass quantification, etc. Machine learning based methods have received a lot of attention in recent years and have great potential to boost forest applications as well. In this session, we will discuss recently developed machine learning algorithms for forest monitoring, their strengths, as well as their gaps and pitfalls.
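As a minimal, hypothetical sketch of the kind of machine learning workflow this session targets, the example below trains a random forest to separate intact forest from deforested pixels using a few SAR/optical features; the feature set, class statistics, and labels are synthetic placeholders, not results from any of the missions mentioned above.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic pixel features: [VV backscatter (dB), VH backscatter (dB), NDVI]
n = 2000
forest = np.column_stack([rng.normal(-7, 1, n),
                          rng.normal(-12, 1, n),
                          rng.normal(0.8, 0.05, n)])
cleared = np.column_stack([rng.normal(-11, 1, n),
                           rng.normal(-18, 1, n),
                           rng.normal(0.3, 0.1, n)])
X = np.vstack([forest, cleared])
y = np.concatenate([np.ones(n), np.zeros(n)])  # 1 = forest, 0 = deforested

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test)))
```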
Show/Hide Description
CCS.56: MULTIFREQUENCY MICROWAVE APPLICATIONS TO SOIL AND VEGETATION: OBSERVATIONS AND MODELING
L.3: Land Applications — Forest and Vegetation: Application and Modelling
The role of soil moisture and vegetation in the climate system has been studied extensively by the climate research community, with understanding enhanced by the addition of remotely sensed products. The essential role of soil moisture in the climate system motivated the Global Climate Observing System (GCOS) and the European Space Agency (ESA) to endorse soil moisture as an Essential Climate Variable (ECV) and to introduce it into the Climate Change Initiative programme. Microwave observations from spaceborne sensors are ideal for soil moisture retrieval, and a variety of microwave sensors, both active and passive, have been observing the Earth's surface since the late 1970s with this aim. These microwave observations can be further used for the retrieval of land surface parameters such as vegetation water content, soil moisture, and land surface temperature, and may ultimately be integrated into existing (long-term) data products.
In this session, approaches based on microwave observations for estimating soil and vegetation parameters will be described. Both active and passive sensors will be considered, together with newly implemented algorithms and models.
Show/Hide Description
CCS.57: Large-scale forest aboveground biomass and height mapping with the combination of spaceborne radar and lidar/optical sensors
L.4: Land Applications — Forest and Vegetation: Biomass and Carbon Cycle
Since forest structure is of great value to terrestrial ecology, habitat biodiversity, and global carbon storage assessments, it is desirable to monitor and quantify the state of, and change in, aboveground biomass and forest height (which correlates well with biomass) along with other forest biophysical characteristics. It is important to generate large-scale (e.g., state, continental, and global), moderate-resolution (e.g., a few hectares down to sub-hectare) products of forest aboveground biomass and forest height. Forest height is not only a carbon storage metric that correlates well with biomass but also a critical variable identified by international forest monitoring efforts such as Global Observation for Forest Cover and Land Dynamics (GOFC-GOLD), the Global Forest Observations Initiative (GFOI), and programs such as the United Nations' Reducing Emissions from Deforestation and Forest Degradation (REDD+) programme. This information can also support efforts aimed at quantifying biodiversity, particularly given the rapid declines and losses of many plant and animal species worldwide.

To address this scientific goal, the remote sensing community has been working towards combining multi-sensor measurements, such as Synthetic Aperture Radar (SAR) and lidar. For example, these include JAXA's ALOS/ALOS-2/ALOS-4 (single L-band SAR) and MOLI (lidar) missions, NASA's NISAR (single L-band SAR) and GEDI (lidar), DLR's TanDEM-X (twin X-band SAR), ESA's BIOMASS (single P-band SAR), as well as China's LT-1 (twin L-band SAR) and TECIS (lidar).

SAR has all-weather, day-and-night observation capability. In a fusion observing scenario, the high spatial and temporal resolution of radar would be combined with the high confidence of the vertical structure, yet sparser coverage, measured by lidar and/or passive optical sensors. Radar (and passive optical) missions have complete spatial coverage with good spatial resolution; however, the interpretation of radar data is more difficult, leading to moderate accuracy in measuring the vertical structure. In contrast, lidar missions have sparse spatial coverage with small footprint size, while lidar data interpretation is much easier than for radar, resulting in much better vertical confidence. Therefore, this session will demonstrate a few novel ways to combine the complete spatial coverage of radar and the precise vertical measurements of lidar so that large-scale (potentially global-scale) forest height/biomass maps can be generated. In particular, it is desired to develop novel scientific algorithms for determining the vegetation vertical structure and forest aboveground biomass/height through the fusion of multiple spaceborne SAR data, including Interferometric SAR (InSAR), Polarimetric InSAR (PolInSAR) and Tomographic SAR (TomoSAR), as well as lidar and/or optical data.

The large-scale forest height mosaic product can be used to create aboveground biomass maps, which will account for terrestrial carbon storage and support modeling of carbon cycle dynamics. This information is useful not only for addressing climate change related scientific issues, but also for answering questions about the missing sink of terrestrial carbon storage. It will also help developing countries to better understand and quantify carbon emissions and to support their decision making regarding the United Nations' REDD+ programme.
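A hedged sketch of one simple fusion strategy of the kind discussed above: a regression model is trained to map wall-to-wall SAR/InSAR features to sparse lidar (GEDI-like) canopy heights at the footprint locations and is then used to predict height everywhere the SAR features exist; the features, footprints, and model choice are illustrative assumptions.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(1)

# Wall-to-wall SAR/InSAR feature grid: [HV backscatter, InSAR coherence, slope]
n_pixels = 50_000
sar_features = np.column_stack([
    rng.normal(-12, 2, n_pixels),      # HV backscatter (dB)
    rng.uniform(0.2, 0.9, n_pixels),   # interferometric coherence
    rng.uniform(0, 30, n_pixels),      # terrain slope (degrees)
])

# Sparse lidar footprints: a small random subset with "measured" heights
footprints = rng.choice(n_pixels, size=1500, replace=False)
true_height = 40 + 1.5 * sar_features[:, 0] - 20 * sar_features[:, 1]  # toy relation
lidar_height = true_height[footprints] + rng.normal(0, 2, footprints.size)

# Fit on the footprints only, then predict a wall-to-wall height map
model = GradientBoostingRegressor()
model.fit(sar_features[footprints], lidar_height)
height_map = np.clip(model.predict(sar_features), 0, None)  # heights cannot be negative
print("Predicted height range (m):",
      round(height_map.min(), 1), "-", round(height_map.max(), 1))
```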
Show/Hide Description
CCS.58: Mapping global croplands for food and water security in the twenty-first century using big-data analytics, machine learning and cloud computing
L.5: Land Applications — Agriculture
World population has increased from a meagre 4 million 12,000 years ago to 8 billion in 2022 (a 2000-fold increase) and will reach 11 billion by 2100. How did the world feed itself, especially during the rapid rise in population from 2.5 billion at the end of the Second World War to over 8 billion at present? How will the world feed itself in the future, especially considering climate variability/change, pandemics, and wars?

This community contributed session (CCS) will bring together scientists involved in global agricultural cropland mapping at the highest available resolution (30 m or better) using Earth Observation (EO) data in support of the world's food and water security analysis. The session will focus on the paradigm shift in producing global cropland products through petabyte-scale big-data analytics of multiple satellite sensors, machine learning/deep learning (ML/DL), artificial intelligence (AI), and cloud computing on platforms such as Google Earth Engine or Amazon Web Services (AWS). The session will present and discuss various global cropland products, such as cropland extent, watering method (irrigated or rainfed), cropping intensity, crop types, and crop water productivity, produced using EO data at the highest available resolution.
Show/Hide Description
CCS.59: Optimising resources use efficiency in agricultural systems
L.5: Land Applications — Agriculture
Nitrogen (N) and water are the main limiting factors in agricultural soils and are therefore frequently applied to optimize crop performance. However, overfertilization is a common practice, and only about half of the N applied is assimilated by the crops.
Remote sensing imagery can reduce the environmental impact of agricultural practices by detecting crop N and water status for site-specific fertilization and irrigation. Remote sensing techniques can also be used for early prediction of crop production in order to plan the harvest and to ensure food security. Spectral information based on canopy reflectance is commonly used for assessing different crop parameters, depending on the spectral region used. However, the interaction between N and water status may produce confounding effects in the acquired spectral reflectance, making it difficult to separate crop deficiencies. Other sensors, such as thermal or radar, can provide complementary information related to crop parameters.
In addition, recent techniques such as artificial intelligence can enhance the accuracy of remote sensing retrievals.
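To illustrate the reflectance-based assessment mentioned above, the following minimal sketch computes NDVI and the red-edge index NDRE, which is often used as a proxy for canopy nitrogen status, from canopy reflectance values; the band choices and reflectance numbers are illustrative, not a recommended protocol.

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index, often linked to biomass and vigour."""
    return (nir - red) / (nir + red)

def ndre(nir, red_edge):
    """Normalized Difference Red Edge index, often used as a canopy N proxy."""
    return (nir - red_edge) / (nir + red_edge)

# Illustrative canopy reflectances for a well-fertilized and an N-limited plot
nir = np.array([0.45, 0.40])
red = np.array([0.05, 0.08])
red_edge = np.array([0.18, 0.25])

print("NDVI:", ndvi(nir, red).round(3))
print("NDRE:", ndre(nir, red_edge).round(3))
```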
Show/Hide Description
CCS.60: Synergistic use of Remote Sensing, IoT, and AI in Agriculture
L.5: Land Applications — Agriculture
Agriculture around the world needs attention, and more so in developing and under-developed nations. Current agricultural practices are prone to demand-supply imbalance: unplanned and untapped conventional cropping practices are leading to an abundance of one commodity and a scarcity of another. This becomes more severe under increased population, disasters, and urbanization/industrialization events affecting the produce.
A sustainable and generalized framework is sought to address the need of the hour.
Articles are invited on the synergistic use of the three most promising technologies today, i.e. remote sensing (RS), artificial intelligence (AI), and the internet of things (IoT), for various agriculture applications. Key applications of the synergistic use of RS, AI, and IoT technologies include:
•	Monitoring and mapping cropping patterns/practices via multi-modality, synced (IoT—RS) data: synced IoT-enabled sensor and RS imagery data are fruitful in the precise monitoring of cropping practices/patterns over time. The synced data may provide information critical to optimizing cropping patterns.
•	IoT-AI driven efficient early crop yield estimations/predictions with RS imagery: Time series RS imagery and IoT sensor data pushed to an AI-powered early yield prediction system should help efficient management and distribution.
•	Climate change impact assessment on seasonal crop practices with time-series RS imagery and deep learning.
•	Abiotic stress mapping such as floods, droughts, and extremes in croplands via RS and IoT-enabled sensors.
•	Mapping biotic stress in crops via high-resolution RS imagery and complementary in-situ data.
•	RS, IoT, and AI-powered decision support systems for precision agriculture and focused crop practices/cropping patterns
•	Short/long term policy building via historical RS data and records.
•	Environmental impact assessment of recent agriculture practices in developing/underdeveloped nations
Show/Hide Description
CCS.61: Utility of Remote Sensing Tools for Water Management of Woody Perennial Crops in a Changing Climate
L.5: Land Applications — Agriculture
Water-limited regions across the globe are facing serious water shortages due to drought and overdraft of major aquifers. Regulated deficit irrigation informed by monitoring of actual water use is key to improving water use efficiency, conserving water, and achieving sustainable production, while remaining economically viable. Woody perennials occupy over 50 M hectares worldwide and have a significant impact on agricultural water use and carbon storage. Perennial crops in California alone cover over 1.2 M hectares, largely irrigated, supplying two-thirds of the country's fruits and nuts. Moreover, California produces 80% of the world's almonds, makes over 80% of all U.S. wine, and is the world's 4th leading wine producer. It is therefore imperative to develop remote sensing tools, as the only viable means, for improving water use efficiency from field to regional scales, not only for California but for water-limited regions across the globe that face severe water shortages due to extended drought and where most woody perennial nut and fruit crops are grown. This session requests contributions involving the development of remote sensing tools for estimating actual crop water use and stress that can be used to improve irrigation scheduling, water use efficiency and conservation of perennial crops.
Show/Hide Description
CCS.62: InSAR-based applications for the resilience of urban areas and infrastructures to natural and anthropogenic hazards
L.6: Land Applications — Urban and Built Environment
Three decades after the first uses of SAR images (from ERS-1), SAR interferometry is an effective tool for Earth surface monitoring thanks to the wide availability of SAR satellite missions and advanced processing techniques. Multitemporal SAR Interferometry (MT-InSAR) techniques are nowadays reliable means to detect and measure the effects of both natural phenomena (such as earthquakes, volcanic activity, landslides, etc.) and anthropogenic activities (such as exploitation of groundwater, hydrocarbons, gas storage, etc.) on urban areas and critical infrastructures.
The number of applications of InSAR techniques has been constantly increasing: in the first decade mostly standard InSAR techniques were implemented, i.e. pre- and post-event image pairs; in the second decade the main focus was on monitoring phenomena such as the seismic cycle, volcanic inflation-deflation, urban subsidence, etc.; finally, in the third decade MT-InSAR has become a key instrument in Earth sciences for measuring and monitoring the impact of natural and anthropogenically induced deformations on urban areas and infrastructures.
The session is open to applications of InSAR and MT-InSAR in urban areas with a special focus on monitoring the response of infrastructures, including, among others: transport infrastructures such as railways, airports, road networks and their critical components such as bridges and viaducts, and waterways; potable and sewage water networks as well as drainage networks; electric power transmission and distribution systems; gas and oil transmission and distribution systems; and telecommunication networks, when subjected to natural and anthropogenic hazards. InSAR and MT-InSAR data can in fact provide scientists and decision makers with data and information useful to inform pre-event decision-making processes on mitigation actions and strategies and to effectively inform emergency management and response planning in post-disaster circumstances. Worldwide case studies and applications based on the synergic use of SAR remote sensing, post-processing analysis, and in situ data or other sensor integrations devoted to enhancing the resilience of critical infrastructure are warmly welcome.
Show/Hide Description
CCS.63: Remote sensing data applied to mineral deposits: a new era for critical raw material deposits studies
L.7: Land Applications — Topography, Geology and Geomorphology
Satellite-based remote sensing has played an important role in the early stages of mineral exploration, and its present contribution is key for a sustainable, low-impact energy transition. In the past, distinct product types and numerous image processing algorithms have allowed targeting of exploration areas worldwide. After the failure of ASTER's shortwave infrared (SWIR) module, new approaches have been developed, with commercial satellites such as WorldView-3 presenting a better compromise between spatial and spectral resolution. Despite lacking thermal bands (fundamental for mineral mapping of quartz and other silicates), WorldView-3 presents a SWIR spectral resolution similar to that of ASTER.
Recently, hyperspectral data have been increasingly used in mineral deposit studies, either in the exploration phase (e.g. using drone-borne sensors) or even during the exploitation phase with hyperspectral imaging of drill cores (core scanners). Furthermore, a new age of satellite hyperspectral imaging is starting with the very recent EnMAP and PRISMA data. Likewise, the Hyperspectral Imager Suite (HISUI) onboard the International Space Station is another source of hyperspectral data. This provides space for new applications and contributions to the state of the art.
Simultaneously, we have observed increased use of remote sensing data for non-traditional geological deposits such as diamond, bauxite, evaporite minerals, lithium, and rare earth elements (REE), driven by the current paradigm shift towards global decarbonization and the related technological advances that create a higher demand for critical raw materials. Consequently, there is a growing integration of geological/geophysical and remote sensing data, often using non-parametric methods such as machine and deep learning algorithms.
Therefore, in this community contributed session, we are looking for innovative remote sensing studies making use of new remote sensing data and/or machine and deep learning algorithms for non-traditional mineral deposits, and we welcome works focused on data integration.
Show/Hide Description
CCS.64: NASA Soil Moisture Active Passive Mission Observations and Results
L.8: Land Applications — Soils and Soil Moisture
The National Aeronautics and Space Administration’s (NASA) Soil Moisture Active Passive (SMAP) mission, the first Earth Science Decadal Survey mission, was launched January 31, 2015 to provide high-resolution, frequent-revisit global mapping of soil moisture and freeze/thaw state. The primary science goal of SMAP is to provide new perspectives on how the three fundamental cycles of the Earth system, the water, energy and carbon cycles, are linked together over land. Soil moisture is the key variable that links the three cycles and makes their co-variations synchronous in time. 
SMAP was designed to include L-band radar and radiometer measurements sharing a rotating 6-meter mesh reflector antenna. The instruments operate onboard the SMAP spacecraft in a 685-km Sun-synchronous near-polar orbit, viewing the surface at a constant 40-degree incidence angle with a 1000-km swath width for a global revisit in 2-3 days. The radiometer has been operating since April 2015 with no issues. The radar operated from April to July 7, 2015. Since 2017, the European Union's Copernicus Sentinel-1 Synthetic Aperture Radar data has been used as a replacement for the SMAP radar to disaggregate SMAP radiometer data to 3 km spatial resolution. In addition, the SMAP radar receivers have been repurposed to record the reflected signals from Global Navigation Satellite System (GNSS) satellites (SMAP-R). The SMAP mission completed its prime mission operation in June 2018 and has been extended by NASA for operation through 2023.

In this special session, the status of the SMAP mission, plans for extension through 2026, highlights of SMAP science results, and new science algorithms will be presented. The SMAP science data product suite of geophysical parameters includes estimates of surface (top 5 cm) and root-zone (down to 1-m depth) soil moisture, net ecosystem exchange (NEE), and classification of the predominant frozen/non-frozen state of the landscape. The radiometer has an advanced capability to detect Radio Frequency Interference (RFI), and the radiometer hardware and processing software are also designed to mitigate some RFI contamination. As a result, the land coverage and accuracy of the surface soil moisture retrievals are greatly enhanced. SMAP data products have several key practical applications that affect society through applied science, and key results will be presented. The invited papers will detail global analysis of SMAP radar data and SMAP-reflectometry data, which will be critical to advance the synergistic use of NASA-ISRO Synthetic Aperture Radar and GNSS-R datasets. The algorithm improvements include an updated waterbody map, usage of data quality flags, soil moisture and vegetation optical depth for temperate forests, and estimation of seasonal freeze/thaw dynamics.
Show/Hide Description
CCS.65: Characterizing wetland vegetation structure, biomass, and change with SAR, LIDAR, & multi/hyperspectral fusion
L.9: Land Applications — Wetlands
Wetland ecosystem function and composition are closely linked to vegetation structure - the vertical and horizontal distribution and heterogeneity of plants and their total aboveground volume or mass (biomass). Vegetation structural measurements (e.g., height) and biomass can therefore be used to assess and monitor important ecosystem properties like biodiversity and richness, productivity, carbon and nutrient stocks and fluxes, which are critical for assessing the impact of changing climatic conditions and human disturbance on these sensitive and at-risk ecosystems. Following several decades of research, remote sensing based approaches, while typically introducing greater uncertainties than in situ measurements and destructive sampling, have proven effective for resolving spatial uncertainties. However, most vegetation structure and biomass research has focused on forests, leaving many questions about the transferability of methods to wetlands, especially in face of several additional challenges, including a lack of training data, background reflectance of dead biomass, water, and soil, as well as land cover heterogeneity. During this session we will examine current trends and best practices for estimating vegetation structure and biomass in wetlands, with a focus on multi-sensor fusion of SAR, LiDAR and multi/hyperspectral optical data, as well as the expected utility of data from current and future space missions.
Show/Hide Description
CCS.66: Earth Observations for data-driven strategies to improve prediction of floods and their impacts
L.10: Land Applications — Inland Waters
This session focuses on the use of remote sensing data for calibration and data assimilation for an improved description of floods and their impacts.
As flooding accounts for the majority of all global weather-related hazards, disaster risk reduction for hydrometeorological hazards receives significant attention worldwide. Several international initiatives have thus joined efforts in monitoring and modelling river hydrodynamics, in order to provide Decision Support System services with accurate flood forecasts. Currently, decision making and risk assessment both rely on the use of Earth observations and hydrodynamic models. River hydrodynamic models, which solve the shallow water equations, are used to predict river water surface elevation and discharge and therefore allow the assessment of flood risks.
 
Flood simulation and forecast capability have been greatly improved thanks to data-driven strategies that combine heterogeneous and numerous observations with numerical models. As remotely sensed data products have experienced unprecedented improvements in their quality and spatio-temporal coverage, their use now allows a fine description of model inputs such as vegetation classes or high-resolution river channel and floodplain geometry. The combination of multi-source and multi-scale remote sensing observations now provides an informed and high-quality description of the hydraulic state and supports a more reliable forecast. As of today, Earth Observation from space provides information on both the geophysical characteristics of floods, such as their extents and water surface elevation, and non-geophysical socioeconomic data that enable an improved understanding of vulnerability and damage assessment, thus enhancing our capability to reduce exposure.
 
Data assimilation optimally combines an uncertain model trajectory and uncertain observations. It can be seen as a physics-informed interpolator in time and space, providing improved model parameters, forcing and state variables, further used to issue a more reliable forecast. Active and passive remote sensing satellites offer the potential to address knowledge gaps regarding river and water surface geometry and dynamics. Multi-satellite remote sensing data now overcome the limits of high-resolution, high-frequency and large-coverage description of river dynamics for extreme events, providing observations for re-analysis and on a near real-time basis for monitoring and prediction, particularly in data-sparse regions.
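A minimal sketch of the variance-weighted analysis step underlying the data assimilation described above, applied to a single river water surface elevation: the model forecast and a satellite observation are combined according to their error variances (the classic scalar Kalman/BLUE update); all numbers are illustrative.

```python
def assimilate_scalar(forecast, obs, var_forecast, var_obs):
    """Combine a model forecast and an observation of the same quantity,
    weighting each by the inverse of its error variance (scalar BLUE/Kalman update)."""
    gain = var_forecast / (var_forecast + var_obs)   # Kalman gain
    analysis = forecast + gain * (obs - forecast)    # updated state
    var_analysis = (1.0 - gain) * var_forecast       # reduced uncertainty
    return analysis, var_analysis

# Illustrative example: river water surface elevation (m above datum)
wse_model = 104.2      # hydrodynamic model forecast
wse_altimetry = 104.8  # satellite altimetry observation
analysis, var_a = assimilate_scalar(wse_model, wse_altimetry,
                                    var_forecast=0.25, var_obs=0.09)
print(f"Analysis WSE: {analysis:.2f} m (error variance {var_a:.3f} m^2)")
```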
 
We are soliciting abstracts that leverage the application of remote sensing data such as very-high-resolution images (Pléiades, Planet, TanDEM-X…), high-resolution images (Landsat, the Copernicus Sentinel programme) and altimetry data (Jason, Sentinel-3, Sentinel-6 and the upcoming SWOT mission) to inform hydrodynamic models, providing input data and/or assimilated data to improve flood predictions and estimate river discharge. The session focuses on, but is not limited to, exploring the use of multi-mission remote sensing data to 1) describe the topography of the river and floodplain, 2) derive vegetation-related friction coefficients, 3) define the interface between wet and dry areas for flood extents, 4) combine the use of EO data and numerical models regarding calibration, data assimilation, and uncertainty reduction, 5) describe water levels for small to medium rivers at high resolution, and 6) assess socio-economic impacts of floods on civil security as well as on public/private assets.
Show/Hide Description
CCS.67: Hydrological Information Acquisition and Analysis Based on Remote Sensing
L.10: Land Applications — Inland Waters
The theme of " Hydrological Information Acquisition and Analysis Based on Remote Sensing " includes but is not limited to the application of remote sensing technology in precipitation, water identification, flow measurement, flood monitoring, soil and water conservation, etc.
The development of distributed hydrological models has put forward higher requirements for the scope and accuracy of hydrological information. The lack of enough high-precision data to quantitatively describe hydrological processes has become the main problem limiting the development of hydrological models. The future development of hydrology largely depends on whether sufficient data needed for model improvement and calibration can be obtained. Due to the advantages of remote sensing technology, such as a large observation range, real-time capabilities, and comprehensive information acquisition, it has become an important approach to obtaining hydrological elements with remote sensing information. With the development of remote sensing technology, multi-platform, multi-temporal and high-resolution remote sensing data constantly appear, and its combination with hydrology and water resources is becoming closer and closer. Remote sensing information sources can be transported and transferred with remote sensing technology, and precipitation space, precipitation distribution, and other parameters can be obtained accurately so that the specific data of precipitation in different areas will be collected. In remote sensing images, water bodies can be easily interpreted, and water body information such as river basin area can be extracted, which provides a reliable basis for water management. The application of remote sensing technology in hydrological monitoring, such as flood warnings, can carry out real-time dynamic detection of flood disasters and provide a reliable basis for disaster control. Remote sensing is also an important means to study water and soil work. Monitoring soil erosion dynamics through remote sensing can effectively prevent the occurrence of soil and water conservation.
The research of remote sensing technology has been well developed and applied in practice. It has been proved that it can improve the accuracy of the hydrological model, and has a positive effect on water resources research. The use of remote sensing technology has expanded the methods of hydrology and water resources research and saved the factors such as manpower, material resources, time, and cost in the research process. Accurate hydrological information can provide a strong scientific basis for the development of hydrological tasks. In conclusion, research on hydrological information acquisition and analysis based on remote sensing can further advance the application of remote sensing technology in water resources management and promote interdisciplinary and system integration research.
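As a hedged illustration of the water body extraction mentioned above, the sketch below computes the normalized difference water index (NDWI, green versus near-infrared) and applies a simple threshold to flag water pixels; the reflectance arrays and the zero threshold are illustrative assumptions that would be tuned per scene.

```python
import numpy as np

def ndwi(green, nir):
    """McFeeters-style NDWI: positive values typically indicate open water."""
    return (green - nir) / (green + nir)

# Illustrative 3 x 3 reflectance patches (e.g., from a multispectral scene)
green = np.array([[0.10, 0.12, 0.30],
                  [0.11, 0.28, 0.31],
                  [0.09, 0.10, 0.29]])
nir = np.array([[0.30, 0.32, 0.05],
                [0.31, 0.06, 0.04],
                [0.33, 0.30, 0.05]])

index = ndwi(green, nir)
water_mask = index > 0          # simple threshold; would be tuned per scene
print("Water pixels:", int(water_mask.sum()), "of", water_mask.size)
```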
Show/Hide Description
CCS.68: Artificial intelligence for radar and satellite remote sensing of precipitation
M.1: Atmosphere Applications — Precipitation and Clouds
Radars and satellites have been the most important remote sensing instruments for observing clouds and precipitation, serving as cornerstones of applications ranging from severe weather warnings to long-term climate monitoring around the world. Both radar and satellite observations are rich in information, but traditional ways of utilizing them have only been able to extract part of that information due to the nature of the analytical tools. With the deployment of new satellite observation channels as well as the dual-polarization radar technique, the measurements have become multi-dimensional, and it is more challenging to use conventional analytical methodologies to fully resolve the hydrometeorological information. In recent years, artificial intelligence (AI) has been gaining increasing interest in scientific applications across all areas of geoscience and remote sensing, including weather radars and satellites. In fact, AI was used in radar-based precipitation retrievals long before it became popular in the mainstream, such as the use of neural networks for rainfall estimation. It has been proven that AI is effective in extracting information from large amounts of multi-dimensional radar and/or satellite data in practical environments. In this session, key elements in radar and satellite remote sensing of precipitation are highlighted through the lens of AI, including multi-dimensional and multi-parameter data interpretation and processing; precipitation identification, classification, and quantification; and multi-sensor precipitation data fusion. These elements have engaged very different types of AI technologies. The challenges of applying AI and deep learning techniques in these applications will be discussed, including i) the balance of data representation in model training; ii) knowledge transfer from one task to another; iii) interpretability of the underlying physics in the AI models; and iv) the balance between model complexity, performance, and computational cost.
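As a hedged sketch of the neural-network rainfall estimation mentioned above, the example below fits a small multilayer perceptron mapping dual-polarization radar variables (reflectivity Zh, differential reflectivity Zdr, specific differential phase Kdp) to rain rate; the training data are synthetic and the architecture is an arbitrary illustrative choice.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)

# Synthetic dual-pol radar samples: Zh (dBZ), Zdr (dB), Kdp (deg/km)
n = 5000
zh = rng.uniform(20, 55, n)
zdr = rng.uniform(0.1, 3.5, n)
kdp = rng.uniform(0.0, 4.0, n)
X = np.column_stack([zh, zdr, kdp])

# Toy "truth": a power-law-like rain rate plus noise (illustrative only)
rain = 0.02 * 10 ** (0.06 * zh) * (1 + 0.5 * kdp) / (1 + 0.3 * zdr)
rain += rng.normal(0, 0.5, n)

X_train, X_test, y_train, y_test = train_test_split(X, rain, test_size=0.3, random_state=0)
mlp = make_pipeline(StandardScaler(),
                    MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0))
mlp.fit(X_train, y_train)
print("R^2 on held-out samples:", round(mlp.score(X_test, y_test), 3))
```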
Show/Hide Description
CCS.69: Global Precipitation Mission
M.1: Atmosphere Applications — Precipitation and Clouds
GPM is an international collaboration between NASA, JAXA and other international agencies. The GPM mission represents significant advancements in engineering for the satellite's active radar and passive radiometer instruments. Global precipitation observations at fine temporal and spatial scales provided by GPM lead to scientific advancements and societal benefits. This session will provide details of GPM engineering and instruments (first half of the session) and science and retrieval algorithms (second half of the session), with emphasis on extreme events and applications. The GPM mission has touched a large community of scientists, engineers and users, and this special session will represent a broad sample from this group.
Show/Hide Description
CCS.70: Global precipitation Mission : Applications
M.1: Atmosphere Applications — Precipitation and Clouds
GPM is an international collaboration between NASA, JAXA and other international agencies. The GPM mission represents significant advancements in engineering for the satellite's active radar and passive radiometer instruments. Global precipitation observations at fine temporal and spatial scales provided by GPM lead to scientific advancements and societal benefits. This session will provide details of GPM engineering and instruments (first half of the session) and science and retrieval algorithms (second half of the session), with emphasis on extreme weather events and applications.
Show/Hide Description
CCS.71: Land Data Assimilation of Remote Sensing Terrestrial Data
M.2: Atmosphere Applications — Numerical Weather Prediction and Data Assimilation
Land surface processes play an important role in the earth system because all the physical, biochemical, and ecological processes occurring in the soil, vegetation, and hydrosphere influence the mass and energy exchanges during land–atmosphere interactions. Data assimilation (DA), through optimally combining both dynamical and physical mechanisms with real-time observations, can effectively reduce the estimation uncertainties caused by spatially and temporally sparse observations and poor observed data accuracy.

In recent decades, studies of land data assimilation have become very active, although this topic was proposed later than the assimilation of atmospheric observations. Land data assimilation can incorporate both in situ observations and remotely sensed data, such as satellite observations of soil moisture, snow water equivalent (SWE), and land surface temperature, to constrain the physical parametrization and initialization of the land surface state. Notably, globally satellite-derived data provide the data basis for land DA.

Recent studies have shown that assimilating observed or remotely sensed data into land surface models to constrain the vegetation characteristics can improve the simulation ability for terrestrial flux exchanges. Most studies focusing on assimilation in terrestrial systems have tended to add multiple phenological observations to constrain and predict biome variables and further improve model performance. Joint assimilation of surface incident solar radiation, soil moisture, and vegetation dynamics (LAI) into land surface models or crop models is of great importance since it can improve the model results for national food policy and security assessments.

This session focuses on successful applications of remotely sensed terrestrial data (including vegetation parameters, land surface temperature, soil moisture, snow, etc.) in land surface models, coupled atmospheric models, or even earth system forecasting systems, to improve model performance. Contributions are welcome on advances in land data assimilation of remote sensing terrestrial data into land models (or coupled atmospheric models), and even earth system models. Also welcome are studies comparing variational algorithms (such as 3DVAR and 4DVAR), sequential algorithms (such as the ensemble Kalman filter (EnKF), the Ensemble Adjustment Kalman Filter (EAKF), the particle filter (PF), etc.) and their combinations (such as 4DEnKF, DrEnKF, etc.), conducted to identify optimal assimilation algorithms, as well as studies that identify the key physical and biochemical processes and intermediate variables in land-atmosphere interaction and reveal the potential predictability of coupled climate system models.
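As a hedged illustration of the sequential algorithms mentioned above, the following minimal sketch implements a stochastic ensemble Kalman filter (EnKF) analysis step for a single directly observed variable, e.g., assimilating a satellite surface soil moisture retrieval into an ensemble of land surface model states; the ensemble size, error values, and identity observation operator are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)

def enkf_update(ensemble, obs, obs_error_std):
    """Stochastic EnKF analysis for a scalar state observed directly (H = identity).
    ensemble: 1-D array of forecast states, one per member."""
    forecast_var = ensemble.var(ddof=1)
    gain = forecast_var / (forecast_var + obs_error_std ** 2)   # Kalman gain
    # Perturbed observations: each member assimilates a noisy copy of the obs
    perturbed = obs + rng.normal(0.0, obs_error_std, ensemble.size)
    return ensemble + gain * (perturbed - ensemble)

# Forecast ensemble of surface soil moisture (m^3/m^3) from a land surface model
prior = rng.normal(0.22, 0.04, size=40)
satellite_sm = 0.28                     # illustrative satellite retrieval
posterior = enkf_update(prior, satellite_sm, obs_error_std=0.03)

print(f"Prior mean {prior.mean():.3f}, posterior mean {posterior.mean():.3f}")
print(f"Prior spread {prior.std(ddof=1):.3f}, posterior spread {posterior.std(ddof=1):.3f}")
```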

This session invites researchers to share their novel data assimilation studies using remote sensing terrestrial data for improving model predictions, as well as for gaining insight into the causes and mechanisms of atmosphere-land interaction.
Show/Hide Description
CCS.72: Hyperspectral Microwave Sounder Science and Technology
M.3: Atmosphere Applications — Atmospheric Sounding
Hyperspectral microwave remote sensing has been of interest for nearly two decades, with evolving measurement theory, technical approaches and instrument concepts. Recently, available technology solutions have accelerated interest in hyperspectral measurements. Concurrently, scientific interest in the utility of hyperspectral microwave sounding has arisen. The U.S. Earth Science Decadal Survey recommended a science and technology incubation program for observing Earth's atmospheric planetary boundary layer (PBL) from space to advance scientific research. Operational agencies are investigating the potential of hyperspectral microwave sounding to innovate in operational weather forecasting. Numerical studies of hyperspectral microwave sounding have shown potential for improving performance in the mid-troposphere and PBL. In the PBL, information content studies suggest that additional vertical degrees of freedom are available to hyperspectral sensors versus conventional banded microwave sounders. Hyperspectral microwave sounding presents a number of interesting challenges in technology, algorithms, and modeling. Challenges include optimal spectral coverage, wideband receiver and spectrometer architecture, radio-frequency environment resilience, sounder calibration, forward radiative transfer modeling, retrieval techniques, and data assimilation approaches. Talks will focus on the scientific basis and emerging measurement capabilities.
Show/Hide Description
CCS.73: Planetary Boundary Layer from Space
M.3: Atmosphere Applications — Atmospheric Sounding
The planetary boundary layer (PBL) plays an essential role in weather, air quality, and climate, and it is clear that improved PBL models and parameterizations would lead to significantly better weather and climate forecasts. In the recent 2017 National Academies’ Earth Science Decadal Survey, the PBL is recommended as an incubation targeted observable. In 2021, the NASA PBL Incubation Study Team published a report highlighting the need for a global PBL observing system with a PBL space mission at its core. To solve several of the key weather and climate PBL challenges, there is an urgent need for high-resolution and more accurate global observations of PBL water vapor and temperature structure, and PBL height. These observations are not yet available from space but are within our grasp in the next 10 years. This can be achieved by investing in optimal combinations of different approaches and technologies. In this session, the science and technology aspects of new PBL space observations will be presented and discussed. The different presentations will help provide the optimal rationale to design future PBL observing systems. In particular, different observational approaches and potential combinations of techniques to optimally depict the 3D structure of temperature and water vapor in the PBL will be presented and discussed.
Show/Hide Description
CCS.74: Satellite Atmospheric Sounding: Progress and Innovation
M.3: Atmosphere Applications — Atmospheric Sounding
Satellite atmospheric sounding is indispensable in weather, climate, and environmental science. For weather forecasting, sounding measurements from spaceborne sensors contribute over 60% of the reduction in forecast errors in terms of forecast sensitivity to observation impact. Compared to ground-based measurements, satellite sounders dominate in providing data in both volume and impact, with dense global coverage under clear and cloudy conditions. Monitoring Earth's environment and natural disasters such as hurricanes, heat waves, and wildfires also needs sounding-based information such as moisture and temperature as critical state variables. Satellite sounding has provided consistent and reliable records for studying climate change, such as global tropospheric warming. These areas are studied by many GRSS members from a range of universities, institutes, and government agencies.

The proposed session aims to showcase new progress and bring together scientists in the front line of studying atmospheric sounding, covering conventional sounders, emerging technologies, retrieval algorithms, and future missions. Sounding instruments onboard traditional large satellites such as those operated by NOAA, EUMETSAT and NASA have provided backbone data in support of weather and environmental services. Succeeding sounders, including new NOAA and EUMETSAT LEO satellites, are scheduled for future missions. In the meantime, innovative sensors and technologies are emerging. New CubeSat sounders such as NASA MicroMAS, TROPICS, and TEMPEST-D were launched and exhibited encouraging performance. Hyperspectral microwave sounders such as the microwave photonic instrument funded by NASA are under development. These new sensors are promising with novel designs and advanced electronics. As opposed to conventional LEO and GEO sounders, radio occultation is a different but essential method for sounding the atmosphere and improving weather forecasting. Aside from instrument advances, retrieval algorithms are evolving, which directly impact the scientific applications of sounding data. To accommodate new sounders, retrieval systems need to be adjusted and optimized. The progress of current and new retrieval algorithms will also be discussed in this session. In addition to retrievals, sounding data are assimilated into numerical weather prediction models, and topics on assimilating techniques are welcome.
Show/Hide Description
CCS.75: Advanced Characterization of Biomass Burning Smoke and Mineral Dust Aerosols
M.4: Atmosphere Applications — Aerosols and Atmospheric Chemistry
Satellite observations have been foundational to scientific understanding of global atmospheric aerosols. The past two decades of satellite observations have helped develop a good understanding of atmospheric Aerosol Optical Depth (AOD), characterizing total aerosol loading over both land and ocean. Understanding and predicting the radiative effects of aerosols, and their interactions with other components of the Earth climate system, requires more detailed knowledge of aerosol properties, including absorption, size distribution, and vertical profile. Of particular interest are the widely varying absorption/composition properties of biomass burning smoke and airborne mineral dust. These properties are also important to the science and applications centered on the human health impacts of fine and coarse mode aerosol particles. We invite contributions on advanced aerosol remote sensing utilizing any combination of novel multispectral, hyperspectral, multi-angular, polarization, or multi-temporal methods, from satellites or suborbital platforms.
Show/Hide Description
CCS.76: The role of remote sensing for understanding greenhouse emissions, global stocktake, and air pollution
M.4: Atmosphere Applications — Aerosols and Atmospheric Chemistry
Remote sensing plays an important role in understanding the state of the atmosphere. Satellite data often form the basis for assessing greenhouse gas and other emissions and air pollutants. This session will discuss current issues in global and regional atmospheric assessment. Ways in which the integration of in situ data, data generated by AI processes, and simulation data can improve atmospheric state assessment will be highlighted. This area has gained new momentum, not least with the global stocktake resulting from the Paris Agreement on global climate targets. Air pollution measurements have direct consequences in many aspects of our daily lives, whether through traffic restrictions or the planning of new infrastructure projects.
Show/Hide Description
CCS.77: 3rd IEEE GRSS - Van Allen Foundation Student Grand Challenge
O.1: Oceans — Ocean Biology (Color) and Water Quality
Currently, 8 million metric tons of plastic wind up in the oceans, and microplastics in different forms are present in almost all water systems in the world: streams, rivers, lakes, and oceans, and even in our blood. Detection of marine litter is an urgent task. This was the topic of the 3rd IEEE GRSS Student Grand Challenge, conducted on this occasion with the Van Allen Foundation of the Université de Montpellier.

In this invited session the four teams selected will be presenting their progress towards the detection and monitoring of plastic litter and/or microplastics using GNSS-R, hyperspectral imagery, and lidar fluorescence.

The session will then be completed with other presentations from the general submissions on this same topic.

Show/Hide Description
CCS.78: Remote Sensing of Optically Complex Waters
O.1: Oceans — Ocean Biology (Color) and Water Quality
Optically complex waters include coastal, inland and Arctic waters and have high environmental and economic importance. Monitoring water quality and primary productivity in these vulnerable environments using remote sensing techniques is of great interest. However, there are several challenges in remote sensing of optically complex waters. These include adjacency effects close to shore, high backscatter due to turbidity, absorption from colored dissolved organic matter, and eutrophication. Retrieval algorithms often fail when applied to these environments, especially if they are not locally adjusted.
Advances in sensor technology and algorithm development might contribute to a more accurate remote sensing of optically complex waters. 
The session aims to gather experts in the field to present their scientific results and discuss challenges and solutions with regard to monitoring optically complex waters. 
The session is associated primarily with the “O.1: Ocean Biology (Color) and Water Quality” “L.10: Inland Waters” and “S/I.13: Passive Optical Multi- and Hyperspectral Sensors and Calibration”, and secondarily with the “T/D: Data Analysis” and “T/A: AI and Big Data” IGARSS 2023 themes.
The proposed session would be important to geoscience and remote sensing because it highlights and focuses on an important field in remote sensing. It would likely attract scientists and participants working with remote sensing of optically complex waters and hence further increase the scope of the symposium.
Show/Hide Description
CCS.79: Remote Sensing of Variables and Mesoscale Processes Linking the Ocean and Atmosphere
O.2: Oceans — Ocean Surface Winds and Currents
Recent studies have shown very strong coupling between the ocean and atmosphere at the oceanic and atmospheric mesoscale (approximately 10 to 100 km length scales). The impacts of these scales on ocean processes, weather, and climate have become a topic of wide interest. Recent observations qualitatively confirm some expectations from models, but also demonstrate large departures between models and satellite observations. For example, heat and moisture fluxes from the ocean influence the strength of atmospheric fronts and cyclones, and the ability of storms to make use of this information is strongly linked to the curl of the winds (vorticity), which is in turn linked to the curl of ocean surface currents. Similarly, vertical mixing in the ocean (which is a limiting factor for some gas fluxes) is enhanced by these heat fluxes, as surface stress depends on winds and currents. Interactions between winds and currents also influence the generation of ocean eddies (reducing the eddy kinetic energy) in a manner that causes less meandering and a more equatorward position of the western boundary-current extensions that move across the mid-latitude oceans. These currents supply heat to storms as well as heat and moisture to countries downwind of these currents. This session focuses on the remote sensing of mesoscale processes related to air/sea interaction, evidence of the impacts of these processes, and plans for missions to make these observations.
Show/Hide Description
CCS.80: SST applications and the Group for High Resolution Sea Surface Temperature (GHRSST)
O.3: Oceans — Ocean Temperature and Salinity
Sea surface temperature (SST) is a fundamental physical variable for understanding, quantifying and predicting complex interactions between the ocean and the atmosphere. Such processes determine how heat from the sun is redistributed across the global oceans, directly impacting large- and small-scale weather and climate patterns. The provision of daily maps of global SST for operational systems, climate modeling and the broader scientific community is now a mature and sustained service coordinated by the Group for High Resolution Sea Surface Temperature (GHRSST) and the CEOS SST Virtual Constellation (CEOS SST-VC). 

The Group for High Resolution Sea Surface Temperature (GHRSST) is an open international science group that promotes the application of satellites for monitoring sea surface temperature (SST) by enabling SST data producers, users and scientists to collaborate within an agreed framework of best practices. GHRSST provides a framework for SST data sharing, best practices for data processing and a forum for scientific dialog, bringing SST to the users. GHRSST is led by elected international experts: the GHRSST Science Team. The GHRSST Science Team co-ordinates the data production and related research. Continuous efforts to refine the GHRSST data management structures ensure a functional system that conforms to international directives and results in easy access and guidance for users.

Research and development continues to tackle problems such as instrument calibration, algorithm development, diurnal variability, derivation of high-quality skin and depth temperature, and at areas of specific interest such as the high latitudes and coastal areas. Applications of SST contribute to all the seven societal benefits, including Discovery; Ecosystem Health and Biodiversity; Climate Variability & Change; Water, Food, and Energy Security; Pollution and Human Health; Hazards and Maritime Safety; and the Blue Economy. The aim of this session is to foster the communication between the GHRSST and broader SST user communities with a wide range of topics including products, algorithms and applications.
Show/Hide Description
CCS.81: SAR Monitoring of Hazards on Marine Coastal Environments
O.4: Oceans — Coastal Zones
Coastal marine environments, being invaluable ecosystems and host to many species, are under increasing pressure caused by anthropogenic impacts such as, among others, growing economic use, coastline changes and recreational activities. A continuous monitoring of those environments is of key importance for the identification of natural and manmade hazards, for an understanding of oceanic and atmospheric coastal processes, and eventually for a sustainable development and use of those vulnerable areas. Here, Synthetic Aperture Radar (SAR), because of its high spatial resolution, along with its independence of day- and nighttime and its all-weather capabilities, is one sensor of choice.
This Community Contributed Session (CCS) will focus on the ways in which SAR sensors can be used for the surveillance of changing marine coastal environments, and how these sensors can detect and quantify processes and phenomena that are of high relevance for the local fauna and flora, for coastal residents and local authorities, and for a better quantification of hazards caused by global change. These processes and phenomena include, but are not limited to:
·	Coastline changes and coastal morphodynamics
·	Coastal run-off and marine pollution
·	Wind fields and storm events
·	Surface waves and currents
·	Target detection
The CCS is organized in two parts: Part 1 will focus on natural and man-made marine surface films; Part 2 will focus on general parameters that need to be monitored for sustainable coastal development, including oceanic (wave) and atmospheric (wind) parameters and coastal erosion. Several internationally renowned experts will contribute to this CCS and will provide a broad overview of SAR applications that are already used, or have good potential to be used, for the surveillance of changing marine environments worldwide. Although the focus of this CCS is on the application of SAR, the CCS organizers encourage submissions of non-SAR presentations that complement SAR and lead to a more comprehensive solution addressing multiple end-user needs for sustainable development of the marine coastal zone.
Show/Hide Description
CCS.82: Underwater remote sensing based on sonar technique
O.5: Oceans — Ocean Altimetry
Underwater remote sensing is also part of geoscience and remote sensing. Compared to other technologies, sonar systems can provide much higher resolution images in water. Owing to this high resolution, sonar is a well-established remote sensing technique that has proven useful for a large number of applications, ranging from underwater archaeology to underwater mapping and reconnaissance in the field of underwater engineering. Navigation, autonomous underwater vehicles (AUVs) and electronics have achieved great progress in recent years, and thanks to these developments, sonar-based remote sensing has also been pushed into a new stage. Lightweight and compact sonars can be mounted on ever smaller, highly flexible platforms. In addition, current algorithms are computationally more efficient, and sonar images are further improved by various methods. Last but not least, sonar technology is preferred by underwater engineering users due to its high resolution.
Show/Hide Description
CCS.83: Recent Advances on Lunar Missions and Cis-lunar Space Protection
P.1: Remote Sensing of Planetary and other Celestial Bodies — Moon
With the renewed focus of NASA on returning to the Moon, programs from the United States, such as Artemis, have developed new uncrewed missions (Artemis I) that will help test all the components of NASA's Space Launch System (SLS), to make future deep space exploration possible before humans make the journey in 2024 and 2025 aboard Artemis II and Artemis III, respectively. China plans to launch two robotic sample-return missions, Chang'e-5 and Chang'e-6. Russia, continuing its Luna program, intends to land five probes on the Moon, Luna-25 to Luna-29, with Luna-27 being a collaboration with the European Space Agency (ESA). Both the Chinese and the Russian missions aim to advance the exploration of the far side of the Moon with the intention to build a fully robotic station there. With the planned increase in traffic in cis-lunar space, space debris will become a greater hazard. As missions end, there is currently no disposal process for spacecraft and their payloads. Therefore, it is crucial to develop and assess new remote sensing detection techniques for the characterization of space debris near the Moon, ensuring safety for missions and astronauts crossing or within cis-lunar space. With the growing number of missions to the Moon, remote sensing will also be useful in detecting and tracking lost or malfunctioning spacecraft, with the possibility of recovering such assets. Before humans make their way to the Moon, having this cis-lunar space protection through remote sensing is very important. Ground-based radar has been successful in providing orbital debris data in Low Earth Orbit (LEO), reducing risk to astronauts operating on the International Space Station (ISS). This protection needs to be extended to cis-lunar space. Remote sensing techniques for cis-lunar space include, but are not limited to: ground-based radar, bistatic ground-to-spacecraft radar, spacecraft-based radars, ground-based optical sensors, and spacecraft-based optical sensors, each with its advantages and disadvantages.
Show/Hide Description
CCS.84: Applications of very high resolution X-Band SAR data
S/M.1: Mission, Sensors and Calibration — Spaceborne SAR Missions
This session focuses on work by researchers who have used Capella Space synthetic aperture radar data.

In the year and a half since the start of operations, Capella Space has provided a number of researchers with data for a variety of studies. These studies have focused on topics such as volcanoes, oceanography, flooding, machine learning, data fusion, and change detection. This invited session will bring together these researchers to show the value that commercial very high-resolution SAR data brings to the geophysical remote sensing community, and to highlight the various innovative ways in which commercial SAR data can be used for geophysical research and to advance remote sensing data processing techniques. We believe that this session will be a great forum to foster discussion about how commercial small SAR satellites can augment other SAR systems, such as TerraSAR-X, NISAR, and RADARSAT-2 / RCM.

This session addresses many of the IGARSS 2023 themes, such as S/M.1: Spaceborne SAR Missions and S/M.7: New Space Missions, Earth-science-focused areas such as D/S.5: Risk and Disaster Management (hurricanes) and C.3: Sea Ice, as well as methodological areas such as T/D.14: Change Detection and Temporal Analysis.
Show/Hide Description
CCS.85: Coordination and cooperation of International Spaceborne SAR Missions
S/M.1: Mission, Sensors and Calibration — Spaceborne SAR Missions
International civil agencies operate a number of satellites that are generating petabytes of synthetic aperture radar data to observe Earth densely in space and time.

Over the last two decades, several operational civilian free-flying spaceborne SAR missions have been launched by Argentina, Canada, China, ESA, Germany, Italy, Japan, Russia, and commercial companies, and are widely used for a large spectrum of applications. They cover spectral ranges in the L-, C-, and X-bands. In the next decade, it is expected that more than a dozen missions will be operating at the same time, each one mainly driven by the needs of its agency.

Since each agency designs its missions for specific objectives on unique timelines, coordination of the observation plans of the constellation is generally not easy. However, many of these data sets are available to the science applications community either shortly after acquisition or retrospectively. At the same time, computational technologies are undergoing a revolution in speed, interconnectivity, software maturity and openness, and interoperability. Therefore, coordination across the international community of SAR data providers with respect to efficient data acquisition, planning, science algorithm development, data product standards, frameworks for science data processing methodologies, data requests, and processing resources should, in principle, be possible and would be highly advantageous to all parties. In recent years, a number of commercial initiatives have been started that aim to provide services based on SAR images with high geometric and temporal resolution through constellations of small SAR satellites.
To coordinate and discuss the above topics, two international SAR coordination working group meetings were successfully held at Caltech in Pasadena in May 2018 and at ESA ESRIN in Frascati in September 2022. 

In the proposed session, we will cover the outcome and plans of this coordination and illustrate the scientific and application benefits in a number of areas under bilateral and multilateral cooperation, as discussed in: (1) SAR applications and coordination activities, including present and future data (visibility and access, L0-L2), future imaging systems (goals, plans, challenges, and opportunities), and data exploration (Cal/Val, fusion, and assimilation, L3-L4); (2) thematic areas, including polarimetric and multi-frequency SAR applications (where polarimetric or multi-frequency backscatter intensity and/or polarimetric phase are the main measurements, with InSAR often useful but not the main driver) and interferometric SAR applications (where the interferometric phase is the main measurement); and (3) SAR program and mission coordination. 
Show/Hide Description
CCS.86: Small SAR constellation mission & applications in Korea
S/M.1: Mission, Sensors and Calibration — Spaceborne SAR Missions
South Korea is developing small SAR satellites that can monitor the surrounding Peninsula rapidly in all weather, day and night, and plans to launch a total of 40 SAR satellites, starting with the launch of two prototype satellites for verification in 2026. The X-band system offers three modes: Spotlight, Stripmap, and ScanSAR. The Stripmap mode aims to periodically monitor the entire area of interest with a resolution finer than 1 m and a revisit cycle of 30 minutes. In addition, the ScanSAR mode will monitor the coastal areas with wide-area observation.
Once such a SAR constellation is launched, a system for analyzing and processing a massive volume of images will be required. The Korea Aerospace Research Institute is developing a SAR application system so that users can access, analyze, and use the data conveniently in near real time. The SAR application system will be able to contribute to urgent situational awareness of the ocean around the Korean Peninsula and to mitigation of various disasters.
This session aims to introduce Korea’s first SAR satellite constellation mission and the SAR application system. After the introduction of the mission, SAR-based research in various fields will be presented. This session is expected to serve as a bridge between research based on the previous SAR mission, KOMPSAT-5, and the future SAR satellite constellation mission in Korea. It is also expected to be an opportunity to discuss data application, processing, and distribution policies for the new SAR constellation system.
Show/Hide Description
CCS.87: TanDEM-X: Update and Highlights of the 4D Mission Phase
S/M.1: Mission, Sensors and Calibration — Spaceborne SAR Missions
TanDEM-X (TerraSAR-X add-on for Digital Elevation Measurement) is a German satellite mission that has been operating successfully since 2010 and has opened a new era in spaceborne radar remote sensing. A single-pass SAR interferometer with adjustable baselines in the across-track and along-track directions is formed by adding a second (TDX), almost identical spacecraft to TerraSAR-X (TSX) and flying the two satellites in a closely controlled formation. TDX has SAR system parameters that are fully compatible with TSX, allowing not only independent operation from TSX in a mono-static mode, but also synchronized operation (e.g., in a bi-static mode). With typical across-track baselines of 200-600 m, DEMs with a spatial resolution of 12 m and a relative vertical accuracy of 2 m have been generated. The Helix concept provides a safe solution for the close formation flight by combining a vertical separation of the two satellites over the poles with adjustable horizontal baselines at the ascending/descending node crossings. From TanDEM-X, a global high-resolution digital elevation model with a vertical accuracy of 2 m and a spatial resolution of 12 m was derived and is available for science use. 
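As an editorial illustration of how the quoted 200-600 m baselines relate to height sensitivity, the height of ambiguity of a single-pass (bistatic) cross-track interferometer can be sketched as follows; the wavelength, slant range, incidence angle, and baseline below are assumed example values, not official mission parameters.

```python
import math

# Illustrative height-of-ambiguity estimate for a single-pass (bistatic)
# cross-track SAR interferometer. All numbers are assumed example values.
wavelength = 0.031               # m, X-band (~9.65 GHz)
slant_range = 600e3              # m, assumed slant range
incidence = math.radians(35.0)   # assumed incidence angle
baseline_perp = 300.0            # m, within the quoted 200-600 m range

# For a bistatic (single-pass) acquisition: h_amb = lambda * R * sin(theta) / B_perp
h_amb = wavelength * slant_range * math.sin(incidence) / baseline_perp
print(f"Height of ambiguity: {h_amb:.1f} m per interferometric phase cycle")
```

Larger perpendicular baselines shrink the height of ambiguity and hence improve vertical sensitivity, at the cost of stricter coherence and unwrapping requirements.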
In this session, the status of the mission, the future acquisition plans, and the new mission products are presented. In addition, an outlook on the way forward with the mission objectives will be given to scientists, so that they can prepare for future data acquisitions and data requests.
Show/Hide Description
CCS.88: The Guts of the NASA-ISRO SAR Mission
S/M.1: Mission, Sensors and Calibration — Spaceborne SAR Missions
The NASA-ISRO SAR (NISAR) Mission is a major international science mission partnership between the US and India. NISAR has broad science objectives in understanding global change related to climate, anthropogenic land use, and solid Earth deformation processes. NISAR will provide an unprecedented view of Earth to the geoscience and remote sensing community, delivering dense time series of L-band SAR data over all land and ice-covered surfaces of Earth every 12 days from ascending and descending orbits (6-day or better average sampling). Simultaneous S-band data will be acquired over India and its surroundings, over Antarctica, and over other regions around the world, using the same spatial and temporal sampling strategy. The observations will be ideal for interferometric time series analysis. Most of the acquisitions will be made in a dual-pol mode (single-pol over most ice and quad-pol over a few regions of interest). The NISAR radar instruments have a first-of-its-kind phased-array-fed reflector system with digital beamforming on transmit and receive. The L- and S-band radars work together, are synchronized for simultaneous acquisition, and both can acquire dual-pol data at full resolution over a swath of 240 km. This is achieved using a scan-on-receive technique that allows multiple pulses to be in the transmit beam simultaneously. This session will focus on the instrument characteristics, the acquisition modes and constraints, the image calibration and validation plans, algorithms for producing SAR, PolSAR, and InSAR products, the science data system that can process 44 terabits of data each day, and the plans for data distribution. This session will be designed to prepare the community for the upcoming launch, planned for January 2024, so the session is very timely. It will be paired with the proposed first session on the mission characteristics, data products, and science cal/val plans.
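To put the quoted throughput into everyday units, a back-of-the-envelope conversion of the 44 terabits per day figure might look like the following; the per-product size below is an assumed illustrative value, not a mission specification.

```python
# Rough conversion of the quoted science data system throughput.
daily_terabits = 44.0                      # Tb/day, figure quoted above
daily_terabytes = daily_terabits / 8.0     # 1 byte = 8 bits
print(f"{daily_terabits} Tb/day ~= {daily_terabytes:.1f} TB/day of input data")

# Assumed illustrative product size, purely to convey the processing scale:
assumed_product_size_gb = 5.0              # GB per product (assumption, not a spec)
products_per_day = daily_terabytes * 1000.0 / assumed_product_size_gb
print(f"At an assumed {assumed_product_size_gb} GB per product, "
      f"that is roughly {products_per_day:.0f} products per day")
```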
Show/Hide Description
CCS.89: The NASA-ISRO SAR Mission is Coming!
S/M.1: Mission, Sensors and Calibration — Spaceborne SAR Missions
The NASA-ISRO SAR (NISAR) Mission is a major international science mission partnership between the US and India. NISAR has broad science objectives in understanding global change related to climate, anthropogenic land use, and solid Earth deformation processes. NISAR will provide an unprecedented view of Earth to the geoscience and remote sensing community, delivering dense time series of L-band SAR data over all land and ice-covered surfaces of Earth every 12 days from ascending and descending orbits (6-day or better average sampling). Simultaneous S-band data will be acquired over India and its surroundings, over Antarctica, and over other regions around the world, using the same spatial and temporal sampling strategy. The observations will be ideal for interferometric time series analysis. Most of the acquisitions will be made in a dual-pol mode (single-pol over most ice and quad-pol over a few regions of interest). Given this observational approach, NISAR is also well suited for reliably acquiring data for urgent response needs, with relatively low latency from acquisition, as well as for many monitoring applications, such as infrastructure, agriculture, land level change, flood mapping, and forest mapping. The NISAR radar instrument will look toward the south pole, complementary to most other SAR missions, which primarily look to the north. This session will focus on the mission characteristics, the observation plan, data products, science and cal/val activities leading up to launch and during commissioning and science operations, and community engagement. This session will be designed to prepare the community for the upcoming launch, planned for January 2024, so the session is very timely. It will be paired with the proposed second session on the instrument characteristics and processing scenarios needed to create the required data products.
Show/Hide Description
CCS.90: Science in small packages: CubeSat and SmallSat impact on future of remote sensing
S/M.2: Mission, Sensors and Calibration — Spaceborne Passive Microwave Missions
After five years of successful CubeSat sessions at IGARSS, this year we plan to organize sessions with more diverse participation. With the advent of CubeSat and SmallSat deployments by both government and commercial entities, there is a need to assess their impact on scientific research. Since 2012, the NASA Earth Science Technology Office has been running a research program targeted toward technology validation in space, In-Space Validation of Earth Science Technologies (InVEST). This program encourages flying new technologies and new methods on CubeSat platforms. Recently, ESA selected four CubeSat proposals under the Scout element of its FutureEO programme. This shows an increased interest in CubeSat-based missions. Recently, we have been able to gather science-grade data from some of our CubeSats. 

Since we have graduated from the amateur experiments of building CubeSats, it is time to look into possible science applications of these platforms. We plan to focus on NASA/InVEST, ESA/Scout, and similar programs in other organizations. This will give an opportunity to showcase the latest developments in remote sensing through smaller platforms. Principal Investigators will present their latest results. This will give the broader community an opportunity to get a quick overview of the latest technology in remote sensing through CubeSats. 

This session will be a high-level forum bringing together scientists from all over the world involved in the research, design, and development of CubeSat-based instruments for remote sensing applications. 
Show/Hide Description
CCS.91: WSF-M Mission Status and Calibration (Part I)
S/M.2: Mission, Sensors and Calibration — Spaceborne Passive Microwave Missions
The Weather System Follow-on Microwave (WSF-M) satellite is the next generation of the Department of Defense (DoD) operational environmental satellite system. WSF-M will fly in a sun-synchronous low Earth orbit (LEO) with two payloads, a polarimetric Microwave Imager (MWI) and an Energetic Charged Particle (ECP) sensor. The MWI has a total of 17 channels at frequencies of 10, 18, 23, 36, and 89 GHz, of which the 10, 18, and 36 GHz channels are fully polarimetric. WSF-M is currently scheduled for launch in late 2023/early 2024. This is Part I of the two-part session dedicated to the WSF-M mission. This session will present the mission status and the Calibration/Validation (Cal/Val) plan, with a focus on the polarimetric MWI calibration. For details on the MWI data processing software and algorithms, please refer to Part II. 
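As background on what "fully polarimetric" channels provide, the third and fourth modified Stokes parameters are commonly formed from combinations of polarized brightness temperatures. The sketch below uses assumed example values and is not MWI data or MWI-specific processing.

```python
# Minimal illustration of the modified Stokes parameters used in passive
# polarimetric microwave radiometry (example values only, not MWI data).
T_v, T_h = 210.0, 160.0          # vertical / horizontal brightness temperatures (K)
T_p45, T_m45 = 186.0, 183.0      # +45 deg / -45 deg linear polarizations (K)
T_lc, T_rc = 185.0, 184.0        # left / right circular polarizations (K)

T3 = T_p45 - T_m45               # third Stokes parameter (K)
T4 = T_lc - T_rc                 # fourth Stokes parameter (K)
print(f"T3 = {T3:.1f} K, T4 = {T4:.1f} K")
# T3 and T4 carry the azimuthal signal exploited, for example, in ocean
# surface wind direction retrieval from fully polarimetric channels.
```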

The WSF-M predecessor, the Defense Meteorological Satellite Program (DMSP), has been providing the geoscience and remote sensing community with microwave imager data for more than 30 years. The data from the DMSP passive microwave radiometers have been used for both weather forecasting and climate studies. WSF-M will continue to produce high-resolution microwave imagery and generate essential weather data products such as ocean surface wind speed and direction, snow depth, sea ice characterization, etc. We have received many inquiries about the WSF-M mission and MWI data. This session will provide information to help the geoscience and remote sensing community understand and use the WSF-M MWI data, which will in turn yield better weather products for end users.
Show/Hide Description
CCS.92: WSF-M Mission Status and Calibration (Part II)
S/M.2: Mission, Sensors and Calibration — Spaceborne Passive Microwave Missions
The Weather System Follow-on Microwave (WSF-M) satellite is the next generation of the Department of Defense (DoD) operational environmental satellite system. WSF-M will fly in a sun-synchronous low Earth orbit (LEO) with two payloads, a polarimetric Microwave Imager (MWI) and an Energetic Charged Particle (ECP) sensor. The MWI has a total of 17 channels at frequencies of 10, 18, 23, 36, and 89 GHz, of which the 10, 18, and 36 GHz channels are fully polarimetric. WSF-M is currently scheduled for launch in late 2023/early 2024. This is Part II of the two-part session dedicated to the WSF-M mission. This session will provide details of the MWI data processing software, geophysical retrieval algorithms, and other topics related to MWI on-orbit performance. For the mission status, the Calibration/Validation (Cal/Val) plan, and discussion of the polarimetric MWI calibration, please refer to Part I.

The WSF-M predecessor, the Defense Meteorological Satellite Program (DMSP), has been providing the geoscience and remote sensing community with microwave imager data for more than 30 years. The data from the DMSP passive microwave radiometers have been used for both weather forecasting and climate studies. WSF-M will continue to produce high-resolution microwave imagery and generate essential weather data products such as ocean surface wind speed and direction, snow depth, sea ice characterization, etc. We have received many inquiries about the WSF-M mission and MWI data. This session will provide information to help the geoscience and remote sensing community understand and use the WSF-M MWI data, which will in turn yield better weather products for end users.
Show/Hide Description
CCS.93: GNSS-R Spaceborne Missions, Present and Future
S/M.3: Mission, Sensors and Calibration — Spaceborne GNSS-R Missions
Global Navigation Satellite System Reflectometry (GNSS-R) is a form of bistatic radar remote sensing that uses GNSS satellites such as GPS, Galileo and Beidou as the transmitters and measures quasi-specular forward scattered signals with a custom GNSS-R receiver. In recent years, there has been significant growth in research activities related to instrument design, signal processing methods, and geophysical retrieval algorithm development. There has also been a marked increase in engagement by science and applications communities to use GNSS-R data products. IGARSS has both responded to and often proactively driven this growth by supporting technical sessions focused on various GNSS-R methods and applications. This proposal requests a Community Contributed Session focused on current and pending spaceborne GNSS-R missions. It is intended to complement the more technology- and algorithm-focused GNSS-R IGARSS sessions and should be of interest to GNSS-R remote sensing specialists as well as members of user communities who may have an interest in working with GNSS-R data products. 
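As a toy illustration of the quasi-specular forward-scattering geometry mentioned above (an editorial sketch, not part of any mission's processing), the extra path of the reflected signal over an idealized flat surface can be approximated as follows; the receiver altitude and elevation angle are assumed values.

```python
import math

C = 299792458.0                  # speed of light, m/s

# Toy flat-Earth approximation of the GNSS-R specular reflection geometry.
# With the GNSS transmitter effectively at infinity, the reflected ray travels
# an extra path of about 2 * h_r * sin(elevation) relative to the direct ray.
h_receiver = 520e3               # m, assumed LEO receiver altitude
elevation = math.radians(60.0)   # assumed elevation angle of the GNSS satellite

extra_path = 2.0 * h_receiver * math.sin(elevation)      # m
extra_delay_ms = extra_path / C * 1e3                     # ms
gps_ca_chips = extra_path / C * 1.023e6                   # delay in GPS C/A code chips

print(f"Extra path: {extra_path / 1e3:.1f} km "
      f"({extra_delay_ms:.2f} ms, ~{gps_ca_chips:.0f} C/A chips)")
```

This delay offset between direct and reflected signals is what a GNSS-R receiver resolves when it forms delay (and delay-Doppler) waveforms around the specular point; Earth curvature, neglected here, matters at spaceborne altitudes.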
Show/Hide Description
CCS.94: Calibration and Validation for new spaceborne imaging spectroscopy missions
S/M.4: Mission, Sensors and Calibration — Spaceborne Hyperspectral Missions
One of the fundamental underpinning activities for Earth Observation (EO) is the calibration and validation of the EO Sensors. Calibration and validation determine the quality and integrity of the data provided by the EO sensors and have enormous downstream impacts on the accuracy and reliability of EO products generated from the sensor. Because of its importance, a theme on this subject was initiated as part of the IEEE GRSS GSIS (Geoscience Spaceborne Imaging Spectroscopy) Technical Committee. 
Calibration is the process of quantitatively defining a system’s responses to known, controlled signal inputs and validation is the process of assessing, by independent means, the quality of the data products derived from those system outputs. As is the case with other EO sensors, the calibration and validation of spaceborne imaging spectroscopy sensors provide fundamental underpinning activities. They play a crucial role in assuring those whose work relies on data provided by spaceborne imaging spectroscopy sensors that the measurements are sound and trustworthy.
Vicarious calibration refers to techniques that make use of natural or artificial sites on the surface of the Earth for post-launch adjustment of spaceborne imaging spectroscopy sensors. The sites used for vicarious calibration are surveyed in near-coincident fashion by the device to be calibrated and by one or more well-tuned and traceable sensors that collect equivalent measurements. These reference sensors are usually ground instruments but can also include spaceborne and airborne equipment. A number of vicarious calibration sites have been established in the last few decades, and some have been used on a long-term basis, with permanent instrumentation established as part of the site. The Radiometric Calibration Network (RadCalNet) is an initiative of the Working Group on Calibration and Validation of the Committee on Earth Observation Satellites and provides satellite operators with SI-traceable Top-of-Atmosphere (TOA) spectrally-resolved reflectances to aid in the post-launch radiometric calibration and validation of optical imaging sensor data. The free and open access service provides a continuously updated archive of TOA reflectances derived over a network of sites, with associated uncertainties.  Several contributors will show the evaluation of radiometric performance using RadCalNet data.
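As a simple illustration of how such a comparison is typically expressed (a sketch with made-up numbers, not RadCalNet data), a band-by-band radiometric bias of a sensor under test against reference TOA reflectances might be computed as follows.

```python
# Hypothetical band-averaged TOA reflectances for a sensor under test and a
# RadCalNet-like reference (made-up numbers, for illustration only).
bands_nm   = [490, 560, 665, 865]
sensor_toa = [0.182, 0.205, 0.231, 0.298]
ref_toa    = [0.178, 0.201, 0.235, 0.305]

for wl, s, r in zip(bands_nm, sensor_toa, ref_toa):
    bias_pct = 100.0 * (s - r) / r
    print(f"{wl} nm: sensor={s:.3f}, reference={r:.3f}, bias={bias_pct:+.1f}%")
```

In practice such biases are reported together with the propagated uncertainties of both the sensor and the reference measurements.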
The scope of this session is to provide the imaging spectroscopy community with an overview of calibration and validation activities for current and future spaceborne imaging spectroscopy missions for terrestrial and aquatic applications. Besides discussions on new cal/val concepts and strategies, the session will also provide a forum for potential cross-calibration activities and data standard definitions where possible data products and metadata standards for all data levels may be discussed. 
Show/Hide Description
CCS.95: PRISMA Hyperspectral Data Exploitation
S/M.4: Mission, Sensors and Calibration — Spaceborne Hyperspectral Missions
The aim of this session is to show the main goals achieved by the PRISMA user communities in many scientific and application domains, such as calibration, validation, simulation, and exploitation of data acquired by the Italian hyperspectral mission PRISMA (launched in 2019). 
In general, one of the most important objectives of the Italian Space Agency (ASI) in the Earth Observation field is to foster the use of Italian space assets, in both space and non-space environments, through the development of products, services, or applications that may have a positive impact on the Italian and European institutional, scientific, and commercial communities. Thus, in terms of PRISMA data exploitation, the aim of ASI is to enable the development of new competences in hyperspectral image processing and, consequently, to promote the development of novel products and services that address traditional market needs.
In this framework, several initiatives have been carried out to support the development of algorithms and methods, as well as products and services based on PRISMA data, from funding R&D projects to allowing the commercial use of PRISMA data. 
The interest in PRISMA data, from both the scientific and commercial communities, is constantly increasing, in particular outside Italy. Indeed, PRISMA licenses are subscribed mainly by scientific, institutional, and commercial users from more than 15 countries around the world, with the USA, India, and Germany accounting for more than 25% of the total users. PRISMA data are used to cover very different topics, from precision agriculture to forestry, from air quality monitoring to water ecosystem analysis, and from hydrocarbon detection to cultural heritage preservation.
The know-how acquired through these initiatives will not be restricted to the use of PRISMA data alone, but will be extended and exploited for future hyperspectral missions, such as EnMAP, PRISMA-2, FLEX, SHALOM, and the Copernicus/Sentinel hyperspectral missions. 
This session is intended, on the one hand, to show the state of the art and future challenges of the PRISMA mission and, on the other hand, to allow the users of PRISMA data to present their proposed solutions, their involvement with the commercial, institutional, and scientific communities, and their results.

Show/Hide Description
CCS.96: Scientific Applications for New Space Hyperspectral Missions
S/M.4: Mission, Sensors and Calibration — Spaceborne Hyperspectral Missions
Recent years have seen a rapid increase in the number of space-based hyperspectral missions in orbit and preparing to launch. These missions are opening the door for widespread, frequent collections of global hyperspectral data never before possible. Spanning the public and commercial domains, these missions will extend technical and scientific capabilities demonstrated through airborne programs and early space-based HSI sensors. While a large research community has developed a diversity of algorithms for terrestrial, aquatic, and atmospheric applications, these missions represent the opportunity to truly scale and operationalize Earth observation imaging spectroscopy. In this session, we will hear from researchers working with space-based hyperspectral imagery across the public and private sectors, with a focus on early results from newly launched missions, such as PRISMA, EnMAP, and EMIT, as well as preparatory science for upcoming public and commercial missions. We expect the presentations to cover a range of applications, including greenhouse gas detection, biodiversity, water quality, agriculture, and more. Special emphasis will be placed on how we can accelerate scientific validation for yet-to-be-launched missions using pre-launch data (e.g., airborne, simulated, etc.).
Show/Hide Description
CCS.97: State-of-art technology and application of new spaceborne imaging spectroscopy missions
S/M.4: Mission, Sensors and Calibration — Spaceborne Hyperspectral Missions
New missions currently in operation include the US/German DESIS mission launched in 2018 and the Japanese HISUI mission launched in 2019, both on the ISS; the Chinese Gaofen-5 mission and the Italian PRISMA mission launched in 2019; and the German EnMAP mission and a US mission launched in 2022, delivering imaging spectroscopy data from their own orbiting platforms. One of the most successful and still operating imaging spectroscopy missions is the ESA mission CHRIS on the PROBA platform.  
Driven by a strong demand from the applied sciences and service developers, further fully operational missions are being planned with the goal of delivering imaging spectroscopy data on a regular, operational, and global basis. These data should be available to the scientific as well as the commercial community. We anticipate global coverage at regular intervals from NASA's Surface Biology and Geology (SBG) investigation and ESA's Copernicus Hyperspectral Imaging Mission (CHIME). The goal of CHIME is to provide routine hyperspectral observations through the Copernicus Programme in support of EU and related policies for the management of natural resources, assets, and benefits.
The past and currently operating missions are providing valuable insights and experience for data acquisition planning, processing, and provision, as well as for the development of the new technology necessary to explore the full potential of spaceborne imaging spectroscopy data. Moreover, a lot of effort is being put into calibration and validation in order to deliver high-quality data with the potential for multi-sensor analyses. 
This session is organized as part of the IEEE GRSS Geoscience Spaceborne Imaging Spectroscopy (GSIS) TC. The scope of this session is to provide the remote sensing community with an overview of the status of current spaceborne imaging spectroscopy missions, covering state-of-the-art technology and terrestrial and aquatic applications (GHG mapping, minerals so far unseen from space, water quality, ...). The focus will be on the recently launched missions mentioned above, related activities, and perspectives for future imaging spectroscopy missions. 
Show/Hide Description
CCS.98: ALOS Series Mission, Cal/Val, and Applications (Parts 1 and 2)
S/M.7: Mission, Sensors and Calibration — New Space Missions
The Advanced Land Observing Satellite (ALOS) series of the Japan Aerospace Exploration Agency (JAXA) has been operated continuously since 2006, with observations currently provided by ALOS-2. The ALOS series comprises high-resolution optical missions and L-band Synthetic Aperture Radar (SAR) missions based on the Phased Array type L-band SAR (PALSAR). The L-band SAR mission has been taken over by ALOS-2 and will be followed by ALOS-4, which is planned for launch soon. The optical mission is continued by ALOS-3, which is scheduled for launch this fiscal year. ALOS-3 carries a wide-swath, high-resolution optical imager with a panchromatic band at 0.8 m ground sampling distance (GSD) and six spectral bands at 3.2 m GSD over a 70 km observation swath at nadir. The primary mission objectives of the ALOS series are to contribute to disaster monitoring and prevention, national land and infrastructure information updates, and global forest and environmental monitoring, which are major research and application themes in the geoscience and remote sensing fields. The follow-on missions to the ALOS-3 optical and ALOS-4 SAR missions are also being discussed by the Japanese government. 
In this Community Contributed Session, the focus of future ALOS series missions, Cal/Val, science, and applications will be discussed. Summaries of the achievements of ALOS and ALOS-2 will be presented, together with the activities planned for ALOS-3 and ALOS-4. The perspectives of future ALOS series missions will be discussed based on these results, which may cover the importance of mission continuity, international collaboration, and the advantages and disadvantages that should be reflected in future missions.
Show/Hide Description
CCS.99: Commercial Space with a Public Benefit: The Evolving Role of New Space Missions
S/M.7: Mission, Sensors and Calibration — New Space Missions
According to the National Academies of Science, Engineering, and Medicine, the growing “New Space ecosystem” is poised to transform the diversity and breadth of Earth observation missions. New Space missions represent a novel opportunity for public-private partnerships and technology transfers which support agile aerospace development. When paired with a public benefit, these new space missions have potential impact well beyond the traditional users of remote sensing data. Broadening the user population unlocks more informed policy decision making and the involvement of communities in sustainable development. This session will highlight ongoing partnerships in New Space missions with emphasis on how these accelerate and expand opportunities for collecting vital EO data, supporting scientific applications across academia, industry, and government, and delivering the benefits of the data set to a more diverse and distributed set of users.
Show/Hide Description
CCS.100: Future space missions on the thermal infrared radiometry of the Earth at high spatio-temporal resolution
S/M.7: Mission, Sensors and Calibration — New Space Missions
Energy transfer and the exchange of water and carbon fluxes in the soil–vegetation–atmosphere system need to be well described to enhance the role of environmental biophysics. Climate indicators notably include the frequency of water stress, the enhancement of urban heat islands, more floods and fires, the expansion of drought periods, sea level rise, ocean heat, the retreat of glaciers, and shrinking Arctic and Antarctic sea ice extent and ice sheets. These phenomena represent fundamental issues for the future environment of our planet. Most of them can be well depicted provided that realistic information is obtained, which can be derived from the Essential Climate Variables (ECVs) defined by the Global Climate Observing System (GCOS), among which are land surface temperature (LST), land surface emissivity (LSE), and evapotranspiration.
LST is defined as the radiative skin temperature and is broadly used in agriculture (plant growth, water stress, crop yield, precision farming, early warning, freezing scenarios), in hydrology (water cycle, catchment water, etc.), and in meteorology. LSE partitions the surface attributes (vegetation, soil, snow, water, rock, man-made material) that shape the landscape. LST is a proxy for the surface energy budget, urban heat islands, and mixing processes over the coastline and shoreline. A new generation of spaceborne platforms will measure Thermal InfraRed (TIR) radiometry at high spatial resolution, typically between 30 m and 100 m, with frequent revisit, that is, several times per week, including near-noon and night-time overpasses. The precursor is ECOSTRESS (ECOsystem Spaceborne Thermal Radiometer Experiment on Space Station), flying today aboard the International Space Station (ISS), followed by TRISHNA (Thermal infraRed Imaging Satellite for High-resolution Natural Assessment), SBG (Surface Biology and Geology), and LSTM (Land Surface Temperature Monitoring).
A design driver of these foreseen space missions is water use in agriculture, which represents about 70% of global freshwater withdrawals, making sustainable irrigation a key issue. Automatic detection and mapping of irrigated farmland areas is vital for many services in charge of water management. In that respect, the refined TIR signal will bring new insights, in addition to visible and near-infrared observations, on irrigated areas that display the lowest LST values at the peak of growth. Indeed, global change imposes the implementation of more efficient irrigation practices at the scale of an agricultural plot for better control. The decrease of moisture within the soil after water supply can be evaluated from the surface moisture estimated by radar, but TIR observations remain better suited to monitoring vegetation water stress and irrigation at the agricultural plot scale and to adapting supply to the needs of individual crops. This is why it is important to develop new satellite observation systems in the TIR range that reconcile spatial resolution and high revisit capabilities. This special session will review the characteristics of all future TIR space missions and how they can be developed in collaboration regarding the envisaged products and the Cal/Val strategies.
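To make the link between measured TIR radiance, emissivity, and LST concrete, a minimal single-channel sketch is given below. It neglects atmospheric correction entirely and uses assumed example values (channel wavelength, emissivity, and surface temperature); it is an editorial illustration, not the retrieval scheme of any of the missions named above.

```python
import math

# Planck radiance and its inversion for a single TIR channel.
H = 6.62607015e-34   # Planck constant, J s
C = 2.99792458e8     # speed of light, m/s
KB = 1.380649e-23    # Boltzmann constant, J/K

def planck_radiance(wavelength_m, temp_k):
    """Spectral radiance, W m^-2 sr^-1 m^-1."""
    a = 2.0 * H * C**2 / wavelength_m**5
    return a / (math.exp(H * C / (wavelength_m * KB * temp_k)) - 1.0)

def inverse_planck(wavelength_m, radiance):
    """Brightness temperature (K) from spectral radiance."""
    a = 2.0 * H * C**2 / wavelength_m**5
    return H * C / (wavelength_m * KB * math.log(a / radiance + 1.0))

# Assumed example: a 10.8 um channel observes a surface at 305 K with
# emissivity 0.97; atmospheric effects are neglected in this toy case.
wl = 10.8e-6
lst_true, emissivity = 305.0, 0.97
measured = emissivity * planck_radiance(wl, lst_true)

# Emissivity-corrected retrieval: divide out emissivity, then invert Planck.
lst_retrieved = inverse_planck(wl, measured / emissivity)
print(f"Retrieved LST: {lst_retrieved:.2f} K (true {lst_true} K)")
```

The sketch shows why LSE knowledge is inseparable from LST retrieval: an emissivity error propagates directly into the retrieved temperature.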
Show/Hide Description
CCS.101: NewSpace SAR Instruments
S/M.7: Mission, Sensors and Calibration — New Space Missions
The community contributed session “NewSpace SAR Instruments” is organized by the GRSS-IFT technical committee with the intention to provide the audience with an overview of the recent activities in the field of NewSpace synthetic aperture radar (SAR) with particular attention to the instrument aspects. 
The term NewSpace refers to the pursuit of common, nongovernmental market goals bounded primarily by market forces, executing activities in an entrepreneurial way. Numerous commercial actors have entered the NewSpace business, raising funds to launch constellations of small satellites (smallsats) for connectivity and remote sensing, including SAR. 
NewSpace SAR instruments have been launched by a number of private ventures over the last few years. The proposed solutions show a high degree of innovation in technology, processes, and mission operations. The instruments are much cheaper than in conventional SAR missions and still allow for good imaging performance. 
The first NewSpace SAR system was launched by the Finnish company ICEYE in 2018. After ICEYE, further companies, among which SSTL (UK), iQPS (Japan), Capella Space (USA), Synspective (Japan), and Spacety (China), also launched NewSpace SAR satellites. The total funds raised for NewSpace SAR satellites are already in the order of a billion U.S. dollars and destined to further increase with a significant impact on the Earth Observation data market.
The aforementioned cost reduction has been achieved through three main drivers: 
- An innovation in the instrument design;
- The use of Commercial Off-the-Shelf (COTS) components;
- A new instrument development and integration approach.
NewSpace SAR missions mostly aim at high or very high, sub-meter resolutions over narrow swaths or specific areas, therefore exploiting the stripmap and/or spotlight SAR acquisition modes. While the very limited orbital duty cycle makes these missions unsuitable for continuous global coverage in the manner of conventional SAR satellites (e.g., Sentinel-1), the availability of constellations offers remarkable opportunities for rapid access and persistent monitoring over selected areas.
Current, planned, and future NewSpace SAR systems represent a huge opportunity for Earth Observation. They should not be seen only as a replacement for traditional SAR systems, but also and especially as a complement that can enormously enhance the capabilities of SAR remote sensing.
The community contributed session “NewSpace SAR Instruments” is therefore of great importance to Geoscience and Remote Sensing and its IEEE Society, as it represents an opportunity for exchange and discussion on how concepts, technologies, and synergies can favor a persistent acquisition of high-quality data, which are indispensable to monitor our planet and combat climate change.

Organizers:
Michelangelo Villano, German Aerospace Center (DLR), Germany
José Márquez-Martínez, Radarmetrics, S.L., Spain
Delwyn Moller, University of Auckland, New Zealand
Marwan Younis, German Aerospace Center (DLR), Germany
Show/Hide Description
CCS.102: Next Generation of LEO/GEO Microwave and Infrared Sounders
S/M.7: Mission, Sensors and Calibration — New Space Missions
The Community Contributed Session, “Next Generation of Low-Earth Orbit (LEO) and Geostationary (GEO) Microwave and Infrared Sounders”, would be the fourth in a series of successful IGARSS invited sessions on the topic of next-generation sounders held since IGARSS 2020. A total of 22 invited talks have been given during these invited sessions. Due to the volume of presentations submitted, two invited sessions were organized at IGARSS 2022. A broad range of topics has been covered, including lessons learned from previous and current sounders; U.S. plans for geostationary sounders; proposed requirements for future LEO and GEO sounders; details of CubeSat sounders such as NASA TROPICS, CHISI, TEMPEST-D, CIRAS, and BOWIE-M; and other proposed sounders including GeoSTAR, HIIS, and a proposed GEO ring of hyperspectral infrared sounders. The introductory talks have discussed the next generation of US LEO hyperspectral sounders and a future NOAA LEO constellation, lessons learned from current sounders, and a new approach to evaluating and optimizing the value of observing systems with applications to the next generation of NOAA satellites [1-4]. This newest session aims at discussing the latest progress and exploring new possibilities behind the development of future microwave and infrared sounders onboard LEO and GEO satellites as well as CubeSats [1-4]. 
In this session, the status of new missions, new concepts and system architecture, the progress and challenges of LEO/GEO microwave and infrared sounders as well as strategic international collaborations will be discussed. Such sounders include the Meteosat Third Generation (MTG) Infrared Sounder and EUMETSAT-Second Generation Infrared Atmospheric Sounding Interferometer – New Generation (IASI-NG) and Microwave Sounder (MWS), the NASA TROPICS Mission as well as the planned GEO-XO and LEO SounderSat.
In the past decades, there has been much effort and progress toward advancing microwave and infrared sounders on LEO and GEO satellites. These instruments have been providing high-quality Earth observations. These observations are pivotal for weather and climate applications and are being assimilated on a daily basis at Numerical Weather Prediction centers all around the world. Observations from LEO and GEO microwave and infrared sounders have shown their capability to support the generation of accurate global temperature and water vapor profiles, and trace gas column abundances. Infrared sounder observations provide high vertical resolution capabilities, while microwave sounder observations enable all-weather capabilities. A key component for further observational improvement is increased temporal and spatial resolution, which can be realized with a combination of infrared and microwave sounders on LEO and GEO satellites [1-4].
[1] https://igarss2020.org/view_session.php?SessionID=1026
[2] https://igarss2021.com/view_session.php?SessionID=1332
[3] https://igarss2022.org/view_session.php?SessionID=1282
[4] https://igarss2022.org/view_session.php?SessionID=1283
Show/Hide Description
CCS.103: Optimization of the Global Next-Gen Space Architecture to Support Earth Science Applications
S/M.7: Mission, Sensors and Calibration — New Space Missions
While in the past the evolution of the Earth-observing satellite constellation (EOSC) was gradual and technology-driven, there are now several driving factors that add complexity to the design of the next-generation EOSC (Boukabara et al. 2021). Besides the rapid advances in sensor technology in the recent past, the following are a few examples of factors that need to be accounted for when formulating the space architecture of the future: (1) the expected revolution of medium to very small satellites aiming at providing similar or slightly degraded performance for a fraction of the cost of traditional platforms, (2) the large array of applications, all vying for ever more and better environmental data, (3) the multiplication of methods and technologies that can measure similar environmental information with varying degrees of accuracy and resolution, and the emergence (4) of the private sector as a viable source of environmental data and (5) of new space-faring nations with ambitious space programs.

This Community Contributed Session aims to serve as a focus for discussion of these issues, including:
• How to best make use of available resources to choose observing systems to invest in?
• How to leverage contributions from multiple partner organizations?
• How to include ground-based systems in the decision-making process for space-based systems, and vice versa?
All this is in the context of optimizing the next-generation space architecture for the good of all consumers of Earth system data and forecasts. 

Boukabara, S.-A., J. Eyre, R. A. Anthes, K. Holmlund, K. St. Germain, and R. N. Hoffman, The Earth-Observing Satellite Constellation, a complex, inter-connected global system with extensive applications: A review from a meteorological perspective, IEEE Geoscience and Remote Sensing Magazine, 9(3), 26-42, 2021, https://doi.org/10.1109/MGRS.2021.3070248.
Show/Hide Description
CCS.104: The age of AI and edge computing in space has come
S/M.7: Mission, Sensors and Calibration — New Space Missions
In recent years, significant improvements in Artificial Intelligence (AI), embedded hardware, edge computing, and cloud computing, together with the greater availability of data, have paved the way for investigating Cloud Computing in Space (3CS). 
By moving computing capabilities and information processing directly onboard spacecraft, 3CS has the potential to realize a more efficient, agile, autonomous, and reconfigurable space infrastructure. Indeed, by combining the potential of cloud and edge computing, 3CS enables the extraction of actionable information, which can be delivered to the end user with reduced latency. Numerous researchers are therefore investigating the benefits of 3CS for a range of Earth Observation (EO) applications, including the early detection of natural disasters, vessels, gas leaks, and others. In addition, this also allows corrupted data, such as cloud-covered or blurred satellite images, or data with limited information content in general, to be discarded, reducing the requirements in terms of downlink bandwidth.
Nevertheless, 3CS is a broad research topic that is still at an early stage; its applications, real benefits, technologies, and algorithms still need to be explored and investigated in detail.
This session will therefore review new and challenging mission concepts in EO and beyond that push the limits of onboard computing by capitalizing on the latest developments in AI and hybrid computing technologies and paradigms. 

Topics include:
-	EO applications and tasks leveraging the 3CS concept
-	Recent developments in Artificial Intelligence, edge computing and cloud computing both for science-driven and application-driven missions
-	Onboard processing for rapid response and environmental awareness
-	Real-time integration of Internet of Things (IoT) and Earth observation information
-	Swarm intelligence
-	Orbiting smart data centres
-	Synergetic use of emerging hybrid computing paradigms (e.g., cloud and neuromorphic)
-	Distributed and federated learning for onboard spacecraft applications
-	Blockchain for onboard spacecraft applications
Show/Hide Description
CCS.105: UAV/mobile-mapping SAR systems and applications
S/M.8: Mission, Sensors and Calibration — UAV and Airborne Platforms
SAR systems on UAVs and other mobile mapping platforms, such as cars, have increasingly gained attention within the geoscience community. Small SAR systems deployed on such platforms offer complementary properties with respect to revisit time, operational flexibility, and observation capabilities compared to spaceborne and conventional airborne SAR systems. On the other hand, compared to stationary terrestrial radar/SAR systems, the increased synthetic aperture size of UAV/mobile mapping SAR systems allows a higher spatial cross-range resolution to be obtained, also for quasi-terrestrial observation geometries. 
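As a quick numerical illustration of the cross-range resolution argument above (assumed example values, not parameters of any specific system), the classical strip-map relations can be evaluated as follows.

```python
# Cross-range resolution comparison (assumed example values):
#   real aperture:       delta ~ lambda * R / D
#   synthetic aperture:  delta ~ lambda * R / (2 * L_sa)
wavelength = 0.031          # m, X-band example
slant_range = 500.0         # m, assumed stand-off distance
real_aperture = 0.3         # m, assumed physical antenna length
synthetic_aperture = 50.0   # m, assumed aperture synthesized along the platform track

delta_real = wavelength * slant_range / real_aperture
delta_sar = wavelength * slant_range / (2.0 * synthetic_aperture)
print(f"Real-aperture cross-range resolution:      {delta_real:6.1f} m")
print(f"Synthetic-aperture cross-range resolution: {delta_sar:6.2f} m")
```

Under these assumed numbers the moving platform improves cross-range resolution by more than two orders of magnitude relative to a stationary antenna of the same size, which is the core advantage of UAV/mobile-mapping SAR over stationary terrestrial radar.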

These complementary properties of UAV/mobile mapping SAR systems open up a large field of potential applications, some of which are addressed within the scope of this session, including high-resolution DInSAR-based measurements of surface displacements, monitoring of vegetation and agricultural crops, change detection, tomography, and bistatic and multi-static system demonstrations.
From a system point of view, these agile SAR platforms require not only new compact SAR system designs, but also compact, innovative, high-performance navigation using smaller INS/GNSS systems, in some cases combined with vision systems, as well as adequate SAR imaging algorithms and DInSAR processing chains adapted to the potentially non-linear sensor trajectories and partial aperture synthesis common to UAV/mobile mapping SAR systems and applications.

This invited session aims at giving an insight into recent state-of-the-art UAV/mobile mapping based SAR systems and applications developed with a focus on geoscience applications. 
After our successful invited sessions around this topic at IGARSS 2021 and IGARSS 2022, we would like to continue this topic, providing insight into the latest technological developments with small SAR systems on UAV/mobile-mapping platforms.
The session covers a number of novel systems and UAV/mobile mapping platforms of different size, type (fixed-wing and VTOL UAVs, cars), and a range of applications such as repeat-pass differential SAR interferometry for displacement measurements, change detection, and tomographic configurations.
 
Compared to the previous two years, we have been able to double the number of invited talks, to now 10 invited/community-contributed talks (with a few additional talks that we had to decline for the session and that will hopefully be submitted to the regular sessions). 

We believe that our session topic "UAV/mobile mapping based SAR systems and applications" is already of very high interest for the geoscience and remote sensing community and that both interest and impact will increase further in the future.

We cover aspects such as:
SAR systems, UAV-borne SAR, mobile mapping SAR systems, UAV, UAS, drones, mobile mapping, SAR imaging, SAR interferometry, SAR tomography, small SAR systems, miniature SAR systems, INS/GNSS navigation, light-weight SAR systems, FMCW radar, wide-bandwith radars

Show/Hide Description
CCS.106: Terrestrial Radar/SAR Systems and Applications
S/M.9: Mission, Sensors and Calibration — Ground based Systems
During the last decade, SAR system technology has reached maturity as a remote sensing tool for Earth observation, especially when considering orbital platforms. SAR technology has contributed to addressing different societal challenges such as climate change, food security, and natural and anthropogenic hazard monitoring. Although orbital systems have had, and will continue to have, a prominent role in the future, terrestrial systems have also demonstrated their usefulness, in particular for local monitoring and for time-critical applications, where orbital systems still present several limitations, notably for applications needing revisit times on the order of minutes, hours, or a few days. 

Beyond the clear scientific interest of terrestrial systems, ground-based radar and SAR systems have also awakened significant interest among private companies due to the emergence of commercial applications, especially in the fields of subsidence monitoring, monitoring of vegetation and agricultural crops, and change detection. These agile radar and SAR platforms require not only new compact SAR system designs, but also high-performance navigation using compact navigation systems, in some cases combined with vision systems, as well as adequate SAR imaging algorithms and DInSAR processing chains adapted to particular imaging geometries.

The aim of this session is to present contributions from both research institutions and private companies, showing the current state of the art of mapping based on terrestrial radar and SAR systems and the applications developed, with a focus on geoscience applications.

This session is complementary and connected to the session entitled “New UAV/mobile-mapping SAR systems and applications”, presented by Othmar Frey, as terrestrial radar and SAR systems share many aspects with UAV SAR systems. 
Show/Hide Description
CCS.107: Recent Advances on Polarimetric GNSS-R
S/I.11: Sensors, Instruments and Calibration — Sensors Using Signals of Opportunity (e.g. GNSS-R)
Polarimetric Global Navigation Satellite System Reflectometry (GNSS-R) is the next step in GNSS-R to enable better land and cryosphere retrievals. The role of polarimetry in forward-scattering measurements is key to disentangling vegetation, soil moisture, and roughness, as well as to sea ice studies and freeze/thaw state investigations. As an example, single-pass soil moisture retrieval from a single instrument and without using much ancillary data has been one key objective of GNSS-R for the last decade. Achieving this goal will allow small satellites with GNSS-R payloads to perform independent land monitoring without the need for additional dynamic measurements. However, several studies have already pointed out that single-pass soil moisture retrieval without in-situ moisture or ancillary data is rather complex and achieves only low accuracy. One way to overcome this problem is by adding polarimetric capabilities to current GNSS-R instruments. This would allow surfaces exhibiting a polarimetric signature, such as ice, bare soil, or vegetated areas, to be studied with more degrees of freedom than current GNSS-R receivers provide.
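As a schematic example of the kind of extra observable that polarimetry adds (a sketch with assumed numbers, not a validated retrieval from this session), one can form a cross-to-co-polarized reflected power ratio from dual-polarization measurements of the specular return.

```python
import math

# Hypothetical peak powers from a dual-polarization GNSS-R receiver
# (arbitrary linear units, assumed values for illustration only).
# For an RHCP-transmitted GNSS signal, the specular reflection is mostly
# LHCP, so LHCP acts as the co-polar channel and RHCP as the cross-polar one.
p_lhcp = 3.2e-17   # co-polar reflected channel
p_rhcp = 6.5e-18   # cross-polar reflected channel

pol_ratio_db = 10.0 * math.log10(p_rhcp / p_lhcp)
print(f"Cross-to-co-polarization ratio: {pol_ratio_db:.1f} dB")
# Variations of this ratio with soil moisture, roughness, or freeze/thaw state
# illustrate the additional degree of freedom discussed in this session.
```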

Current trends for polarimetric GNSS-R are the use of either two circular-polarization antennas (RHCP/LHCP) or two linearly polarized antennas (H/V). Discussions among experts in the field are required to advance further and to provide better guidelines on when one should be used instead of the other. From this, we expect fruitful discussions among peers from different teams and backgrounds. We will discuss the effects of polarimetric GNSS-R from modeling, data simulations for future missions based on current non-polarimetric missions, and actual polarimetric data analysis.

This session will be of high relevance for upcoming GNSS-R missions such as HydroGNSS, and also for future, yet unplanned, missions that require polarimetric capabilities to provide better-quality products from GNSS-R, such as soil moisture or cryosphere characterization, or to provide other products, such as vegetation opacity. Moreover, the use of polarimetric GNSS-R retrievals may allow some geophysical parameters, e.g., soil moisture, to be retrieved independently, i.e., without using ancillary data, hence providing a complementary dataset to currently available missions that will serve as training or validation for future algorithms.

Models combined with measurements from the current GNSS-R missions (e.g., CYGNSS, BuFeng-1, FY-3E, Spire, SMAP reflectometry) can be used to better understand what can be expected from a polarimetric GNSS-R mission. This session will include talks from the different teams around the world that are engaging in polarimetric GNSS-R: model development efforts, currently available measurements from space, and future planned polarimetric GNSS-R missions.
Show/Hide Description
CCS.108: LiDAR point clouds intelligent processing
S/I.12: Sensors, Instruments and Calibration — Lidar Sensors
Intelligent processing of LiDAR point clouds has achieved enormous success in the field of computer vision, showing state-of-the-art results. 3D LiDAR point cloud processing is considered a key technology in real 3D mapping, geographic big data, intelligent cities, autonomous driving, etc. Most existing LiDAR object processing pipelines, e.g., for registration, detection, and segmentation, are based on traditional methods. However, our understanding of object processing has in part been restricted by the difficulty of obtaining high-quality point clouds of large-scale scenes. This session focuses on LiDAR object classification, detection, and segmentation approaches for intelligent point cloud processing. The purpose is to collect the latest developments in the area of point cloud processing using deep learning or other novel approaches, and to increase interdisciplinary interaction and collaboration among photogrammetry, mapping, computer vision, mathematics, and computer graphics. Remote sensing using LiDAR sensors provides a simple and cost-effective method to obtain such measurements for object detection and segmentation. Point clouds acquired by LiDAR sensors have become a well-established data source for characterizing object structure and estimating scene semantic information. 
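As a minimal, self-contained example of the kind of point-cloud operation discussed here, the sketch below performs a naive grid-based ground/non-ground split on a synthetic point cloud. It is an editorial illustration written for this overview, not one of the session's methods; the cell size and height threshold are assumed values.

```python
import numpy as np

def split_ground(points, cell=1.0, height_thresh=0.3):
    """Naive ground/non-ground segmentation of an N x 3 point cloud.

    Points are binned into a horizontal grid of `cell` metres; the lowest
    z value in each cell approximates the local ground, and points more than
    `height_thresh` above it are labelled non-ground.
    """
    ij = np.floor(points[:, :2] / cell).astype(np.int64)
    # Map each (i, j) grid cell to a compact index.
    _, cell_idx = np.unique(ij, axis=0, return_inverse=True)
    ground_z = np.full(cell_idx.max() + 1, np.inf)
    np.minimum.at(ground_z, cell_idx, points[:, 2])   # per-cell minimum z
    height_above_ground = points[:, 2] - ground_z[cell_idx]
    return height_above_ground <= height_thresh

# Tiny synthetic example: flat ground plus a ~2 m high "object".
rng = np.random.default_rng(0)
ground = np.c_[rng.uniform(0, 10, (200, 2)), rng.normal(0.0, 0.02, 200)]
obj = np.c_[rng.uniform(4, 5, (50, 2)), rng.uniform(1.8, 2.2, 50)]
cloud = np.vstack([ground, obj])
is_ground = split_ground(cloud)
print(f"{is_ground.sum()} ground points, {(~is_ground).sum()} non-ground points")
```

Deep-learning approaches of the kind sought by this session replace such hand-crafted rules with learned features, but operate on the same basic point-cloud representation.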

This session invites prospective authors to submit papers that address challenges within the field of intelligent LiDAR point cloud processing.

Topics To Be Covered

The session seeks contributions that may address, but are not limited to, the following topics:
•	Point clouds for 3D matching and registration
•	Point clouds for 3D objects classification
•	Point clouds for 3D objects segmentation
•	LiDAR SLAM
•	Sequential point cloud detection and segmentation
•	Point clouds and image merged objects intelligent processing
•	Machine learning algorithms based on LiDAR
•	Deep learning algorithms based on LiDAR
•	Urban objects detection and segmentation
•	Traffic objects detection and segmentation
•	Forest LiDAR applications
•	Indoor scenes parsing
•	LiDAR-derived metrics for quantifying intelligent processing
•	Novel case studies in object inventory and management
•	Real 3D visualization
Show/Hide Description
CCS.109: Quantum Rydberg Radars
S/I.15: Sensors, Instruments and Calibration — Advanced Future Instrument Concepts
Quantum Rydberg Radars (QRRs) build on the disruptive and proven quantum Rydberg atomic receiver technology. A QRR integrates state-of-the-art, high-sensitivity, low-noise, ultra-broadband quantum down-conversion, with no standard/classical antenna, RF front end, or mixers, and a compact detector, which makes it an improvement over classical radar. Configured in a small-form-factor instrument, the QRR can be dynamically tuned to any band in the radio window, resulting in broad-spectrum, multi-science radar remote sensing applicability. Configured as a signal-of-opportunity receiver, it can focus on dynamics and transients using external satellite signals. A QRR can be configured for multiple radar measurements, including vertical profiling, SAR (synthetic aperture radar), or tomography, covering numerous bands across the radio window. The goal of this session is to bring together multi-disciplinary expertise ranging from radar science to systems, algorithms, optics and quantum optics, and quantum systems. This session is important to Geoscience and Remote Sensing as it explores the upcoming quantum-based radar remote sensing systems of the future for Earth sciences.
Show/Hide Description
CCS.110: Advances in Passive Microwave and Millimeter-wave Technology
S/I.18: Sensors, Instruments and Calibration — Microwave Radiometer Instruments
This session will explore recent advances in the technologies that will shape the future of microwave and millimeter-wave instruments. In addition to discussion of recent component and instrument incubator concepts, we will also hear about novel methods of calibration from around the world. These novel solutions will further the development of stable, sensitive, and high-resolution radiometers and spectrometers operating from microwave frequencies up to 1 THz, which are essential for addressing challenges in state-of-the-art passive microwave remote sensing. While reducing size, weight, and power (SWaP), they will improve the sensitivity, calibration, and resolution of remote-sensing systems.
These advances are important to the future of Geoscience and Remote Sensing and provide the community an opportunity to collaborate and develop new measurement concepts for future Earth Observing, Astrophysics, and Planetary Science missions.
Show/Hide Description
CCS.111: THz Range, Ultra-Low-Noise, Ultra-Wideband Receiver Technologies for Remote Sensing
S/I.18: Sensors, Instruments and Calibration — Microwave Radiometer Instruments
Recent advances in high-frequency, low-noise, ultra-wideband THz-range receiver technology have made these receivers available for passive remote sensing applications. In particular, recent advances in InP HEMT low-noise amplifier technology, combined with novel high-frequency receiver calibration techniques, have substantially improved the accuracy and resolution of remote sensing instruments. The miniaturized size, reduced weight, and low power consumption of these THz-range receivers have also made them suitable for CubeSat platforms.

This session will focus on recent advances in THz-range receiver technology, including low-noise, ultra-wideband design techniques such as InP LNAs, wideband high-frequency RF hybrids and filters, ultra-wideband ASICs, and advanced THz-range receiver calibration topologies. The session is expected to be of high interest to the remote sensing community, since these technological advances are pioneering new remote sensing capabilities and enabling new science studies.
Show/Hide Description
CCS.112: Recent Advances in Synthetic Apertures for Geosciences and Remote Sensing
S/I.20: Sensors, Instruments and Calibration — Onboard Signal Processing
The widespread use of synthetic apertures (SAs) in signal acquisition and reconstruction—prevalent across different geophysical domains, from the depths of oceans to the upper reaches of the earth’s atmosphere—has resulted in unprecedented information gathering capabilities. Furthermore, tremendous advances have been made over the past several decades in computational capabilities that can enable critical inferences about the type and quality of physical structures constituting the scene of interest. The resulting inferences are of potentially vital importance for devising solutions to problems in diverse applications ranging from maritime surveillance for national defense to environmental monitoring for agriculture, water resource management, and global warming.

This special session is devoted to highlighting some of the important recent advances made in the theory and practice of SA in geosciences and remote sensing. The sensing modalities of interest encompass both electromagnetic (EM) and acoustic spectra. The special session is organized by chairs and co-chairs of IEEE SA Standards Committee (SASC), created under the aegis of SA Technical Working Group of the IEEE Signal Processing Society. The SASC is dedicated to harmonizing the best practices for SA processing and interpretation.

An important problem in this topic area is the formation of distributed SAs for both incoherent and coherent information fusion. Distributed sensing structures have several potential advantages over monostatic systems. Specifically, distributed systems leverage a great degree of angular and spectral diversity when probing a scene or environment, which in turn yields rich feature sets for analyzing specific target structures of interest. Furthermore, distributed systems, depending on the specific application, can be more versatile, adaptable, and agile than their monostatic counterparts owing to the inherent redundancy built into the system as a result of many degrees of freedom. Harnessing these potential advantages of distributed systems, however, requires many fundamental system-level and signal processing problems to be resolved, including synchronization and communication links between sensors, and devising computationally efficient processing strategies between various nodes to allow for globally optimum inferences.

Weather radars are important tools for measuring precipitation, both its type (rain, snow, etc.) and its quantitative characteristics (including motion and intensity). Apart from the near-term benefits of monitoring and predicting weather patterns, the resulting measurements serve as valuable data for updating climate models. The open questions in weather radar research therefore stem from both processing and modeling considerations. Unlike the use of radar for surveillance applications, where the targets of interest are localized, the objects of interest in weather radar are distributed in nature but nevertheless have a correlated statistical structure that can be exploited for efficient inference. On the other hand, the need for accurate weather and climate models puts enormous demands on the accuracy of weather radars, including basic issues of calibration.

The above serve as two illustrative examples among a potentially wide array of topics encompassed in this special session, including various aspects of SA sonar, radiometry, ground-penetrating-radar-based sensing and processing, wireless power transfer, and SA-based geolocation. Papers are solicited that make innovative, fundamental, and/or timely contributions within this topic area.
Show/Hide Description
CCS.113: Nano and Picosatellites as Tools for Education in Remote Sensing: The 2nd GRSS Grand Challenge and other Educational Activities based in CubeSats
D/E.3: Education and Policy — Education and Remote Sensing
New Space is said to have democratized access to space, and it is true that many companies are taking advantage of it to do business, both in communications and in Earth Observation.
Two decades ago, CubeSat-based projects started as tools for education in aerospace systems engineering. However, the vast majority of the subsystems fall within the telecommunications and electronics engineering disciplines, so these make outstanding capstone projects in those disciplines. 
If remote sensing payloads are added, we then have an outstanding opportunity to engage students in the field, showing them the full system complexity and training them to work as a team, handle documentation properly, and hand over information as students graduate and new ones come in. 
In this session, we would like to present the CubeSats developed by student teams from different countries and learn from their experience.
- To start with, we would like to invite the teams from the AlainSat-1 project, a 3U CubeSat developed by the NSSTC of the UAE, which provides the platform, launch, and operations, and three student teams from Spain, Indonesia, and Japan, contributing three different payloads.
- Because of geographical proximity, the potential contributors listed below are mostly European, but the session should be truly internationally open.
- Slots permitting, the IEEE GRSS OpenPocketQube Project can also be presented.
Show/Hide Description
CCS.114: Women in SAR: Excellence, diversity and challenges
D/E.3: Education and Policy — Education and Remote Sensing
Women in SAR conduct innovative science in SAR applications development, evolving SAR engineering systems, and advancing SAR technology. There is a large, diverse group of women working on these activities, from new graduate students to senior researchers and engineers, all of whom contribute to the advancement of SAR science and technology. Their unique perspectives and scientific excellence are critical to the ongoing evolution of geoscience and remote sensing. In this session we propose to amplify the outstanding scientific contributions of women SAR scientists while exposing the challenges they face in their everyday lives and careers in order to conduct that science. By amplifying the voices of women and underrepresented communities, promoting them, and showcasing their work, we strive to make sure no one feels invisible, and that we all continue to work toward a representative and inclusive field of SAR remote sensing.

The Sisters of SAR is a voluntary initiative that officially launched on social media in April 2020, with currently over 8500 followers. Created and organized by women in SAR, we promote the exceptional advancements in SAR research and engineering around the world while showcasing the accomplishments of other women in our field. In a field historically dominated by men, we help make sure that the voices and work of women are heard.
Show/Hide Description
CCS.115: Open Innovation for Earth Observation Science and Applications
D/S.4: Societal Engagement and Impacts — Citizen and Open Science
This session, co-chaired by NASA and ESA, gives a space agency perspective on Open Innovation in Earth Observation and Earth Science, highlighting recent achievements and promoting international cooperation. 
Based on core principles that include accessibility, reproducibility, inclusiveness, and transparency, Open Science enabled by technology creates the premises for better and faster scientific research results as well as higher trust in scientific outputs and methodologies. 
The gold standard in Open Science assumes openness is achieved from data collection throughout the entire scientific workflow and practice, to the publication of open-access findings accompanied by open data and open-source code. Currently, in Geoscience and Remote Sensing, Open Science is found at varying degrees of maturity and applicability. 
Addressing global challenges and answering complex research questions about the Earth System rely on scientific communities working across disciplinary and institutional boundaries, supported by effective access to inter-agency Earth Observation Science data, knowledge, and computing infrastructures. A wider adoption of Open Science practices can bring enormous value by offering interoperable systems working across domains with heterogeneous data that “make the scientific process as transparent (or open) as possible by making all elements of a claimed discovery readily accessible, enabling results to be repeated and validated”. 
In this session we address current advances achieved in Earth Science and Earth Observation (e.g., provision of open EO data respecting FAIR principles, Open Standards and Infrastructure, Open Access, Open Education) as well as the remaining challenges and open questions (e.g., licensing, long-term sustainability). 
With Open Innovation we go beyond Open Science, towards a complete process for innovation that relies on collaboration, and facilitates the transfer of ideas and information between organisations, connecting people and infrastructure, leading to new business models. It can help accelerate the translation of the scientific knowledge and discoveries achieved through Open Science into commercially viable value-added products with impact for society. 
New dedicated Open Innovation initiatives are emerging, enabled by technology advancements (e.g., big geospatial data collection, transmission, analysis, processing and dissemination, geospatial standards, etc.) and supported by policies and new practices adopted by Space Agencies. As it becomes increasingly possible to adopt agile development approaches to manage space-based information and create new value for society,  Earth Science and Earth Observation are now entering a transformative process. 
Thus, in this session we furthermore present how the adoption of this new open and collaborative innovation process, supported by dedicated programmes and initiatives (e.g., NASA Transform to Open Science, ESA Open Innovation Framework) is fostering new technology development projects across space agencies (e.g., the NASA-ESA-JAXA EO Dashboard, the Open Science Persistent Demonstrator initiative) and the advancement of Earth Science research and Open Education.  
Show/Hide Description
CCS.116: Open Science in Action
D/S.4: Societal Engagement and Impacts — Citizen and Open Science
Advances in data, software, and computing are enabling transformational, interdisciplinary science, changing the realm of possible questions. Open science communities can advance science and inclusivity simultaneously. Practically, many scientists who wish to move towards more openness and reproducibility struggle to understand what resources are available to enable open and reproducible science. This session seeks to connect the community of open data, open source software, open science platforms, open access, and open science practitioners, both technology developers and users across science disciplines.

The oral discussion session (90 min) will showcase open science technology developers, including demonstrations of open source software and technology tools and the use of open science data platforms. This format will allow us to have a community discussion of open science practices, highlight useful tools and science, and provide hands-on practical tutorials for scientists.

The second session (30 min) will offer tutorials on popular open-source libraries. This session will build on the oral presentations, allowing attendees to get more in-depth experience with open-source technologies. The goal of this session is to put knowledge and ideas into action, with a focus on first-time users and first-time contributors. However, all levels of coding and scientific experience will be welcome.

The third session is a cross-disciplinary e-Lightning session (90 min) that brings together all the different parts of open science to cross-pollinate ideas between technology developers and scientists. This session will focus on open science results and use cases, exploring the implementation of open science workflows and bringing together scientists across domains to share open science knowledge. Demonstrations of these workflows and a discussion of pain-points will be useful to both scientists and the technology developers. This session will be followed by a 30 minute discussion.
Show/Hide Description
CCS.117: Advanced Flood Monitoring and Prediction for Disaster Risk Reduction and Resilient Infrastructure
D/S.5: Societal Engagement and Impacts — Risk and Disaster Management (Extreme Weather, Earthquakes, Volcanoes, etc)
New strategies and solutions based on high-frequency, high-resolution monitoring and assessment of natural disasters are essential to preserve the environment and build resilient infrastructure. For instance, a comprehensive assessment of past and current natural disasters (e.g., floods) and their associated damages supports the innovative infrastructure planning required to build disaster-resilient communities. 2022 was another year in which numerous devastating water-related disasters hit many regions across the globe. For example, since June 2022, torrential monsoon rains have triggered the most severe flooding in Pakistan’s recent history; the floods washed away villages, and over 2.1 million people were left homeless or living in temporary camps. In this context, advanced remote sensing coupled with numerical prediction modelling appears to be the way forward for: (i) addressing water-related disasters in order to reduce damage and save lives, and (ii) proposing innovative solutions that preserve the environment and support the development of resilient infrastructure.

Flood assessment and resilient infrastructure management through remote sensing data are important and challenging research topics. Numerous research groups focus on these topics, and the applications developed in this area are vast. From 2018 to 2022 we gathered several scientists from these groups in IGARSS invited sessions to facilitate the exchange of knowledge and experience on flood mapping. These sessions also helped strengthen collaborations between the remote sensing and risk management communities interested in remote sensing flood assessment associated with 3-D (i.e., water depth) and 4-D (i.e., spatial-temporal) models. In addition, the intense discussions between the presenters and a large audience were testimony to the high interest within the flood mapping and risk reduction communities in collaborating on the development of solutions enabling the build-up of more disaster-resilient infrastructure at both local and global levels.

The objective of the 2023 IGARSS session is to introduce related research studies focusing on both remote sensing fundamentals and advanced algorithms based on big data and cloud-computing technologies. Emphasis will be on flood disasters and applications in near-real-time monitoring and prediction, as well as long-term risk analyses using advanced satellite Earth Observation (EO) data. EO data coupled with innovative scientific solutions allow precise assessment of the level of disaster damage on different land classes, including coastal flood mapping, urban flood-area and damage detection, analysis of weather impacts on agricultural lands, and delineation of reference/historical flood zones. Moreover, for rapid recovery activities and resilient infrastructure investment, near-real-time flood monitoring using EO data is an imperative process in the early-stage analysis. Rapid flood detection techniques based on multi-sensor EO imagery are one of the main subjects that will be addressed in the IGARSS 2023 session.
Show/Hide Description
CCS.118: Bridging the Gap between Near Real-Time Remote Sensing Data Products and User Needs for Geoscience Applications
D/S.5: Societal Engagement and Impacts — Risk and Disaster Management (Extreme Weather, Earthquakes, Volcanoes, etc)
In applications of remote sensing in Geoscience, near real-time and low-latency data from satellite, airborne, marine, and terrestrial sensors are transforming existing end-user applications and spawning new ones. These applications demonstrate the utility of timely data and advanced analyses in diverse Geoscience and remote sensing disciplines. This session seeks contributions that demonstrate the technologies, tools, data products, and applications that reduce the latency of near real-time workflows, including machine learning; discuss plans for near real-time decision support system capabilities based on increased leveraging of “Big Earth/Space Data”; and identify gaps in current capabilities. To build a bridge between remote sensing data providers and users, we invite actual and potential users of near real-time and low-latency data products to share how they are applied to specific decisions in the event timeline. We also welcome data providers demonstrating how they have helped data users access and analyze NASA’s and/or NOAA’s satellite observational datasets.
In this session, we will focus on near real-time and low-latency data, tools, platforms, and user impacts. Special emphasis will be on what has been done, what can be done, and what can be improved for Geoscience and remote sensing applications in the future. This session will build bridges between data providers and users representing government agencies, universities, NGOs, and the private sector. 
Show/Hide Description
CCS.119: Earth Observation Technologies for Disaster Risk Management
D/S.5: Societal Engagement and Impacts — Risk and Disaster Management (Extreme Weather, Earthquakes, Volcanoes, etc)
Since the early 1990s, constellations of satellite sensors have been regularly imaging our planet, providing the scientific community with accurate measurements of the Earth's environment (atmosphere, ground surface, sea water). In recent years, a significant increase in available optical and radar scenes has been recorded, thanks to the new generation of satellite constellations, which guarantee frequent repeat times and high spatial coverage of the measurements. In particular, many terabytes of data have been collected through the EU Sentinel sensors. These data represent a unique opportunity to develop new methods for understanding complex natural phenomena (such as the dynamics of large earthquakes, volcanic eruptions, landslides, severe flooding episodes, severe storms and hurricanes, and glacier melting) and human-induced processes (deforestation, urban growth and land-use change, air pollution, environmental and industrial impacts) that pose threats to populations already affected by present-day global climate change. The products of these investigations increase our knowledge of the natural and human-induced disasters that affect the population and help local authorities and national governments prevent such disasters by designing, maintaining, and managing effective disaster plans. 
Earth Observation (EO) technologies can play a significant role in the disaster mitigation and preparedness phases by enabling extended investigations of the state of the observed scenes, the search for precursor signals of potential disasters, forecasting of the future evolution of natural phenomena, and identification of critical conditions. In this context, the development and use of advanced algorithms based on Machine Learning (ML) represents the new frontier of EO data processing. The principal international agreements towards the development of societies resilient to disasters are the Sendai Framework for Disaster Risk Reduction 2015-2030, the Sustainable Development Goals (SDGs), and the Paris Agreement on climate change. The planning and maintenance of these actions can benefit from accurate and up-to-date information on the state of the living Earth as derived from remote sensing investigations. Nowadays, and increasingly in the forthcoming years, remotely sensed data products are exploited for the actual implementation of these agreements towards the development of resilient societies, complementing knowledge of the complex, dynamic nature of the socio-cultural-ecological systems in which they are implemented. Studies that foster the development of robust approaches capable of anticipating challenges, promoting novel, context-appropriate responses, and securing expected future development gains are urgently required.
Show/Hide Description
CCS.120: Integrated studies of geohazards and space weather with multi-sensor observations
D/S.5: Societal Engagement and Impacts — Risk and Disaster Management (Extreme Weather, Earthquakes, Volcanoes, etc)
This session expands on the latest results from cross-disciplinary space and ground measurements associated with major geohazards (earthquakes and volcanoes) and space weather. It advances existing interdisciplinary studies of the Earth's electromagnetic (EM) environment associated with lithosphere-atmosphere-ionosphere coupling (LAIC) and other processes. The presentations will feature a multi-instrumental approach to global EM observations from the ground to space. Regarding the latter, satellite technologies provide unprecedented opportunities to study, from space, the geosphere coupling associated with geohazards. Data from LEO satellites provide a global view of near-Earth space variability and are complementary to ground-based observations, which are limited in global coverage. Results will be presented from the latest satellite missions Swarm (ESA, 2013), CSES1 (China/Italy, 2018), and FORMOSAT-7/COSMIC-2 (Taiwan/USA, 2019), which were specifically designed to investigate ionospheric anomalies related to geohazards and space weather. We consider a wide range of observable Earth EM environment activities using ground-based observations and LEO satellites; activities such as thunderstorms, lightning, transient luminous events (TLEs), geomagnetism, and space weather can help fill gaps in scientific knowledge about earthquake processes and significant volcanic eruptions. The session talks will include, but are not limited to, the latest major geohazard events such as the 2022 Hunga Tonga–Hunga Ha’apai eruptions; modeling and analyses; geochemical, electromagnetic, and thermodynamic processes; and case histories relating to stress changes in the lithosphere, geohazards, and space weather.
Show/Hide Description
CCS.121: JPSS and the global Low Earth Orbit (LEO) satellite constellation: evolving towards a synergistic and heterogeneous constellation
D/S.5: Societal Engagement and Impacts — Risk and Disaster Management (Extreme Weather, Earthquakes, Volcanoes, etc)
The JPSS Program has built and maintains strong partnerships with other international satellite programs. These partnerships enable the programs to use polar-orbiting capabilities to provide decision makers with what they need to make life-saving decisions in response to extreme weather and natural disasters. With the expected launch and commissioning of the JPSS-2 satellite, the JPSS mission will have three satellites with identical sensors, providing a resilient and robust constellation that not only delivers critical observations for operational forecasts of the oceans, land, and atmosphere, but also enables the continuity of critical climate data records. The NOAA satellites, when teamed with EUMETSAT’s METOP series, ESA’s Sentinel satellites, and the new NASA missions planned for launch in the next few years, result in an impressive amount of observations and products available to users for operational use. 

The LEO Session will focus on the synergy of measurements from JPSS and other partner missions.  Special emphasis will be on blended products using data from multiple satellites, both geostationary and polar-orbiting.  The post launch calibration and validation of the JPSS-2 sensors will also be presented.
Show/Hide Description
CCS.122: Latest Developments on Earth Observation for Risk and Disaster Management Applications Using Reflectometry
D/S.5: Societal Engagement and Impacts — Risk and Disaster Management (Extreme Weather, Earthquakes, Volcanoes, etc)
In 2015 the United Nations (UN) established Sustainable Development Goal (SDG) 13, which calls for urgent action to combat climate change and its impacts. As highlighted in this SDG, understanding Earth’s climate is key to making political decisions to combat climate change. Constant monitoring of all the essential climate variables (ECVs) defined by the Global Climate Observing System (GCOS) is required to better model our planet’s climate cycles and be better prepared for natural and anthropogenic disasters. These ECVs are, in general, retrieved from satellite measurements collected by different sets of instruments, such as imagers (e.g., multispectral cameras), radars (monostatic or multistatic), and radiometers. 

In recent decades, a specific type of multistatic radar technique has emerged for satellite remote sensing using signals of opportunity (SoOp), e.g., Global Navigation Satellite System Reflectometry (GNSS-R). Reflectometry works by receiving GNSS (or other SoOp) transmitted signals that are reflected from the Earth’s surface. The reflected signal carries information about the scattering surface, allowing the retrieval of different geophysical quantities, such as ocean surface wind speed, soil moisture, biomass, or sea ice properties, among others.
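As a simple, hedged illustration of that physical link (and not any mission's actual retrieval algorithm), the short sketch below computes the nadir Fresnel power reflectivity of a smooth surface for a few assumed soil permittivities; because permittivity rises with soil moisture, so does the reflected power that a GNSS-R receiver measures.

```python
# Minimal sketch (illustrative only): nadir Fresnel power reflectivity of a
# smooth soil surface as a function of its relative permittivity, which
# increases with soil moisture.  This is the basic physical link that lets a
# reflectometry receiver sense soil moisture from reflected signal strength.
import numpy as np

def nadir_reflectivity(eps_r: float) -> float:
    """Power reflectivity |(1 - sqrt(eps)) / (1 + sqrt(eps))|**2 at normal
    incidence for an air/soil interface."""
    n = np.sqrt(eps_r)
    return float(abs((1 - n) / (1 + n)) ** 2)

if __name__ == "__main__":
    # Representative L-band permittivities (assumed values for illustration)
    for label, eps in [("dry soil", 4.0), ("moist soil", 15.0), ("wet soil", 30.0)]:
        print(f"{label:10s} eps_r = {eps:5.1f}  reflectivity = {nadir_reflectivity(eps):.2f}")
```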

Reflectometry contributes significantly to the observation of disasters such as hurricanes, floods, droughts, harmful algal blooms, fires, and contamination of the oceans with microplastics. Thanks to its constellation-based approach, reflectometry using GNSS signals or other SoOp can provide very short revisit times. Short revisit times enable studies of natural and anthropogenic disasters, which often require accurate and very recent information. For example, reflectometry enables better tracking of hurricanes, anticipating and improving management of the affected areas. Reflectometry also enables a better understanding of which soil conditions can trigger floods under different weather circumstances and of how long those surfaces will remain affected after a flood event. Equally, by monitoring soil moisture and characterizing biomass conditions, reflectometry provides a key indicator of drought conditions and fire risk, enabling a better understanding of the needs of different soils and guiding fire risk management and water management policies. Other reflectometry applications targeting marine ecosystem health have recently been proven feasible: both anthropogenic contamination through microplastics on the ocean surface and natural harmful algal blooms can be monitored from spaceborne reflectometry measurements, enabling better management of the affected areas.

In this session, selected topics will cover the main contributions of reflectometry to natural and anthropogenic disaster applications. We look forward to interesting discussions among the researchers pioneering these applications.
Show/Hide Description
CCS.123: Machine learning and remote sensing data for rapid disaster response
D/S.5: Societal Engagement and Impacts — Risk and Disaster Management (Extreme Weather, Earthquakes, Volcanoes, etc)
Every year, millions of people worldwide are impacted by natural and man-made disasters. Floods, heat waves, droughts, wildfires, tropical cyclones, and tornadoes cause increasingly severe damage. Civil wars and regional conflicts in various parts of the world, moreover, lead to a growing number of refugees and large changes in population dynamics. Rescue forces and aid organizations depend on up-to-date, area-wide, and accurate information about hazard extent, exposed assets, and damages in order to respond fast and effectively. In this regard, emergency mapping has been using remote sensing data for decades to support rescue operations with the required situational awareness. Providing this information in a rapid, scalable, and reliable way, however, remains a major challenge for the remote sensing community.

Commonly used emergency mapping protocols involve a large degree of manual assessment by interpreters, who visually compare remote sensing images of a disaster situation. Therefore, obtaining an area-wide mapping is time-consuming and requires a large number of experienced interpreters. Nowadays, the amount of remote sensing data and related suitable sensors is steadily increasing, making it impossible in practice to assess all available data visually. Therefore, an increase in automation for impact assessment methods using multi-modal data opens up new possibilities for an effective and fast response workflow.

In this session, we want to provide a platform for research groups to present their latest research activities aimed at addressing the problem of automatic, rapid, large-scale, and accurate information retrieval from remotely sensed data to support disaster response. More specifically, the focus lies on deep learning-based approaches for the extraction of relevant information about hazard extent (e.g., flood mapping), exposed assets (e.g., road condition assessment), and impacts (e.g., building damage assessment), acquired from satellite, aerial, or drone platforms. The choice of training data for these tasks is restricted to a limited number of open datasets, which have a major impact on the methods’ performance: in this session, we encourage the presentation of new public benchmark datasets and aim to increase awareness of existing ones.
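For readers less familiar with the deep-learning side of this workflow, the following minimal sketch (hypothetical model and random stand-in data, using PyTorch) shows the basic per-pixel segmentation setup that flood-extent or building-damage mapping methods build on; real systems in this area use far deeper architectures trained on the public benchmark datasets mentioned above.

```python
# Minimal sketch (hypothetical data and model): one training step of a tiny
# fully convolutional network for binary per-pixel segmentation, the generic
# setup behind flood-extent or damage mapping from image chips.
import torch
import torch.nn as nn

class TinySegNet(nn.Module):
    """Three convolutional layers mapping a multi-band image to per-pixel logits."""
    def __init__(self, in_bands: int = 2):  # e.g. two SAR backscatter bands (assumed)
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_bands, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, kernel_size=1),  # one logit per pixel
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

if __name__ == "__main__":
    model = TinySegNet()
    optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
    criterion = nn.BCEWithLogitsLoss()

    # Stand-in batch: 4 image chips, 2 bands, 64x64 pixels, with binary masks
    images = torch.randn(4, 2, 64, 64)
    masks = torch.randint(0, 2, (4, 1, 64, 64)).float()

    logits = model(images)
    loss = criterion(logits, masks)
    optimiser.zero_grad()
    loss.backward()
    optimiser.step()
    print(f"training loss: {loss.item():.3f}")
```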
Show/Hide Description
CCS.124: Remote Sensing of Marine Pollution and Harmful Algal Blooms
D/S.5: Societal Engagement and Impacts — Risk and Disaster Management (Extreme Weather, Earthquakes, Volcanoes, etc)
Marine pollution in the form of natural and anthropogenic oil slicks, harmful algal blooms (HAB), and conglomerations of garbage, particularly microplastics, in the ocean and coastal regions is of great concern. Operational ocean surveillance relies heavily on remote sensing data for detection, and ocean circulation models are commonly used to study drift patterns and concentration changes. The rapid increase in remote sensing assets available for operational services like oil spill detection makes it both possible and even more important to separate mineral oil slicks from common look-alikes, of which algal blooms are of particular societal concern. Furthermore, the proliferation of alternative fuels makes identifying and characterizing their signatures in ocean releases important for future marine pollution monitoring. 

Surface oil slicks become a great hazard if they reach coastal or sea-ice-infested areas, and even relatively low concentrations of submerged substances, e.g., from produced water releases, can have a harmful long-term effect on marine organisms. Localization, monitoring, and slick redistribution information from remote sensing techniques are essential for a fast response, effective spill clean-up operations, and long-term tracking of ecosystem impact. Mapping of submerged substances relies additionally on ocean circulation modeling and in-situ measurements. Surveillance in general is still labor-intensive, though the increased availability of free remote sensing images through, e.g., the Sentinel satellites, has opened possibilities for the development of automated and semi-automated techniques for slick detection, characterization, and tracking.  

Algal blooms vary in terms of their adverse impact, causal organisms, and biomass distribution, and the phytoplankton species known as HAB produce toxins that are a threat to human and animal health. HAB occur frequently in many marine and freshwater reservoirs and are monitored with optical remote sensing, often in combination with IR and SAR. Thick surface layers of HAB in coastal regions discourage recreation, prevent sunlight penetration into the sea, and decrease the oxygenation of the water; they are therefore potentially harmful to the entire ecosystem and detrimental to the economy of the affected region, making their monitoring of interest to many environmental agencies and institutions. 

Within this session we welcome contributions covering all aspects of remote sensing observational identification and characterization of different types of marine pollution, their circulation and drift modelling, as well as further developments in ocean surveillance using a range of satellite and airborne sensors, including but not limited to synthetic aperture radar and optical sensors. Submissions with a focus on observation-model synthesis, interdisciplinary studies, innovative sensing methods and algorithms, and remote sensing of biofuels are encouraged. 
Show/Hide Description
CCS.125: Remotely sensed monitoring and nowcasting of environmental disasters
D/S.5: Societal Engagement and Impacts — Risk and Disaster Management (Extreme Weather, Earthquakes, Volcanoes, etc)
Numerous studies on the emerging problems of the last decades have shown that the frequency of environmental disasters and their scale are constantly growing, leading to an increasing risk of great losses to the economy and human lives as well as the collapse of social infrastructure.
This explains why it is important to Geoscience and Remote Sensing to gather the current knowledge and open problems on the monitoring and nowcasting efforts in this field.
The information needed to monitor and nowcast environmental disasters can be obtained using remote sensing, in-situ measurements, and access to knowledge-based historical data. The topics to be discussed in this session are the following: 
•	the type of instruments used or planned to be used to perform ground-truth and remote sensing measurements of environmental disasters;
•	the cost required to obtain remote sensing and in-situ information on environmental disasters;
•	the type of mathematical models that can be used to both interpolate and extrapolate temporal and spatial data to increase the reliability of forecasting and nowcasting environmental disasters (a minimal trend-extrapolation sketch follows below).
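As a minimal, purely illustrative example of the last point, the sketch below fits a polynomial trend to a short synthetic time series and extrapolates it a few steps ahead; operational nowcasting relies on far richer spatio-temporal models, and all values here are assumed.

```python
# Minimal sketch (synthetic data): fit a simple polynomial trend to a short
# time series of an observed quantity and extrapolate it a few steps ahead,
# illustrating the interpolation/extrapolation idea in the last bullet above.
import numpy as np

# Hourly observations of some disaster-related quantity (assumed values)
t_obs = np.arange(0, 12)                       # hours since event onset
y_obs = 1.5 + 0.4 * t_obs + 0.02 * t_obs**2    # synthetic rising trend
y_obs = y_obs + np.random.default_rng(1).normal(0, 0.1, size=t_obs.size)

# Fit a quadratic trend and extrapolate 6 hours ahead
coeffs = np.polyfit(t_obs, y_obs, deg=2)
t_future = np.arange(12, 18)
y_nowcast = np.polyval(coeffs, t_future)

for t, y in zip(t_future, y_nowcast):
    print(f"t = {t:2d} h  nowcast = {y:.2f}")
```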
Show/Hide Description
CCS.126: The Fire Sense Initiative: Bringing NASA’s Capabilities to the Operational and Scientific Communities
D/S.5: Societal Engagement and Impacts — Risk and Disaster Management (Extreme Weather, Earthquakes, Volcanoes, etc)
With large increases in wildfires and the related impacts on life and property, NASA’s Earth Science Division is working with other Agency partners to contribute the best available science and technology to help operational agencies overcome current barriers to more efficient and effective wildland fire management. Under its Fire Sense initiative, NASA’s unique capabilities in fire science and technology are being co-developed with the other agencies that are responsible for wildland fire management. Doing so will help ensure that such capabilities are of utility to our nation and all wildland fire management agencies and are thus ultimately implemented into the wildland fire management paradigm. The capabilities cover broad areas of remote sensing, predictive modeling, and data management. Through innovation, NASA will help society thrive in a rapidly changing wildland fire environment. In this session we will showcase some of the key technology and science projects that have been active during the first year of Fire Sense. By working closely with other agencies as the end users of our products, these projects will provide important tools and insight to fire management agencies, which will ultimately be better prepared to manage future wildfires through improved predictive modelling, risk assessments, prescribed burn management and wildfire suppression, as well as post-fire mitigation efforts.
Show/Hide Description
CCS.127: The Value of Environmental Satellite Data
D/S.5: Societal Engagement and Impacts — Risk and Disaster Management (Extreme Weather, Earthquakes, Volcanoes, etc)
Environmental satellite systems are an expensive and vital part of a government's budget. It is increasingly crucial to quantify the value of the benefits derived from these systems in order to plan the procurement of their next generations. In recent years, several studies have been conducted to monetize the socioeconomic benefits of national or regional environmental satellite systems (e.g., GOES-R, GeoXO, JPSS, and Metop) and to evaluate whether the benefits outweigh the lifecycle costs of those systems.

In this session, economists and sector representatives (e.g., government, private, academic) discuss methods and findings for analyses assessing the socioeconomic value of environmental satellite systems.
Show/Hide Description
CCS.128: Advancing Sustainability Through Remote Sensing and Artificial Intelligence
D/S.7: Societal Engagement and Impacts — Remote Sensing for Sustainable Development
Sustainability is increasingly becoming a mainstream topic and an imperative for action for societies, governments, and businesses. The United Nations has defined 17 Sustainable Development Goals (SDGs) designed to be the "shared blueprint for peace and prosperity for people and the planet, now and into the future". These goals encompass both the socio-economic and the environmental aspects of sustainable development. Realizing and supporting these goals will require a confluence of multiple technologies along with the participation of all the diverse stakeholders. 

Remote sensing is among the key technologies that offer a cost-effective solution for observing our planet in great detail at a global scale. When augmented with Artificial Intelligence (AI), remote sensing data have huge potential to address sustainability challenges and use-cases and to support new applications. Over the last decade, there have been tremendous advances in remote sensing and AI technologies, including new sensor technologies and platforms, increased coverage, improved spatio-temporal resolution, easier data access, and user-friendly AI libraries for developing models, allowing the development of diverse use-cases across multiple sectors.   

The goal of this session is to bring together researchers and practitioners from academia and industry to highlight and demonstrate inter-disciplinary research in the area of sustainable development. The papers will address various topics at the intersection of remote sensing and AI technologies applied to sustainability use-cases across natural and man-made ecosystems. We hope this session will foster greater collaboration and exchange between the remote sensing, AI, and practitioner communities, enabling the development of a network of researchers with diverse backgrounds and expertise and a common vision of advancing the current state of R&D in applications related to sustainable development. A good balance of speakers from industry and academia among the chosen papers is expected.  

IGARSS, as a premier conference for the remote sensing community, provides an ideal platform for researchers and practitioners to present their work. By hosting this proposed session, IGARSS can offer the community a venue to present work that applies AI and remote sensing to advance sustainability, and in this manner play a catalytic role in finding solutions that contain global warming and advance sustainable development. 
Show/Hide Description
CCS.129: Honoring Dr. Yuriy Shkvarko: Trends and Advances of Remote Sensing Research in Mexico
D/S.7: Societal Engagement and Impacts — Remote Sensing for Sustainable Development
Latin America encompasses one of the larger regions in the world, and there has been increasing development within its universities and research centers in geoscience and remote sensing technologies, particularly in the context of Earth observation, disaster monitoring and risk assessment, and applications to the preservation of local heritage, which is of great importance throughout the region. Within Latin America, Mexico has been promoting its remote sensing community, aiming to improve collaboration between researchers and institutions in both academia and industry.

In 2016, a special issue in IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing was prepared to present developments of the remote sensing community in Latin America, covering a large variety of topics such as innovative computational infrastructures for segmentation algorithms, applications to atmosphere and air quality, phytoplankton blooms, plant species identification, analysis of live fuel moisture content, effects of moisture and geometry on the return of L-band SAR data, forest monitoring, time series analysis for deforestation assessment, cover changes, and long-term variability of reservoirs characteristics. 

As time has gone on, different teams have continued working on using remote sensing techniques for the benefit of their societies. The 2020 Latin American GRSS & ISPRS Remote Sensing Conference (LAGIRS 2020) was planned for March 2020 in Santiago de Chile, Chile; however, it was canceled due to the spread of the coronavirus in Latin America. Among the works submitted to LAGIRS 2020, several papers were dedicated to monitoring natural hazards in Latin American countries. In addition, the first meeting of the Community of Latin American and Caribbean States (CELAC) for space cooperation was held virtually in July 2020; one of its main conclusions was to build operational regional efforts and help areas affected by natural hazards using satellite information. These activities focused on advances in applying remote sensing to monitor damage due to natural hazards. In 2021, two invited sessions titled “Remote Sensing of Natural Hazard in Latin America I and II” were organized within the 2021 IEEE International Geoscience and Remote Sensing Symposium held in Brussels, Belgium, with the presentation of 10 papers. Finally, in 2022, two invited sessions titled “Research Advances of Remote Sensing through Latin America I and II” were organized within the 2022 IEEE International Geoscience and Remote Sensing Symposium held in Kuala Lumpur, Malaysia, with the presentation of 10 papers.
Show/Hide Description
CCS.130: Remote sensing for coastal sustainability
D/S.7: Societal Engagement and Impacts — Remote Sensing for Sustainable Development
Human activities and climate change have significantly changed the global environment, ecosystems, economy, and society in various respects over the past decades, placing a double squeeze and threat on the coastal zone and its sustainable development. According to the United Nations, around 40% of the world’s population lives within 100 km of the coast, and coastal sustainability has become a crucial component of global sustainable development. This session topic is highly related to multiple IGARSS themes, such as O.4 Coastal Zones and D/S.7 Remote Sensing for Sustainable Development, as well as several other technical themes involving new sensors and new methods. However, there is still a lack of a dedicated focus on coastal sustainability supported by emerging remote sensing technologies.

Recent decades have witnessed coastal reclamation and exploitation, coastal ecosystem and environmental evolution, urban population surges, and urban infrastructure expansion, which bring along various social and environmental impacts, e.g., biodiversity loss, ecosystem fragmentation, and climate-change-induced vulnerability for human beings. Sustainable coastal development calls for timely and efficient monitoring of the urban, ecological, and environmental processes, together with their related issues, in coastal regions, including urban sprawl, transportation systems, green space and wetlands, biodiversity, air/water pollution, reclamation and aquaculture, natural disasters, etc. Advanced multisource remote sensing techniques, including airborne and spaceborne optical, SAR, and LiDAR at different resolutions together with in-situ data, can provide fine- to coarse-scale, multi-angle, multi-scale, and multi-frequency observations for coastal monitoring, supporting coastal resilience and sustainable development. This session invites original research that presents the advances, methodologies, and challenges of monitoring different coastal processes and their related issues in coastal regions using multisource remotely sensed data. 
Show/Hide Description
CCS.131: Remote Sensing for Renewable Energy
D/S.7: Societal Engagement and Impacts — Remote Sensing for Sustainable Development
Under the Paris Agreement, a transition to carbon-neutral energy is necessary to meet climate targets by 2050. The energy sector (including electricity, heat, and transport) is the largest emissions driver, accounting for 73.2% of total emissions; power generation and transport account for more than two-thirds of total emissions and are responsible for almost all of the global growth since 2010. Renewable energy is the backbone of any energy transition toward net-zero emissions. Although global demand for renewable energy has grown significantly over the past decade, the transition from conventional to renewable energy is not yet fast enough to put the world on track to achieve carbon neutrality by 2050. Thus, it has become more important to comprehensively understand the benefits of renewable energy (RE) development, which provides not only climate-mitigation solutions but also socio-economic and environmental benefits. For better utilization of renewable energy, optimal planning, management, and effective maintenance of its infrastructure (e.g., photovoltaic roofs, hydroelectric dams, and wind turbines) are becoming increasingly important.

Field investigations can generally provide reliable data on RE infrastructure but usually suffer from high labor intensity, long time requirements, and high costs. In comparison, remote sensing (RS) is a versatile technology that can obtain Earth observation information at various temporal and spatial scales. One or more sensors (e.g., optical, infrared, or microwave devices, or a laser scanner) mounted on a platform (e.g., satellite, aircraft, unmanned aerial vehicle (UAV), or ground-based) capture surface images of the specified area, and advanced image processing algorithms are then applied for information extraction and knowledge inference. RS technologies can provide practical, cost-effective, and relatively objective solutions for observational studies of RE infrastructure, such as urban 3D reconstruction, location optimization, spatial distribution estimation, and structural health monitoring. In addition, the breakthrough of artificial intelligence in recent decades (i.e., the great achievements of deep learning) further enhances RS data processing algorithms in terms of precision and generalization capability. Moreover, depending on electricity demand, renewable energy modules are widely deployed in diverse scenarios such as building rooftops, cultivated land, mountainous areas, water, and road surfaces; in this context, the advantages of RS in terms of wide observation range and rapid data acquisition become even more prominent.
This session focuses on scientific research and technological development with respect to utilizing RS technologies (e.g., aerial/satellite photography, spectral imaging, and radar interferometry techniques) for the planning, management, and maintenance of renewable energy infrastructure. Studies addressing a broader scope of developing RE with RS technologies are also welcome.
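As a simplified, hypothetical illustration of how an RS-derived product can feed such planning, the sketch below converts a per-pixel photovoltaic detection mask into a rough installed-capacity estimate; the pixel size and capacity density used here are assumed values, not figures from any particular study.

```python
# Minimal sketch (assumed, simplified numbers): turning a per-pixel photovoltaic
# detection mask extracted from imagery into a rough installed-capacity estimate.
import numpy as np

pixel_size_m = 0.5             # assumed ground sampling distance of the imagery (m)
capacity_density_kw_m2 = 0.18  # assumed peak capacity per square metre of panel

# Stand-in result of a PV-detection algorithm: True where a pixel is panel
rng = np.random.default_rng(0)
pv_mask = rng.random((1000, 1000)) > 0.995   # ~0.5 % of pixels flagged as PV

panel_area_m2 = pv_mask.sum() * pixel_size_m**2
capacity_kwp = panel_area_m2 * capacity_density_kw_m2
print(f"detected panel area: {panel_area_m2:,.0f} m^2, "
      f"estimated capacity: {capacity_kwp:,.0f} kWp")
```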
Show/Hide Description
CCS.132: Responsible AI4EO
D/S.7: Societal Engagement and Impacts — Remote Sensing for Sustainable Development
The 17 Sustainable Development Goals (SDGs) are the heart of the 2030 Agenda for Sustainable Development, adopted by all United Nations Member States in 2015. Being an urgent call for action by all countries, they recognize that ending poverty and other deprivations must go hand-in-hand with strategies that improve health and education, reduce inequality, and spur economic growth – all while tackling climate change and working to preserve our oceans and forests.

Earth Observation (EO)-derived information is among the most promising tools in supporting the achievement of these goals. It helps us map land cover/uses, understand and better respond to disasters (e.g., floods and wildfires) and assess urban/economic development across the World, to name a few applications.

Artificial Intelligence (AI) also has great potential in itself and in application to EO. In fact, in its various forms (Machine Learning, Deep Learning, reasoning, and visual question answering, to name but a few), AI has been adopted with success in the context of Earth Observation (AI4EO).

Both fields are experiencing a flourishing of ideas and applications, rapid technological development, increasing commoditization of key tools and steep market growth.

However, such potential leaves many open questions:
1. Are EO and AI delivering on their promises in tackling SDGs? And how much?
2. Are there any SDGs that are easier to work with than others? We may be tempted to think that it is easier to apply EO to environmental themes than to those related to human rights, but is that true?
3. In the areas that are supposedly more easily addressed by EO, how much of this potential is tapped?
4. How effective is the support offered by EO? Does it just allow us to look at phenomena from space, or does it empower some more proactive initiatives?
5. What are the obstacles that prevent the adoption of EO and AI? Are they purely technological?

In this session, we invite contributions addressing the role of AI and EO towards the achievement of the UN's SDGs.

We invite diverse applications, such as economic activity and financial assets mapping, advancing urban planning systems and development processes, understanding climate impacts on the environment and society, and training users to employ EO applications in practice.

Also, AI4EO applications that report model uncertainty, demonstrate interpretability and explainability and assess ethical implications are welcome.

Following the increasingly discussed principles of responsible AI, we also invite research on potential algorithmic and data biases in AI for EO and how we can build models that incorporate fairness, social norms and values while ensuring sustainable and inclusive development. To bridge the gap between existing EO expertise that we have built over the years and its benefit in practice, applications that borrow from the human-computer-interaction field are welcome. For example, how do we encourage stakeholders to employ freely available flood maps or wildfire-tracking services during the actual disaster? Involving end-users in the application (co)design and human-in-the-loop AI model development is another area of interest. 
Show/Hide Description
CCS.133: Sustainable Development Goals Through Image Analysis and Earth Observation Data
D/S.7: Societal Engagement and Impacts — Remote Sensing for Sustainable Development
The Sustainable Development Goals (SDGs) adopted by the UN member states provide a framework for action on tackling climate change and promoting prosperity and people's well-being for a better and more sustainable future. Progress toward attaining the SDGs is monitored by analyzing data collected from multiple sources (surveys, government agencies, social media, etc.). In addition, images acquired by sensors onboard Earth Observation (EO) satellites provide an opportunity to monitor the Earth's ecosystems and built infrastructure. EO images provide continuous temporal information over the globe and can cover the most remote areas of the world. Moreover, satellite images can improve and complement conventional statistical in-situ data collection, as well as provide new types of environmental information. Image analysis and data fusion methods continue to impact EO applications such as crop type mapping, mapping of slums and urban areas, sustainable forest management, and disaster monitoring and response. These efforts are well aligned with SDGs such as Zero Hunger (SDG 2), Good Health and Well-being (SDG 3), Sustainable Cities and Communities (SDG 11), Climate Action (SDG 13), Life on Land (SDG 15), and secure property rights (multiple SDGs). 
This session, proposed as part of the IEEE GRSS IADF TC, will focus on the role, opportunities, and challenges of image analysis and fusion-based methods applied to EO data, thereby contributing to the Sustainable Development Goals.
Show/Hide Description
CCS.134: Analysis Ready Data: New Opportunities
D/S.8: Societal Engagement and Impacts — Standardization in Remote Sensing
The term “Analysis Ready Data” (ARD) has different meanings depending on which user group and/or data provider is using or producing it. Back in 2015, the Committee on Earth Observation Satellites (CEOS) Land Surface Imaging Virtual Constellation (LSI-VC) provided one definition of ARD targeting land products (CEOS Analysis Ready Data for Land, CARD4L). CEOS LSI-VC also created a framework and Product Family Specifications (PFS), which provide the means for data providers to assess and meet the requirements for CEOS ARD-compliant datasets. Since then, CEOS Agencies have been working towards these specifications to ensure their datasets comply with CEOS ARD specifications. The private sector has come on board and is recognising the importance of CEOS ARD, particularly with regard to the value of having government datasets comparable and interoperable with their own datasets. This session is organised by CEOS LSI-VC in collaboration with Geoscience Australia and USGS. Its objective is to continue the dialogue between CEOS and the private sector on the ARD topic, in particular to explore new opportunities and challenges for data providers regarding user needs, discovery, and access to CEOS ARD-compliant datasets. 
Show/Hide Description
CCS.135: Standards Evolution Part 1: Sensors, Derived Products, and Data Fusion (a joint session proposal from the GRSS Standards Committee and ESI TC )
D/S.8: Societal Engagement and Impacts — Standardization in Remote Sensing
The application of remote sensing in the geosciences is undergoing a profound evolution as new sensors are developed and constellations of smallsats are deployed by an increasing number of commercial enterprises. The producers of these data must consider sensor cost/performance tradeoffs and how to encode, characterize, and provide access to the output. Users need consistent metrics for evaluating and comparing sensors and derived products, as well as product interoperability for combining data from different sensors. The use of new computing techniques for machine learning allows analysis to be extended to high-dimensional feature spaces, i.e., hyperspace. The creation and adoption of technical standards and best practices is essential to addressing these needs and realizing the full potential of these evolving technologies.

Presentations in this session will examine the benefits of standardization in various aspects of geoscience remote sensing as it pertains to sensor design, operation, and the production of data derived from such sensors. This session is a companion to “Standards Evolution Part 2: Data Management, Services, and Federation”, which focuses more on services supporting access, processing, and eventual exploitation of the data.
Show/Hide Description
CCS.136: Standards Evolution Part 2: Data Management, Services, and Federation (joint session by IEEE GRSS ESI and Standards Committees)
D/S.8: Societal Engagement and Impacts — Standardization in Remote Sensing
Once collected and cleaned, Earth data are stored and served for manifold application purposes, where they have gained immense societal impact over the years in domains such as agriculture and food security, climate change, disaster mitigation, sustainable cities and communities, and other critical societal goals. Recently, technical platforms in support of such goals have started converging under the common concept of Digital Twins, with ever more integration of AI methods for potentially enhanced insight.

A variety of standards exists, issued by different bodies, and they vary significantly with regard to interoperability of implementations, harmonization with related standards, practical usability, availability of compliance test suites, and ultimately uptake by implementers, service providers, and users.

Any decision about which standards to adopt affects architecture, tool selection, and ultimately service quality and the value of the insights gained. Choosing standards wisely is therefore of critical importance for government authorities, the private sector, and academia alike.

In this session, key standards families are presented and critically examined in these respects. All presenters are actively engaged in standardization as editors or co-editors of one or more topical standards addressing Big Data, Digital Twins, the Internet of Things, etc., and their interrelations. As such, this session provides a unique opportunity to get informed about the current Big Earth Data standards landscape, to assess the relevance and impact of the various standards, and to stay abreast of trends and innovations.

This session focuses on the more downstream service and exploitation aspects. It is the companion to "Standards Evolution Part 1: Sensors, Derived Products, and Data Fusion", which addresses the more upstream topics.
Show/Hide Description
CCS.137: Towards standards for data products: Analysis Ready Data, Decision Ready Information, and Geodata Cubes
D/S.8: Societal Engagement and Impacts — Standardization in Remote Sensing
The concept of Analysis Ready Data (ARD) was initially developed by CEOS, the Committee on Earth Observation Satellites, an international interagency member organization coordinating civilian satellite remote sensing activities. CEOS has developed its own definition, CEOS ARD: satellite data that have been processed to a minimum set of quality and compatibility requirements and then published in a form that maximizes interoperability between datasets collected at different times or on different platforms. The goal is to facilitate data analysis and exploitation with minimal user “data wrangling”. CEOS has developed a number of ARD Product Family Specifications and has set up an ARD Oversight Group to manage the development of CEOS ARD specifications. CEOS, not being a standardization organization, has also realized the importance of transitioning ARD specifications to formal status via international standards bodies such as OGC and ISO. Both organizations are currently collaborating with CEOS towards coordinated, internationally recognized ARD standards suites that can be incorporated into governmental and organizational procurement and conformance processes. 
ARD standards are just one aspect of making remote sensing data more FAIR (Findable, Accessible, Interoperable, Reusable). Other aspects include recipes and workflows for Decision Ready Information (ARD integrated and processed to support the needs of a particular application domain), and the organization of large dataset collections into geodata cubes, i.e., information models that index heterogeneous source data within a common set of orthogonal dimensional axes. 
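As a minimal sketch of this datacube idea (illustrative only, using the xarray library and synthetic values), heterogeneous observations can be indexed on a shared set of time/latitude/longitude axes so that different variables become directly comparable:

import numpy as np
import xarray as xr

# Shared, orthogonal dimensional axes: time, latitude, longitude.
times = np.array(["2023-01-01", "2023-01-09", "2023-01-17"], dtype="datetime64[ns]")
lats = np.linspace(-10.0, -9.0, 5)
lons = np.linspace(130.0, 131.0, 5)

# Two heterogeneous variables (synthetic values) indexed on the same axes,
# e.g. optical surface reflectance and SAR backscatter resampled to one grid.
cube = xr.Dataset(
    {
        "surface_reflectance": (("time", "lat", "lon"), np.random.rand(3, 5, 5)),
        "sar_backscatter_db": (("time", "lat", "lon"), -20 + 10 * np.random.rand(3, 5, 5)),
    },
    coords={"time": times, "lat": lats, "lon": lons},
)

# Because both variables share the same axes, queries and reductions align automatically.
temporal_composite = cube.mean(dim="time")                            # per-pixel mean over time
point_series = cube.sel(lat=-9.5, lon=130.5, method="nearest")        # time series at a location
print(temporal_composite)
print(point_series)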
This session will explore the role and latest achievements of standardization in these contexts. It will provide a platform to discuss the way forward within and across multiple standardization bodies with specific standardization strategies and next steps. 
Show/Hide Description
CCS.138: Active Remote Sensing of Global Change
D/S.9: Societal Engagement and Impacts — Remote Sensing for Climate Change Impacts
Climate change is driving significant and visible changes to the global landscape, hydrosphere, and biosphere. 

In the Arctic, rapid and unprecedented changes are occurring as a result of air temperatures increasing at approximately four times the average global rate. These changes include, but are not limited to, the melting of glaciers and ice sheets, thawing of permafrost, and rapid coastal erosion and retreat. In the mid-latitudes, increasingly volatile weather patterns cause drought, flooding, increased frequency and magnitude of storms, and extreme temperatures leading to cold snaps and heat waves. Globally, changes in vegetation phenology and productivity, surface hydrology and runoff, and precipitation regimes, together with increases in wildfire frequency and intensity, are significantly modifying terrestrial landscapes.

Advances in the effective measurement and monitoring of surface topography, water, and vegetation structure are driven by developments in active remote sensing instruments and techniques. These instruments are capable of providing measurements of global water storage, land elevation and deformation, soil moisture, water level, vegetative canopy water fraction, and vegetative structure at novel spatial and temporal scales.
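As one minimal illustration of a measurement principle behind the deformation products mentioned above (a sketch with assumed values, not a description of any specific method presented in this session), repeat-pass InSAR converts an unwrapped interferometric phase change into a line-of-sight surface displacement via d = -λΔφ/(4π):

import numpy as np

def los_displacement(delta_phase_rad: np.ndarray, wavelength_m: float) -> np.ndarray:
    """Line-of-sight displacement from unwrapped repeat-pass interferometric phase.

    d = -lambda * delta_phi / (4 * pi); the sign convention (positive toward the
    sensor) varies between processors, so treat this as illustrative.
    """
    return -wavelength_m * delta_phase_rad / (4.0 * np.pi)

# Assumed example: a C-band wavelength (~5.6 cm). A 2*pi phase change then
# corresponds to roughly half a wavelength (~2.8 cm) of motion along the line of sight.
wavelength = 0.056                      # metres (C-band, assumed)
delta_phi = np.array([np.pi, 2 * np.pi])
print(los_displacement(delta_phi, wavelength))  # ~[-0.014, -0.028] m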

We encourage submissions that advance our scientific understanding of the myriad physical processes associated with climate change through the use of observed or simulated data from sensors and techniques including, but not limited to, InSAR, polarimetry, scatterometry, altimetry, laser mass gravimetry, GNSS, and LiDAR, demonstrating advances in monitoring terrestrial landscapes, topography, ecosystems, and hydrology. This session also seeks to address challenges and uncertainties related to high-resolution 3-D and 4-D mapping through the use of data fusion and novel algorithms applied to active remote sensing data.
Show/Hide Description
CCS.139: Boosting Remote Sensing Applications for local Adaptation to Climate Change
D/S.9: Societal Engagement and Impacts — Remote Sensing for Climate Change Impacts
Climate change is affecting every country on every continent. It is affecting lives and disrupting economies. The urgency of tackling climate change and shifting our society towards sustainable modes of production and consumption has become one of the greatest challenges facing humankind. In addressing these rising environmental concerns, Earth Observation satellites have proved to be a formidable stepping stone, allowing us to accurately understand and monitor our planet, assess climate variables, and depict change. However, despite the growing volume of accessible satellite data and the interest these perspectives have raised in the public and private sectors, a bridge has yet to be built to allow local decision-makers and the wider public to make the most of these approaches for adaptation to climate change. 

To bring these two scales together, the Space for Climate Observatory (SCO) is a unique initiative to support the emergence of projects enabling climate adaptation at the local level. As an international network, the SCO aims to develop projects built upon existing cloud computing infrastructures and satellite data to tackle specific issues, from extreme event management to biodiversity monitoring, using new methodologies and providing operational tools directly to users. These tools are designed in collaboration with those users, to answer specific needs and to allow the wider public and decision-makers to leverage satellite data to help their communities move towards greater adaptation. In addition, the projects are designed to be readily adapted to other geographic areas facing the same environmental stakes.

The objective of this special session is to present operational applications showing how remote sensing and in-situ data can be processed to monitor the impacts of climate change. Some of these applications have emerged through the SCO initiative. 
Show/Hide Description
CCS.140: Cryosphere hazards in the HKH in the wake of climate change
D/S.9: Societal Engagement and Impacts — Remote Sensing for Climate Change Impacts
The cryosphere in the Hindu-Kush-Himalayan (HKH) region is a source of freshwater, ecosystem services, and sustainable livelihoods for around 240 million people residing in the region. It also feeds ten Asian river systems and provides water for drinking, irrigation, and hydropower generation. It is a geologically active region, vulnerable to natural hazards such as landslides and debris flows caused by erosion along steep mountain slopes. Ongoing climate change and global warming have further exacerbated the situation. Glaciers have been melting, resulting in the formation of new glacial lakes that are potentially dangerous and can cause glacial lake outburst floods (GLOFs), with catastrophic impacts on the livelihoods of downstream communities. Furthermore, the emission of short-lived climate pollutants such as black carbon also contributes to glacier melting by decreasing the surface reflectance of sunlight and increasing air temperature.

Snow cover and permafrost are important components of the HKH cryosphere. People in mountainous areas are primarily dependent on snowmelt for their freshwater supply, which is strongly affected by seasonal variations in snow cover. Remote sensing based snow measurements, such as snow-covered area, snow-cover fraction, and snow-cover duration, can be used to monitor these seasonal variations; the snowpack can also be quantified through in-situ measurements of snow water equivalent (SWE) and snow depth. However, the limited availability of in-situ observations is a major constraint on cryosphere research in this region. Unlike snow cover, permafrost cannot be readily observed in remote sensing data. Knowledge about the distribution and characteristics of permafrost is nevertheless necessary because it strongly affects ground characteristics such as mechanical strength and hydraulic permeability. 
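As a minimal sketch of one such remote sensing based snow measurement (band names, reflectance values, and the threshold are assumptions for illustration; operational thresholds are sensor- and region-specific), a snow mask and snow-cover fraction can be derived from the Normalized Difference Snow Index, NDSI = (green − SWIR) / (green + SWIR):

import numpy as np

def snow_cover_fraction(green: np.ndarray, swir: np.ndarray, threshold: float = 0.4) -> float:
    """Fraction of pixels classified as snow from an NDSI threshold.

    NDSI = (green - swir) / (green + swir); a threshold of ~0.4 is a common
    starting point but should be tuned per sensor and region.
    """
    ndsi = (green - swir) / (green + swir + 1e-9)  # small epsilon avoids division by zero
    snow_mask = ndsi > threshold
    return float(snow_mask.mean())

# Synthetic reflectance values: snow is bright in the green band and dark in SWIR.
green = np.array([[0.8, 0.7], [0.2, 0.1]])
swir = np.array([[0.1, 0.1], [0.2, 0.1]])
print(snow_cover_fraction(green, swir))  # -> 0.5 (two of the four pixels classified as snow)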

Despite the importance of the HKH, comparatively little is known about the variability of the cryosphere in this region. Researchers have recently identified growing risks of GLOFs, as well as glacier detachments, hanging glaciers, and cascading GLOFs. Over the last few decades there has been more research in this region, but far more needs to be dedicated to the HKH. There are still large gaps in our understanding and in our technical ability to model the key environmental processes and cause-effect relationships driving the cryosphere's response to climate change.

This community-contributed session is proposed with the objective of fostering the development of new methods to monitor climate change-induced cryosphere changes across the HKH region using remote sensing data and in-situ measurements. Applied research relevant to hazard monitoring is also encouraged.
Show/Hide Description
CCS.141: EO applied to Climate Security scenarios
D/S.9: Societal Engagement and Impacts — Remote Sensing for Climate Change Impacts
Climate Security refers to climate-related events that amplify existing risks in society, endangering the security of humans, key infrastructure, economies, or ecosystems. The topic has become highly relevant in recent years for supporting decision-making processes related to the safety and security of societies from a global perspective, and has grown even more important of late due to rapidly changing geopolitical scenarios that require fast, well-informed decisions. 

Developments in Climate Security require deep knowledge of two main domains, climate and security, in order to identify and establish links between the events occurring in each of them. The final goal is to model these links so as to achieve a holistic understanding of the scenarios, to act in accordance with the findings, and even to predict and prevent potential negative impacts in the future. 

To establish these links between events in a reliable and useful way, it is of utmost importance to consider both the needs of decision-makers and the appropriate set of data sources. For the latter, Earth Observation (EO) is recognized as a key source that can provide high-value and insightful information, due to its unique capability for continuous monitoring at a global scale in a short time, while also enabling long-term trend analysis to detect anomalies with, for instance, time series.

While EO data are a key pillar for the study of these complex scenarios, they also raise new challenges that must be addressed to exploit the possibilities they offer. From a technological point of view, current advanced technologies (e.g. AI, Big Data) have to be tailored to the huge and heterogeneous volumes of EO data. From a scientific point of view, new models and algorithms need to be investigated and developed to reflect reality and thereby guarantee the provision of trustworthy and valuable information for decision-makers. 

In conclusion, the study of Climate Security poses a challenge to the Geoscience and Remote Sensing communities that requires joint effort and collaboration to design and implement new solutions adapted to the different situations. 

This session will focus on how traditional analyses and approaches are evolving into new advanced concepts and initiatives that incorporate EO data to address a range of scenarios. The presentations will include initiatives addressing food security and water security, among others. Concrete examples are the assessment of conflict or migration risk due to climate events, with the main objective of identifying indicators that could enable the creation of conflict pre-warning maps, and the impact of uncontrolled or illegal activities (e.g. illegal underwater extraction) that could lead, in the medium to long term, to difficulties in accessing basic resources or to irreparable damage to critical infrastructure or ecosystems.
Show/Hide Description
CCS.142: Inclusivity in Climate Change Research
D/S.9: Societal Engagement and Impacts — Remote Sensing for Climate Change Impacts
Inclusivity in climate change research is important for improving the quantitative and qualitative insights of climate change research initiatives, projects, and programs. Recent research and outreach initiatives have highlighted the benefits of bottom-up, multi-actor stakeholder and open data science approaches in Earth Observation Science (EOS)-driven climate change research that directly incorporate those who will be most affected: local, native, and indigenous communities. Active community engagement is often key for open mapping initiatives, for fostering community and citizen science data inputs, and for providing additional insight into the implications and potential complications of data-driven geoscience and remote sensing observation of climate change. Share with us your inclusive climate change research and the approaches you have taken to increase citizen, local, native, and indigenous community engagement and thereby extend the potential and outreach of your climate change research. 
Show/Hide Description
CCS.143: Monitoring flood impacts using satellite remote sensing and machine learning - I
D/S.9: Societal Engagement and Impacts — Remote Sensing for Climate Change Impacts
Floods, which constitute around half of all extreme events, are increasing, exposing a larger population to a higher risk of loss of livelihood and property. Mapping historical flood events, near real-time flood monitoring, and flood damage assessment are therefore crucial for mitigating this scale of loss. Satellite imagery offers a unique perspective for monitoring flood events by measuring their extent and duration in near real time and by assessing damage and post-disaster recovery. The current portfolio of public and commercial satellite sensors, with their different measurement principles, presents an opportunity for monitoring flood events at multiple scales and from multiple perspectives. In fact, a diverse array of sensor data is a requirement for producing reliable and accurate maps. There is a significant push among all stakeholders toward taking advantage of the available sensors, of methods such as machine learning, and of cloud computing to produce near real-time flood maps. However, there are several challenges in flood monitoring with respect to how different sources of data are combined, the techniques used for mapping floods, the validation of these methods for increased reliability, and the operationalization of this process for prompt availability. Most importantly, we need to effectively disseminate our combined knowledge of flood maps for immediate support to first responders and for equitable post-disaster damage assessment. The Remote Sensing Environment, Analysis and Climate Technologies Technical Committee (REACT TC) within the IEEE Geoscience and Remote Sensing Society is a group of scientists and engineers specialized in using the state of the art in remote sensing to mitigate the impacts of climate change and to help achieve the United Nations Sustainable Development Goals. This invited session, sponsored by the REACT TC, aims to showcase how various stakeholders in the earth sciences and impact organizations are utilizing sensors, frameworks, and flood data to produce knowledge for flood assessment, mitigation, and recovery. We hope to foster collaborations focused on converting sensor data into action.
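As a minimal sketch of one common ingredient of such near real-time workflows (synthetic values; the use of SAR backscatter and the threshold are assumptions for illustration, not a description of any specific method presented in this session), open water can be delineated in a SAR backscatter image by simple thresholding, since calm water appears dark:

import numpy as np

def flood_mask(backscatter_db: np.ndarray, threshold_db: float = -17.0) -> np.ndarray:
    """Boolean water/flood mask from SAR backscatter in dB.

    Calm open water is a weak scatterer, so pixels below the threshold are
    labelled as water. Real workflows refine this with terrain and permanent-water
    masks, speckle filtering, or learned classifiers.
    """
    return backscatter_db < threshold_db

# Synthetic 3x3 scene (dB): low values mimic water, higher values mimic land.
scene = np.array([
    [-21.0, -20.5, -8.0],
    [-19.5, -9.0, -7.5],
    [-22.0, -18.5, -6.0],
])
mask = flood_mask(scene)
print(mask)
print(f"Flooded fraction: {mask.mean():.2f}")  # -> 0.56 (5 of 9 pixels below -17 dB)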
Show/Hide Description
CCS.144: Monitoring flood impacts using satellite remote sensing and machine learning - II
D/S.9: Societal Engagement and Impacts — Remote Sensing for Climate Change Impacts
Floods, which constitute around half of all extreme events, are increasing, exposing a larger population to a higher risk of loss of livelihood and property. Mapping historical flood events, near real-time flood monitoring, and flood damage assessment are therefore crucial for mitigating this scale of loss. Satellite imagery offers a unique perspective for monitoring flood events by measuring their extent and duration in near real time and by assessing damage and post-disaster recovery. The current portfolio of public and commercial satellite sensors, with their different measurement principles, presents an opportunity for monitoring flood events at multiple scales and from multiple perspectives. In fact, a diverse array of sensor data is a requirement for producing reliable and accurate maps. There is a significant push among all stakeholders toward taking advantage of the available sensors, of methods such as machine learning, and of cloud computing to produce near real-time flood maps. However, there are several challenges in flood monitoring with respect to how different sources of data are combined, the techniques used for mapping floods, the validation of these methods for increased reliability, and the operationalization of this process for prompt availability. Most importantly, we need to effectively disseminate our combined knowledge of flood maps for immediate support to first responders and for equitable post-disaster damage assessment. The Remote Sensing Environment, Analysis and Climate Technologies Technical Committee (REACT TC) within the IEEE Geoscience and Remote Sensing Society is a group of scientists and engineers specialized in using the state of the art in remote sensing to mitigate the impacts of climate change and to help achieve the United Nations Sustainable Development Goals. This invited session, sponsored by the REACT TC, aims to showcase how various stakeholders in the earth sciences and impact organizations are utilizing sensors, frameworks, and flood data to produce knowledge for flood assessment, mitigation, and recovery. We hope to foster collaborations focused on converting sensor data into action.
Show/Hide Description
CCS.145: Quantum Technology for Remote Sensing
S/I.15: Sensors, Instruments and Calibration — Advanced Future Instrument Concepts
Show/Hide Description
CCS.146: Preparing for the Copernicus Imaging Microwave Radiometer (CIMR)
S/M.2: Mission, Sensors and Calibration — Spaceborne Passive Microwave Missions
The Copernicus Imaging Microwave Radiometer (CIMR) expansion mission is one of the six Copernicus Expansion Missions currently being implemented by the European Space Agency and the European Commission. CIMR will provide high-spatial-resolution microwave imaging radiometry measurements and derived products with global coverage and sub-daily revisit in the polar regions and adjacent seas to address Copernicus user needs. The primary instrument is a conically scanning, low-frequency, high-spatial-resolution, multi-channel microwave radiometer. Two satellites are being implemented (to be launched sequentially), each with a design lifetime of 7.5 years and sufficient fuel to last up to 12 years (thus providing up to ~20 years of continuous data), with a first launch anticipated in 2028/29. A dawn-dusk orbit has been selected to fly in coordination with MetOp-SG-B1, allowing collocated data from both missions to be obtained in the polar regions within +/-10 minutes. A conical scanning approach utilizing a large 8 m diameter deployable mesh reflector at an incidence angle of 55 degrees results in a large swath width of ~2000 km. This approach ensures 95% global coverage each day with a single satellite and no hole at the pole in terms of coverage. Channels centred at L-, C-, X-, Ku- and Ka-band are dual polarised, with effective spatial resolutions of < 60 km (L-band), ≤ 15 km (C- and X-band), and < 5 km (Ku- and Ka-band, with a goal of 4 km). Measurements are obtained using both a forward scan and a backward scan arc. On-board processing is implemented to provide robustness against radio frequency interference and enables the computation of modified 3rd and 4th Stokes parameters for all channels. This solution allows many Level-2 geophysical products to be derived over all Earth surfaces, including sea ice (e.g. concentration, thickness, drift, ice type, ice surface temperature), sea surface temperature, sea surface salinity, wind vector over the ocean surface, snow parameters, soil moisture, land surface temperature, vegetation indices, and atmospheric water parameters, amongst others. In preparation for the CIMR mission, this session will focus on encouraging international and cross-disciplinary collaborative activities in the cryosphere, ocean, land, and atmosphere domains.
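As a rough back-of-envelope illustration of why an 8 m reflector yields footprints of this order (an assumed ~820 km orbit altitude and a simple diffraction-limited beamwidth; the mission's stated effective resolutions also reflect scan geometry and processing), the ground footprint can be estimated from the antenna beamwidth and the slant range at 55 degrees incidence:

import numpy as np

def footprint_km(freq_ghz: float, dish_m: float = 8.0,
                 altitude_km: float = 820.0, incidence_deg: float = 55.0):
    """Rough diffraction-limited footprint (cross-track, along-range) in km.

    Assumptions: spherical Earth, 3-dB beamwidth ~ 1.02 * lambda / D, and a
    footprint elongated by 1/cos(incidence) in the range direction. Illustrative only.
    """
    re = 6371.0                                    # Earth radius, km
    wavelength = 0.2998 / freq_ghz                 # metres (c / f, with f in GHz)
    theta_i = np.radians(incidence_deg)
    gamma = np.arcsin(re / (re + altitude_km) * np.sin(theta_i))   # look angle at the satellite
    alpha = theta_i - gamma                                        # Earth-centre angle
    slant = np.sqrt(re**2 + (re + altitude_km)**2
                    - 2 * re * (re + altitude_km) * np.cos(alpha)) # slant range, km
    beamwidth = 1.02 * wavelength / dish_m         # radians
    minor = slant * beamwidth                      # cross-track footprint, km
    major = minor / np.cos(theta_i)                # elongated in the range direction
    return minor, major

# L-band (~1.4 GHz): roughly 35 km x 60 km, the same order as the stated < 60 km.
print([f"{d:.0f} km" for d in footprint_km(1.4)])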
Show/Hide Description
CCS.147: NASA PACE Mission
S/M.4: Mission, Sensors and Calibration — Spaceborne Hyperspectral Missions