Presented by Ronny Hänsch, German Aerospace Center (DLR); Marc Rußwurm, Ecole Polytechnique Fédérale de Lausanne (EPFL Valais); Ribana Roscher, Research Center Jülich; and Claudio Persello, University of Twente
Location: Room 107
Despite the wide and often successful application of machine learning techniques to analyze and interpret remotely sensed data, the complexity, special requirements, and selective applicability of these methods often prevent their use to their full potential. The gap between sensor- and application-specific expertise on the one hand, and a deep insight into and understanding of existing machine learning methods on the other, often leads to suboptimal results, unnecessary or even harmful optimizations, and biased evaluations. At the same time, after the initial success of Deep Learning in Remote Sensing, new topics are emerging that address remaining challenges of particular importance in Earth Observation applications, including interpretability, domain shifts, and label scarcity.
The tutorial aims to bridge this gap. To this end, it will be divided into four individual sessions:
Suitable for PhD students, research engineers, and scientists. Basic knowledge of machine learning is required.
Presented by Michael Mommert, Joëlle Hanna, Linus Scheibenreif, and Damian Borth, University of St. Gallen
Location: Room 106
Deep Learning methods have proven highly successful across a wide range of Earth Observation (EO)-related downstream tasks, such as image classification, image-based regression and semantic segmentation. Supervised learning of such tasks typically requires large amounts of labeled data, which oftentimes are expensive to acquire, especially for EO data. Recent advances in Deep Learning provide the means to drastically reduce the amount of labeled data needed to train models to a given performance and to improve the general performance of these models on a range of downstream tasks. As part of this tutorial, we will introduce and showcase the use of three such approaches that strongly leverage the multi-modal nature of EO data: data fusion, multi-task learning and self-supervised learning. The fusion of multi-modal data may improve the performance of a model by providing additional information; the same applies to multi-task learning, which supports the model in generating richer latent representations of the data by means of learning different tasks. Self-supervised learning enables the learning of rich latent representations from large amounts of unlabeled data, which are ubiquitous in EO, thereby improving the general performance of the model and reducing the amount of labeled data necessary to successfully learn a downstream task. We will introduce the theoretical concepts behind these approaches and provide hands-on tutorials for the participants using Jupyter Notebooks. Participants, who are required to have some basic knowledge of Deep Learning with PyTorch, will learn through realistic use cases how to apply these approaches in their own research for different data modalities (Sentinel-1, Sentinel-2, land-cover data, elevation data, seasonal data, weather data, etc.). Finally, the tutorial will provide the opportunity to discuss the participants’ use cases in detail.
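To give a flavor of what multi-modal fusion looks like in code, the following is a minimal PyTorch sketch of late fusion (not taken from the tutorial notebooks): two small encoders for hypothetical Sentinel-1 and Sentinel-2 patches whose features are concatenated before a shared classification head. Band counts, patch sizes, and the architecture are illustrative assumptions only.

```python
import torch
import torch.nn as nn

class FusionNet(nn.Module):
    """Minimal late-fusion sketch: separate encoders for Sentinel-1 (2 bands)
    and Sentinel-2 (13 bands), concatenated features, shared classifier."""
    def __init__(self, n_classes=10):
        super().__init__()
        def encoder(in_ch):
            return nn.Sequential(
                nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(),
                nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.s1_encoder = encoder(2)    # e.g. VV/VH backscatter (assumed)
        self.s2_encoder = encoder(13)   # Sentinel-2 spectral bands (assumed)
        self.head = nn.Linear(64 + 64, n_classes)

    def forward(self, s1, s2):
        fused = torch.cat([self.s1_encoder(s1), self.s2_encoder(s2)], dim=1)
        return self.head(fused)

model = FusionNet()
s1 = torch.randn(4, 2, 64, 64)   # dummy SAR patch batch
s2 = torch.randn(4, 13, 64, 64)  # dummy multispectral patch batch
print(model(s1, s2).shape)       # torch.Size([4, 10])
```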
This tutorial presents Deep Learning methods and approaches that are of increasing importance for the analysis of the vast amounts of EO data that are available. Knowledge of the methods and approaches presented in this tutorial enables researchers to achieve better results and/or to train their models on smaller amounts of labeled data, therefore potentially boosting the scientific output of the IGARSS community. While not a “beginner level” tutorial, this proposed tutorial seamlessly builds upon previous tutorials (e.g., “Machine Learning in Remote Sensing - Theory and Applications for Earth Observation” at IGARSS 2020, 2021 and 2022), but does not require participants to have attended any of these previous tutorials. Finally, the content of this proposed tutorial will be presented in such a way that it is accessible and relevant to researchers utilizing EO data across a wide range of disciplines and for a variety of applications and use cases.
Participants are required to have basic knowledge in Deep Learning and experience with the Python programming language and the PyTorch framework for Deep Learning.
Presentation slides containing theoretical concepts and Jupyter Notebooks containing code examples and hands-on exercises will be provided through GitHub.
Participation in the hands-on tutorials will require a Google account to access Google Colab and the participants’ personal Google Drive; participants should have at least 1 GB of free space on their Google Drive. Alternatively, participants can run the provided Jupyter Notebooks locally on their laptops.
Presented by Gabriele Cavallaro, Rocco Sedona, Jülich Supercomputing Centre (JSC); Manil Maskey, NASA; Iksha Gurung and Muthukumaran Ramasubramanian, University of Alabama
Location: Room 208
Recent advances in remote sensors with higher spectral, spatial, and temporal resolutions have significantly increased data volumes, which poses a challenge for processing and analyzing the resulting massive data in a timely fashion to support practical applications. Meanwhile, the development of computationally demanding Machine Learning (ML) and Deep Learning (DL) techniques (e.g., deep neural networks with massive amounts of tunable parameters) demands parallel algorithms with high scalability. Therefore, data-intensive computing approaches have become indispensable tools to deal with the challenges posed by applications from geoscience and Remote Sensing (RS). In recent years, high-performance and distributed computing have advanced rapidly in terms of hardware architectures and software. For instance, the popular graphics processing unit (GPU) has evolved into a highly parallel many-core processor with tremendous computing power and high memory bandwidth. Moreover, recent High Performance Computing (HPC) architectures and parallel programming have been influenced by the rapid advancement of DL and hardware accelerators such as modern GPUs.
ML and DL have already brought crucial achievements in solving RS data classification problems. State-of-the-art results have been achieved by deep networks with backbones based on convolutional transformations (e.g., Convolutional Neural Networks (CNNs), Recurrent Neural Networks (RNNs), Generative Adversarial Networks (GANs)). Their hierarchical architecture, composed of stacked repetitive operations, enables the extraction of useful informative features from raw data and the modeling of high-level semantic content of RS data. On the one hand, DL can lead to more accurate results when networks are trained over large annotated datasets. On the other hand, deep networks pose challenges in terms of training time: training a DL model on large datasets requires non-negligible time resources.
In the case of supervised ML/DL problems, a hybrid approach is required in which training is performed on HPC systems and inference on new data is performed in a cloud computing environment. This hybrid approach minimizes the cost of training while optimizing real-time inference. It also allows a research ML model on HPC to transition to a production ML model in a cloud computing environment—a pipeline that reduces the complexity of putting ML models into practice.
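To make the HPC-to-cloud handover concrete, here is a minimal, hypothetical PyTorch sketch (not part of the official course material): a model assumed to have been trained on an HPC system is serialized as TorchScript and then reloaded for inference as it would be in a cloud service. The model, file name, and input shape are placeholders.

```python
import torch
import torch.nn as nn

# A toy model standing in for a network trained on an HPC system
model = nn.Sequential(nn.Conv2d(13, 16, 3, padding=1), nn.ReLU(),
                      nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 5))
model.eval()

# On the HPC side: serialize the trained model as TorchScript so it can be
# shipped to a cloud inference service without the training code.
scripted = torch.jit.script(model)
scripted.save("model_scripted.pt")   # placeholder artifact name

# On the cloud side: load the artifact and run inference on new data.
deployed = torch.jit.load("model_scripted.pt")
with torch.no_grad():
    scene = torch.randn(1, 13, 64, 64)   # placeholder for a new scene patch
    print(deployed(scene))
```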
The theoretical parts of the tutorial provide a complete overview of the latest developments in HPC systems and cloud computing services. The participants will understand how the parallelization and scalability potential of HPC systems is fertile ground for the development and enhancement of ML and DL methods. The audience will also learn how high-throughput computing (HTC) systems make computing resources accessible and affordable via the Internet (cloud computing) and how they represent a scalable and efficient alternative to HPC systems for particular ML tasks.
For the practical parts of the tutorial, the attendees will receive access credentials to work with the HPC systems of the Jülich Supercomputing Centre and AWS cloud computing resources. To avoid wasting time during the tutorial (e.g., setting up environments from scratch, installing packages), the selected resources and tools will be set up in advance by the course organizers. The participants will be able to start working on the exercises directly with our implemented algorithms and data.
The participants will work through an end-to-end ML project in which they will train a model and optimize it for a data science use case. They will first learn how to speed up the training phase through state-of-the-art HPC distributed DL frameworks. Finally, they will use cloud computing resources to create a pipeline that pushes the model into the production environment and evaluates it against new and real-time data.
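As an illustration of what distributed data-parallel training looks like in code, the following is a minimal sketch using PyTorch's DistributedDataParallel on a dummy dataset; it is not the tutorial's actual exercise, and the model, dataset, and hyperparameters are placeholder assumptions. It would be launched with torchrun so the process-group environment variables are set.

```python
import os
import torch
import torch.distributed as dist
import torch.nn as nn
from torch.nn.parallel import DistributedDataParallel as DDP
from torch.utils.data import DataLoader, TensorDataset, DistributedSampler

def main():
    # torchrun sets RANK, LOCAL_RANK and WORLD_SIZE for each process
    dist.init_process_group(backend="nccl" if torch.cuda.is_available() else "gloo")
    local_rank = int(os.environ.get("LOCAL_RANK", 0))
    device = torch.device(f"cuda:{local_rank}" if torch.cuda.is_available() else "cpu")

    # Dummy dataset standing in for labeled EO patches
    data = TensorDataset(torch.randn(1024, 13, 8, 8), torch.randint(0, 5, (1024,)))
    sampler = DistributedSampler(data)                 # shards data across workers
    loader = DataLoader(data, batch_size=32, sampler=sampler)

    model = nn.Sequential(nn.Flatten(), nn.Linear(13 * 8 * 8, 5)).to(device)
    model = DDP(model, device_ids=[local_rank] if torch.cuda.is_available() else None)
    opt = torch.optim.SGD(model.parameters(), lr=1e-2)
    loss_fn = nn.CrossEntropyLoss()

    for epoch in range(2):
        sampler.set_epoch(epoch)                       # reshuffle shards each epoch
        for x, y in loader:
            opt.zero_grad()
            loss = loss_fn(model(x.to(device)), y.to(device))
            loss.backward()                            # gradients are all-reduced here
            opt.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()  # launch with: torchrun --nproc_per_node=<gpus> this_script.py
```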
Machine learning and deep learning background. Advanced knowledge of classical machine learning algorithms, Convolutional Neural Networks (CNNs), Python programming with basic packages (NumPy, scikit-learn, Matplotlib) and DL packages (PyTorch and/or TensorFlow). Each participant has to bring a laptop (running Windows, macOS, or Linux).
Presented by Prof. Dr. Mrinalini Kochupillai, Technical University of Munich; Dr. Conrad Albrecht, German Aerospace Center; Dr. Matthias Kahl, Burak Ekim, Bundeswehr University (Munich); and Isabelle Tingzon, Technical University of Munich
Location: Room 212
Overview: This tutorial will be based on a tutorial paper recently accepted by the Geoscience and Remote Sensing Magazine, titled “Earth Observation and Artificial Intelligence: Understanding Emerging Ethical Issues and Opportunities” (Kochupillai et al., 2022; authors: Mrinalini Kochupillai, Matthias Kahl, Michael Schmitt, Hannes Taubenböck and Xiaoxiang Zhu). Ethics is a central and growing concern in all applications utilizing Artificial Intelligence (AI) and Big Data. Earth Observation (EO) or Remote Sensing (RS) research relies heavily on both Big Data and AI or Machine Learning (ML). While this reliance is not new, with increasing image resolutions and the growing number of EO/RS use cases that have a direct impact on governance, policy, and the lives of people, ethical issues are taking center stage. In this tutorial, we provide scientists engaged in AI4EO research (i) a practically useful overview of the key ethical issues emerging in this field, with concrete examples from within EO/RS to explain these issues, (ii) a first road map (flowchart) and questionnaire that scientists can use to identify ethical issues in their ongoing research, (iii) examples of novel ethical opportunities that EO/RS research can help realize, and (iv) six case studies that participants can work on in small groups during the tutorial to gain hands-on experience in identifying and overcoming ethical issues in their daily research. This tutorial will bring ethics from abstract high-level concepts that are challenging to understand and apply down to the “desktops” of EO domain experts. This approach will sensitize scientists to issues of ethics that are already alive in their daily research, thus creating a bridge for constructive and regular communication between remote sensing scientists and ethics researchers.
Tutorial Segments: The full-day tutorial will be split into two broad segments:
Morning Session: In the morning session, our team will explain the basic concepts of ethics, including prominent ethical duties and guidelines, using popular examples as well as concrete illustrations from within EO/RS research fields. Thereafter, the method of using the flowchart and questionnaire developed by Kochupillai et al. 2022 will be explained using one example from EO/RS research. The goal is to help scientists identify ethical issues and opportunities in the early stages of their research. The session will be a combination of: (i) lecture-style presentations, (ii) the Socratic method of teaching (i.e., a series of questions posed to participants to determine their current level of knowledge), and (iii) open “brainstorming” so the participants can be enriched with each other’s knowledge and understanding.
Post-Lunch Session: In the one hour before and immediately after lunch, the team will present six case studies deriving from ongoing or common EO/RS research fields (20 minutes per case study). Thereafter, all participants will be randomly assigned to 6 groups and each group will be assigned one of the presented case studies. The groups will then have 60 minutes to discuss the case study among themselves and use the content/knowledge gained in the morning session to (i) identify the most prominent ethical risks and opportunities in their case study, (ii) brainstorm on approaches that can be adopted to overcome or minimize the ethical risks and maximize ethical opportunities, and (iii) present their findings to the bigger group.
The tutorial provides remote sensing scientists with a fresh perspective on several prominent EO/RS research fields. It also sheds light on how approaching these fields with an “ethically mindful eye” can help scientists design and execute novel state-of-the-art research capable of contributing to the UN Sustainable Development Goals without compromising ethics.
The Six Case Studies for the post-lunch session will be on the following topics:
Tentative Structure of the Tutorial Sessions:
At the end of the tutorial, participants will:
No prerequisites. Any/all participants who have a background in Earth Observation, Remote Sensing, Artificial Intelligence/Machine Learning, Ethics, or Science Communication are welcome to join the tutorial and will learn/benefit from participation.
Presented by Nikolaos-Ioannis Bountos, Ioannis Prapas, Spyros Kondylatos, Maria Sdraka, and Ioannis Papoutsis, Orion Lab, Institute for Astronomy, Astrophysics, Space Applications and Remote Sensing, National Observatory of Athens
Location: Room 101
The rapid advances of Deep Learning (DL) combined with the diverse and massive amounts of freely available earth observation (EO) data open new paths for monitoring and predicting the risk of natural hazards and assessing their impact. Monitoring (e.g., providing early warnings or estimating the extent of a disaster promptly), as well as risk prediction (e.g., forecasting disasters), can prove crucial for decision making, allowing for improved emergency response and potentially reducing casualties and negative effects. This tutorial will provide a complete guide to the subject, starting from foundational ideas and data handling and moving on to state-of-the-art artificial intelligence methods dealing with a diverse set of natural hazards including wildfires, volcanic activity, floods and earthquakes. The ultimate goal is to attract researchers and geoscientists to work on such crucial tasks and equip them with the necessary tools and knowledge needed to tackle them. In particular, the tutorial will cover applications of DL for all three stages of the emergency management cycle (forecasting and preparedness, early warning, monitoring and impact assessment), covering the curation of spatio-temporal datasets and presenting common problems in the context of DL for natural hazards management, such as lack of labels and naturally occurring class imbalance, as well as methods to work around them. Finally, as emergency management requires action, the last part of the tutorial will cover methods that enhance the interpretability of the models, with a focus on explainable AI and Bayesian methods that provide an estimate of uncertainty. The tutorial will cover different types of remote sensing datasets, including multi-spectral data, synthetic aperture radar (SAR) data and interferometric SAR data, along with meteorological, landscape and other geospatial data.
This tutorial aims to train the attending researchers on the use of state-of-the-art artificial intelligence methods to develop early warning, forecasting and monitoring systems for natural hazards using multi-modal remote sensing datasets. The tutorial will be split into two parts. The first part will focus on theoretical aspects, common problems, workarounds, tips and tricks. The second part will involve the demonstration of SoTA methods through Python Jupyter notebooks that can be run by the participants. Due to the nature of DL algorithms requiring significant time to train, toy dataset examples and/or pretrained models will be prepared.
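As a small illustration of one such workaround, the following hedged PyTorch sketch shows inverse-frequency class weighting in a cross-entropy loss for a hypothetical, heavily imbalanced burned-area segmentation batch; the label statistics and tensor shapes are made up for the example and are not taken from the tutorial's datasets.

```python
import torch
import torch.nn as nn

# Hypothetical segmentation labels for a batch of 8 patches:
# class 0 = unburned (majority), class 1 = burned (~2% of pixels).
labels = (torch.rand(8, 256, 256) < 0.02).long()

# Inverse-frequency class weights counteract the imbalance in the loss
counts = torch.bincount(labels.flatten(), minlength=2).float()
weights = counts.sum() / (2.0 * counts)

# Stand-in for per-pixel model logits (2 classes)
logits = torch.randn(8, 2, 256, 256, requires_grad=True)
loss = nn.CrossEntropyLoss(weight=weights)(logits, labels)
loss.backward()
print("class weights:", weights, "loss:", loss.item())
```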
The tutorial will cover the following subjects in particular:
We assume basic knowledge of ML/DL methods, Python, and earth observation.
In this tutorial we will provide working examples in the form of Jupyter notebooks along with the necessary data. Participants are required to have a laptop equipped with a modern browser. The Python libraries needed will be shared before the tutorial.
Presented by Dr. Franz J. Meyer, University of Alaska Fairbanks; Heidi Kristenson, Alaska Satellite Facility
Location: Room 211
Are you interested in using Synthetic Aperture Radar (SAR) data, but don’t know where to start? This 4-hour course will provide an introduction to the fundamentals of SAR acquisition and dataset characteristics, explore access to freely available datasets, discuss and demonstrate image interpretation, present a range of research and monitoring applications of SAR data using products such as Radiometric Terrain Corrected (RTC) imagery and interferometric SAR (InSAR) datasets, and inform users of new datasets and resources on the horizon.
SAR sensors can image the Earth’s surface regardless of cloud conditions or light availability, and can measure very small amounts of surface deformation. This makes SAR a valuable and versatile tool for mapping surface processes during natural disasters, monitoring and quantifying surface deformation caused by geological processes, and providing reliable acquisitions in areas with persistent cloud cover.
With new missions such as NASA-ISRO SAR (NISAR) on the horizon, and existing missions such as Sentinel-1 providing reliable time series data, this is an exciting time to learn more about the properties and processing techniques behind SAR. Many potential users encounter barriers when trying to use SAR, as most datasets require additional processing before they are useful for analysis, and users must understand some of the fundamental concepts about SAR acquisition and image processing to successfully interpret SAR datasets.
This tutorial will help attendees move beyond these barriers and engage them in the exploration of SAR data and workflows. We will provide insight into the key concepts of SAR acquisition and image generation, demonstrate freely available resources and services that provide access to analysis-ready products, and dig into workflows such as flood mapping and change detection that attendees can easily implement without extensive SAR expertise or specialized software. This course will be focused on learning SAR fundamentals and exploring basic analysis workflows using user-friendly interfaces.
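For readers who want a feel for how simple such workflows can become once analysis-ready RTC products are in hand, here is a hedged Python sketch of threshold-based flood mapping on simulated backscatter arrays; the threshold value and array contents are assumptions for illustration only, and the course itself uses user-friendly interfaces rather than code.

```python
import numpy as np

# Placeholder arrays standing in for calibrated RTC backscatter (in dB) of the
# same scene before and during a flood; real data would be read from GeoTIFFs.
rng = np.random.default_rng(1)
pre_db = rng.normal(-11.0, 2.0, (512, 512))
post_db = pre_db.copy()
post_db[200:320, 100:400] -= 8.0          # open water darkens the SAR image

# Simple flood mapping: pixels that drop below a backscatter threshold
threshold_db = -17.0                       # assumed value; tune per scene/land cover
flood_mask = (post_db < threshold_db) & (pre_db >= threshold_db)
print("flooded pixels:", int(flood_mask.sum()))
```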
[This tutorial can serve as a stand-alone session for those interested in learning SAR basics, but will also lay the foundation for a second half-day tutorial (SAR at Scale) focused on programmatic workflows using available services, open-source python packages and a JupyterLab environment to process large volumes of SAR data.]
None. This course is geared towards those with little to no experience with SAR.
Presented by Giuseppe Scarpa and Matteo Ciotola, University Federico II of Naples (I)
Location: Room 207
In the last decade, Deep Learning (DL) has had a strong impact on many Remote Sensing applications, driving a change of paradigm in which data are regarded as the primary source of knowledge, as opposed to engineered and explainable models. Data fusion applications, at the pixel, feature, or decision level, are among those most affected by this paradigm shift toward data-driven solutions.
Despite its promise of solutions with unprecedented performance, DL has also opened, or at least reinvigorated, the no less important problem of quality assessment, since generalization issues can easily occur if a DL solution is not properly validated and tested.
Within this framework, to be concrete and to convey useful information to the attendees, this tutorial will focus on a specific pixel-level fusion problem: pansharpening, a particular instance of multi-resolution fusion. Pansharpening combines a lower-resolution multispectral image with a higher-resolution panchromatic band to provide the missing high-resolution multispectral image. In particular, the tutorial will provide theoretical and practical elements for developing convolutional neural networks for pansharpening, without neglecting quality assessment and generalization issues.
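As a taste of what such a network can look like, below is a minimal, hypothetical PyTorch sketch of a three-layer pansharpening CNN in the spirit of early CNN-based approaches; the layer sizes, band count, and resolution ratio are illustrative assumptions and not the architectures covered in the tutorial.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SimplePansharpeningCNN(nn.Module):
    """Sketch: upsample the multispectral (MS) image to the panchromatic (PAN)
    grid, stack the bands, and regress the sharpened MS image."""
    def __init__(self, ms_bands=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(ms_bands + 1, 48, 9, padding=4), nn.ReLU(),
            nn.Conv2d(48, 32, 5, padding=2), nn.ReLU(),
            nn.Conv2d(32, ms_bands, 5, padding=2))

    def forward(self, ms, pan):
        ms_up = F.interpolate(ms, size=pan.shape[-2:], mode="bicubic",
                              align_corners=False)
        return self.net(torch.cat([ms_up, pan], dim=1))

model = SimplePansharpeningCNN()
ms = torch.randn(1, 4, 64, 64)      # low-resolution multispectral patch
pan = torch.randn(1, 1, 256, 256)   # high-resolution panchromatic patch
print(model(ms, pan).shape)         # torch.Size([1, 4, 256, 256])
```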
Basic knowledge of machine/deep learning and of the Python programming language. Knowledge of Python tools such as PyTorch or TensorFlow may also help.
Presented by Prof. James L Garrison, Purdue University; Prof. Adriano Camps, Universitat Politechnica de Catalunya (UPC); and Dr. Estel Cardellach, Institute of Space Sciences (ICE-CSIC/IEEC)
Location: Room 105
Although originally designed for navigation, signals from Global Navigation Satellite Systems (GNSS), i.e., GPS, GLONASS, Galileo and COMPASS, exhibit strong reflections from the Earth and ocean surface. Effects of rough surface scattering modify the properties of the reflected signals. Several methods have been developed for inverting these effects to retrieve geophysical data such as ocean surface roughness (winds) and soil moisture.
Extensive sets of airborne GNSS-R measurements have been collected over the past 20 years. Flight campaigns have included penetration of hurricanes with winds up to 60 m/s and flights over agricultural fields with calibrated soil moisture measurements. Fixed, tower-based GNSS-R experiments have been conducted to make measurements of sea state, sea level, soil moisture, ice and snow as well as inter-comparisons with microwave radiometry.
GNSS reflectometry (GNSS-R) methods enable the use of small, low-power, passive instruments. The power and mass of GNSS-R instruments can be made low enough to enable deployment on small satellites, balloons and UAVs. Early research sets of satellite-based GNSS-R data were collected by the UK-DMC satellite (2003), TechDemoSat-1 (2014) and the 8-satellite CYGNSS constellation (2016). Future mission proposals, such as GEROS-ISS (GNSS rEflectometry, Radio-Occultation and Scatterometry on the International Space Station) and the GNSS Transpolar Earth Reflectometry exploriNg System (G-TERN), would demonstrate new GNSS-R measurements of sea surface altimetry and sea ice cover, respectively. The availability of spaceborne GNSS-R data, and the development of new applications from these measurements, are expected to increase significantly following the launch of these new satellite missions and other smaller ones (ESA’s PRETTY and FSSCat; China’s FY-3E; Taiwan’s FS-7R).
Recently, methods of GNSS-R have been applied to satellite transmissions in other frequencies, ranging from P-band (230 MHz) to K-band (18.5 GHz). So-called “Signals of Opportunity” (SoOp) methods enable microwave remote sensing outside of protected bands, using frequencies allocated to satellite communications. Measurements of sea surface height, wind speed, snow water equivalent, and soil moisture have been demonstrated with SoOp.
This half-day tutorial will summarize the current state of the art in physical modeling, signal processing and application of GNSS-R and SoOp measurements from fixed, airborne and satellite-based platforms.
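To give a flavor of the signal processing involved, the following is a hedged NumPy sketch of the basic GNSS-R delay correlation: a noisy, delayed copy of a PRN-like code (standing in for the surface-reflected signal) is correlated against delayed replicas to build a delay waveform and locate the specular peak. The code model, delay, and noise level are toy assumptions, not tutorial material.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical parameters (illustration only, not a real C/A-code model)
chips = 1023                    # length of a GPS C/A-like PRN sequence
samples_per_chip = 4
n = chips * samples_per_chip

# Replica code: random +/-1 chips, oversampled
code = np.repeat(rng.choice([-1.0, 1.0], size=chips), samples_per_chip)

# Simulated reflected signal: delayed, attenuated, noisy copy of the code
true_delay = 137                # delay in samples (assumed for this sketch)
reflected = 0.3 * np.roll(code, true_delay) + rng.normal(0.0, 1.0, n)

# Delay waveform: correlate the received signal against delayed replicas
delays = np.arange(0, 400)
waveform = np.array([np.dot(reflected, np.roll(code, d)) / n for d in delays])

print("estimated specular delay:", delays[np.argmax(waveform)])
```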
After attending this tutorial, participants should have an understanding of:
Basic concepts of linear systems and electrical signals. Some understanding of random variables would be useful.
Presented by Anca Anghelea, ESA; Manil Maskey, NASA; Naoko Sugita, JAXA; Barry Leffer, NASA; Claudia Vitolo, ESA; Shinichi Sobue, JAXA; Daniel Santillan, EOX IT Services GmbH; Alessandro Scremin, Rhea Group; and Leo Thomas, Development Seed
Location: Room 104
The European Space Agency (ESA), Japan Aerospace Exploration Agency (JAXA), and National Aeronautics and Space Administration (NASA) have combined their resources, technical knowledge, and expertise to produce the Earth Observing (EO) Dashboard (https://eodashboard.org), which strengthens our global understanding of the changing environment and the impact of human activity.
The EO Dashboard is an Open Science initiative of the three agencies. It presents EO data from a variety of EO satellites, served by various ESA, NASA, and JAXA services and APIs in a simple and interactive manner. Additionally, by means of scientific storytelling, it brings the underlying science closer to the public providing further insights into the data and their scientific applications.
The EO Dashboard makes use of various EO Platform services and is developed on top of the Euro Data Cube (EDC) infrastructure (https://eurodatacube.com).
This tutorial aligns with the three agencies’ open science vision as well as with the GRSS Technical Committees’ EO data infrastructure initiatives.
The tutorial is self-contained, and participants will be provided with the necessary information and support to run all exercises.
A basic level of knowledge is expected in the following fields:
Participants will be working in a JupyterLab Workspace in the Euro Data Cube (EDC) environment. Access to the EDC services will be provided free of charge by ESA.
Free trial subscriptions to EDC can be used at any time to get familiar with the platform, but specific tutorial notebooks and other material will only be available in the free environment provided for the tutorial. All Jupyter Notebooks used in the tutorial will also be openly available on GitHub.
To use the EDC services participants need to register for an EDC account using a specific EVENT URL announced prior to the tutorial. Upon registration, a tailored workspace is made available either immediately or at the start of the event. A notification via email is sent to participants once the workspace is available at https://hub.eox.at.
Presented by Chelle Gentemann, Yvonne Ivey-Parker, Cynthia Hall, Isabella Bello Martinez, Paige Martin, NASA Headquarters
Location: Room 207
This is the first module in NASA’s new open science curriculum. Complete all five modules to earn NASA’s open science certification. The other four modules are available through virtual cohorts, summer schools, and online.
Open-source science, conducting science openly from project initiation through implementation, increases access to knowledge, expands opportunities for new voices to participate, and thereby accelerates discovery. The result is the inclusion of a wider, more diverse community in the scientific process as close to the start of research activities as possible. This increased level of commitment to conducting the full research process openly and without restriction enhances transparency and reproducibility, which engenders trust in the scientific process. It also represents a cultural shift that encourages collaboration and participation among practitioners of diverse backgrounds, including scientific discipline, gender, ethnicity, and expertise.
Success, however, depends on all of us working to change the paradigms and frameworks from which we operate. This is why NASA is pursuing an open-source science ethos; open-source science embraces the principles of open science and activates it in a way that unlocks the full potential of a more equitable, impactful, efficient, scientific future.
To spark change and inspire open science engagement, NASA has created the Transform to Open Science (TOPS) mission and declared 2023 the Year of Open Science.
But what does open science look like in practice? How does it lead to better results? How does it foster more diverse and inclusive scientific communities and research practices? In this tutorial, we shall introduce the core principles of open science.
Learners will become familiar with concrete examples of the benefits of open science, and be provided with resources to further open science in their own research. The session will include best practices for building open science communities, increasing collaboration, and introducing open principles to project design, as well as an overview of open science norms. This tutorial will also explore the historical impact of “closed” science, and how open science seeks to create a more diverse and equitable scientific community.
Ultimately, participating in this tutorial provides dialogue and training around open science, while creating a community which designs its scientific endeavors to be open from the start. It supports the Office of Management and Budget (OMB) revised Circular A-130, “Managing Information as a Strategic Resource,” and other federal agency policies (e.g., NASA’s new scientific information policy, SPD-41). This tutorial will advance that information policy by helping the scientific community understand how to openly share their data, software, and results for maximum scientific impact.
By the end of this tutorial, learners will understand:
None
Presented by Dr. Forrest F. Williams, Alex Lewandowski, Alaska Satellite Facility; Dr. Franz J. Meyer, University of Alaska Fairbanks; and Dr. Joseph H. Kennedy, Alaska Satellite Facility
Location: Room 211
Synthetic Aperture Radar (SAR), with its capability of imaging day or night, ability to penetrate dense cloud cover, and suitability for interferometry, is a robust dataset for event/change monitoring. Sentinel-1 SAR data is freely available globally and can be used to inform scientists and decision-makers interested in geodynamic signals or dealing with natural and anthropogenic hazards such as floods, earthquakes, deforestation, and glacier movement.
The IGARSS community is typically well-versed in SAR and related processing algorithms, which often require complex processing and specialized software to generate analysis-ready datasets. However, the sheer volume of data provided by the current Sentinel-1 mission, along with new data from upcoming missions such as NISAR and ROSE-L, and projects such as OPERA, can quickly become overwhelming. ESA’s Sentinel-1 mission has produced ~10 PB of data since its launch in 2014, and the upcoming NASA-ISRO SAR (NISAR) mission is expected to produce ~100 PB of data in just two years. These large datasets and dense time series will make working with SAR data in the cloud an appealing, if not necessary, adaptation. Unfortunately, very few platforms currently exist for working with SAR data in the cloud, and even scientists well-versed in scientific programming may find the tools used to work in a cloud-computing framework unfamiliar and difficult to understand.
This 4-hour course will explore using advanced SAR remote sensing data and methods within a cloud-computing ecosystem to advance the community’s capacity to process large volume SAR data at scale. We will explore two scientific use cases in depth, using freely-available open SAR data, services, and tools.
Attendees will learn how to use OpenSARLab, ASF’s cloud-hosted JupyterHub computing environment, to perform a variety of SAR analyses in a programmatic environment suited to big data analytics. This will include basics such as SAR data search and discovery, and how to interact with ASF’s on-demand SAR processing services. Attendees will use these tools to perform multiple publication-quality analyses, including an RTC-based change detection analysis to identify landslide occurrence following the August 2021 Haitian earthquake, and an InSAR time-series analysis that recreates the recent discovery of volcanic activity at the Mt Edgecumbe volcano in Alaska. The time-series analysis will utilize the Miami InSAR Time-series in Python (MintPy) package, an open-source tool for performing InSAR time-series analyses that is quickly becoming a community standard. After attending this course, attendees will have learned how to work within a cloud-hosted JupyterHub environment, use ASF’s open-source Python tools to obtain data, and perform SAR change-detection and time-series analyses.
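To illustrate the kind of programmatic analysis involved, the following is a minimal, hedged sketch of log-ratio change detection between two co-registered RTC backscatter images; the arrays are simulated stand-ins (real scenes would be read from ASF on-demand RTC GeoTIFFs, e.g. with rasterio), and the threshold is an assumption rather than a recommended value.

```python
import numpy as np

# Placeholder arrays standing in for two co-registered RTC backscatter images
# (linear power) acquired before and after an event.
rng = np.random.default_rng(42)
pre = rng.gamma(shape=4.0, scale=0.02, size=(512, 512))
post = pre.copy()
post[100:180, 300:420] *= 0.25       # simulated darkening, e.g. a landslide scar

# Log-ratio change detection with a simple threshold
log_ratio = 10.0 * np.log10(post / pre)
threshold_db = -3.0                  # assumed value; typically derived per scene
change_mask = log_ratio < threshold_db
print("changed pixels:", int(change_mask.sum()))
```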
This course assumes some basic familiarity with SAR fundamentals including data acquisition and image formation and the basics of interpreting SAR imagery.
For new users, the “Getting Started with SAR” half-day tutorial will provide the necessary foundation for this course.
Attendees will need to bring a laptop able to connect to the internet in order to participate in the hands-on tutorials.
Participants will be provided with:
Presented by Dinh HO TONG MINH, INRAE
Location: Room 101
About every twelve days, most areas on Earth can be imaged by the European Space Agency’s SAR Copernicus Sentinel-1 program. In the coming years, it may reveal tiny changes on every patch of Earth daily. Unlike optical technology, which produces the best images on sunny days, Sentinel-1 takes its snapshots actively by using radar, penetrating clouds, and working at night. Comparing SAR images acquired from the same position at different times can reveal surface movements with millimeter accuracy. The technique is known as SAR Interferometry, or InSAR. Recently, Persistent Scatterer and Distributed Scatterer (PSDS) InSAR and Compressed SAR (ComSAR) algorithms have been implemented in the open-source TomoSAR package (https://github.com/DinhHoTongMinh/TomoSAR). Even though the topic can be challenging, this tutorial makes it much easier to understand. In detail, this tutorial will explain how to use the PSDSInSAR and ComSAR techniques on real-world Sentinel-1 images with user-oriented (no coding skills required!) open-source software (e.g., ISCE, SNAP, TomoSAR, and StaMPS). After a quick summary of the theory, the tutorial presents how to apply Sentinel-1 SAR data and processing technology to identify and monitor ground deformation. After one half-day of training, participants will understand the background of radar interferometry and be able to produce a time series of ground motion from a stack of SAR images.
After one half-day of training, participants will be able to access SAR data, understand the theory of InSAR processing, form interferograms and interpret the ground motions they reveal, and understand the concept of extracting a ground motion time series from a stack of SAR images.
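Although the tutorial itself requires no coding, the core step of forming an interferogram can be summarized in a few lines of NumPy; the sketch below uses simulated complex acquisitions and the nominal C-band wavelength, and is only a conceptual illustration of the phase-to-displacement relation, not part of the course workflow.

```python
import numpy as np

# Placeholder arrays standing in for two co-registered complex (SLC) Sentinel-1
# acquisitions of the same area; real stacks come from ISCE/SNAP processing.
rng = np.random.default_rng(0)
slc1 = rng.normal(size=(256, 256)) + 1j * rng.normal(size=(256, 256))
deform_phase = np.linspace(0, 4 * np.pi, 256)[None, :] * np.ones((256, 1))
slc2 = slc1 * np.exp(-1j * deform_phase)      # second pass with a phase ramp

# Interferogram: complex conjugate product; its phase carries the path difference
interferogram = slc1 * np.conj(slc2)
phase = np.angle(interferogram)               # wrapped phase, in practice unwrapped

# Line-of-sight displacement from (unwrapped) phase for C-band Sentinel-1
wavelength = 0.0556                           # metres
los_displacement = phase * wavelength / (4 * np.pi)
print(los_displacement.min(), los_displacement.max())
```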
The software involved in this tutorial is open-source. For the InSAR processing, we will use ISCE/SNAP to process Sentinel-1 SAR images. We will then work with TomoSAR/StaMPS for time series processing. Google Earth will be used to visualize geospatial data.
Presented by Gladimir V.G. Baranoski, University of Waterloo (Canada), School of Computer Science, Natural Phenomena Simulation Group
Location: Room 105
Predictive computer models, in conjunction with in situ experiments, are regularly being used by remote sensing researchers to simulate and understand the hyperspectral responses of natural materials (e.g., plants, soils, snow and human tissues), notably with respect to varying environmental stimuli (e.g., changes in light exposure and water content). The main purpose of this tutorial is to discuss theoretical and practical issues involved in the development of predictive models of light interactions with these materials, and point out key aspects that need to be addressed to enhance their efficacy. Furthermore, since similar models are used in other scientific domains, such as biophotonics, tissue optics, imaging science and computer graphics, just to name a few, this tutorial also aims to foster the cross-fertilization with related efforts in these fields by identifying common needs and complementary resources. The presentation of this tutorial will be organized into five main sections, which are described as follows.
Section 1. This section provides the required background and terminology to be employed throughout the tutorial. It starts with an overview of the main processes involved in the interactions of light with matter. A concise review of relevant optics formulations and radiometry quantities is also provided. We also examine the key concepts of fidelity and predictability, and highlight the requirements and the benefits resulting from their incorporation in applied life sciences investigations.
Section 2. It has been long recognized that a carefully designed model is of little use without reliable data. More specifically, the effective use of a model requires material characterization data (e.g., thickness and water content) to be used as input, supporting data (e.g., absorption spectra of material constituents) to be used during the light transport simulations, and measured radiometric data (e.g., hyperspectral reflectance, transmittance and BSSDF (bidirectional surface scattering distribution function)) to be used in the evaluation of modeled results. Besides their relative scarcity, most of the measured radiometric datasets available in the literature provide only a scant description of the material samples employed during the measurements, which makes the use of these datasets as references in comparisons with modeled data problematic. When it comes to a material’s constituents in their pure form, such as pigments, data scarcity is aggravated by other practical issues. For example, oftentimes their absorption spectra are estimated through inversion procedures, which may be biased by the inaccuracies of the inverted model, or do not take into account in vivo and in vitro discrepancies. In this section, we address these issues and highlight recent efforts to mitigate them.
Section 3. For the sake of completeness and correctness, one would like to take into account all of the structural and optical characteristics of a target material during the model design stage. However, even if one were able to fully represent a material at a molecular level, as outlined above, data may not be available to support such a detailed representation. Hence, researchers need to find an appropriate level of abstraction for the material at hand in order to balance data availability, correctness issues and application requirements. Moreover, no particular modeling design approach is superior in all cases, and regardless of the selected level of abstraction, simplifying assumptions and generalizations are usually employed in current models due to practical constraints and the inherent complexity of natural materials. In this section, we address these issues and their impact on the efficacy of existing simulation algorithms.
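As a concrete, deliberately simplified example of such an abstraction, the following Python sketch estimates the diffuse reflectance and transmittance of a homogeneous slab with a one-dimensional Monte Carlo photon random walk; the absorption and scattering coefficients are placeholder values, and the isotropic 1-D scattering model is a strong simplification chosen purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(7)

# Placeholder optical properties (1/mm) and slab thickness (mm); not measured data.
mu_a, mu_s, thickness = 0.1, 10.0, 1.0
mu_t = mu_a + mu_s
n_photons = 20000

reflected = 0
transmitted = 0
for _ in range(n_photons):
    z, uz = 0.0, 1.0                              # depth and direction cosine
    while True:
        step = -np.log(rng.random()) / mu_t       # sample free path length
        z += uz * step
        if z < 0.0:                               # photon escaped the top surface
            reflected += 1
            break
        if z > thickness:                         # photon escaped the bottom surface
            transmitted += 1
            break
        if rng.random() < mu_a / mu_t:            # absorption event
            break
        uz = 2.0 * rng.random() - 1.0             # isotropic scattering (1-D cosine)

print("reflectance ~", reflected / n_photons,
      "transmittance ~", transmitted / n_photons)
```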
Section 4. In order to claim that a model is predictive, one has to provide evidence of its fidelity, i.e., the degree to which it can reproduce the state and behaviour of a real world material in a measurable manner. This makes the evaluation stage essential to determine the predictive capabilities of a given model. In this section, we discuss different evaluation approaches, with a particular emphasis on quantitative and qualitative comparisons of model predictions with actual measured data and/or experimental observations. Although this approach is bound by data availability, it mitigates the presence of biases in the evaluation process and facilitates the identification of model parameters and algorithms that are amenable to modification and correction. In this section, we also discuss the recurrent trade-off involving the pursuit of fidelity and its impact on the performance of simulation algorithms, along with strategies employed to maximize the fidelity/cost ratio of computer intensive models.
Section 5. The development of predictive light interaction models offers several opportunities for synergistic collaborations between remote sensing and other scientific domains. For instance, predictive models can provide a robust computational platform for the “in silico” investigation of phenomena that cannot be studied through traditional “wet” experimental procedures. Eventually, these investigations can also lead to model enhancements. In this final section, we employ case studies to examine this iterative process, which can itself contribute to accelerating the hypothesis generation and validation cycles of research in different fields. We also stress the importance of reproducibility, the cornerstone of scientific advances, and address technical and political barriers that one may need to overcome in order to establish fruitful interdisciplinary collaborations.
This tutorial builds on the experience gained during the development of first-principles light interaction models for different organic and inorganic materials. The lessons learned through this experience will be transparently shared with the attendees. The attendees will be introduced to essential biophysical concepts and simulation approaches relevant for the development of such models, as well as to the key role played by fidelity, predictability and reproducibility guidelines in this context. Moreover, in order to develop models following these guidelines, a scientifically sound framework should be employed. This brings us to the main learning objective of this tutorial, namely to provide attendees with a “behind the scenes” view of the different stages of this framework: data collection, modeling and evaluation. More specifically, theoretical and practical constraints that need to be addressed in each of these stages will be broadly discussed. These discussions will be illustrated by examples associated with openly accessible light interaction models. Besides providing attendees with a foundation for the enhancement of their own predictive light interaction models and the development of new ones, this tutorial also aims to bring to their attention the wide range of scientific contributions and technological advances that can be elicited by the use of these models.
The intended audience includes graduate students, practitioners and researchers, from academia, scientific organizations and industry. Participants will be exposed to practical issues which are usually not readily available in the related literature. The proposed tutorial assumes a familiarity with basic optics concepts and radiometric terms. Experience with Monte Carlo methods would be helpful, but not required.