DTM Generation
Machine Learning Generated Terrains from Remote Sensing Data on Mars
*For best results, use a map that is either grayscale or has “natural” coloring; avoid “colorized” products (e.g., MOLA Colorized Elevation).
Source Datasets:
- CTX Global Mosaic v1 @ 128ppd (MSFF Generated)
- HRSC/MOLA Blended DEM Global v2 (296ppd)
Methodology:
- Visual imagery of a lower resolution than the elevation data was used, since training on such pairs yields a model that produces the highest-quality elevation output.
Training Dataset Generation:
- Extracted 1024x1024 pixel tiles from each source dataset covering the same area (a screening and normalization sketch follows this list)
  - 1,800 image pairs were generated from randomly chosen x/y offset points
    - 205 image pairs were excluded because either a) the CTX image contained more than 0.5% (by pixel count) contiguous pure black (missing sections in the mosaic), or b) the CTX image was determined to be more than 35% “similar” to another image in the dataset[1]
    - The model was trained on the remaining 1,595 image pairs
  - HRSC/MOLA elevation data was re-scaled into the range 0 to 39,586 so that the numeric values were relative to the image and comparable across the whole dataset (39,586 is the absolute range of the data in the HRSC/MOLA DEM)
    - There is no visual indication of the actual elevation of features in a given scene; without this normalization, the model-generated numeric values would be arbitrary and bear no relation to “true” elevation.
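The exclusion and normalization rules above can be made concrete with a short sketch. This is illustrative only: the function and constant names are hypothetical, the DEM minimum shown is a placeholder, and the 35% similarity screen (described in [1]) is omitted.

```python
import numpy as np
from scipy import ndimage

BLACK_LIMIT = 0.005   # exclude tiles with >0.5% contiguous pure black, per the text
DEM_RANGE = 39_586.0  # absolute range of the HRSC/MOLA DEM, per the text
DEM_MIN = 0.0         # placeholder; the DEM's true global minimum would go here

def has_too_much_black(ctx_tile: np.ndarray) -> bool:
    """True if the largest contiguous pure-black region exceeds 0.5% of the tile."""
    labels, count = ndimage.label(ctx_tile == 0)  # connected components of black pixels
    if count == 0:
        return False
    region_sizes = np.bincount(labels.ravel())[1:]  # skip background label 0
    return region_sizes.max() / ctx_tile.size > BLACK_LIMIT

def normalize_dem(dem_tile: np.ndarray) -> np.ndarray:
    """Shift elevations into [0, 39586] so values are comparable dataset-wide."""
    return dem_tile.astype(np.float64) - DEM_MIN
```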
ML Model:
- Trained for 425 epochs
- Generated using “Pix2PixCC”[2], “an improved model for image-to-image translation and scientific data analysis”
- “It uses … correlation coefficient (CC) values between the real and generated data. The model consists of three major components: Generator, Discriminator, and Inspector. The Generator and Discriminator are networks which get an update at every step with loss functions, and the Inspector is a module that guides the Generator to be well trained computing the CC values. The Generator tries to generate realistic output from input, and the Discriminator tries to distinguish the more realistic pair between a real pair and a generated pair. The real pair consists of real input and target data. The generated pair consists of real input data and output data from the Generator. While the model is training, both networks compete with each other and get an update at every step with loss functions. Loss functions are objectives that score the quality of results by the model, and the networks automatically learn that they are appropriate for satisfying a goal, i.e., the generation of realistic data. They are iterated until the assigned iteration, which is a sufficient number assuring the convergence of the model.”[2]
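The quoted description does not spell out how the CC values enter the loss; for reference only, the correlation coefficient between real and generated tiles in the ordinary Pearson sense can be computed as below. This is a sketch of the statistic itself, not Pix2PixCC’s actual code.

```python
import numpy as np

def correlation_coefficient(real: np.ndarray, generated: np.ndarray) -> float:
    """Pearson correlation coefficient between a real and a generated tile.

    The Inspector guides the Generator toward outputs whose CC with the
    target approaches 1 (perfect linear agreement).
    """
    r = real.ravel().astype(np.float64)
    g = generated.ravel().astype(np.float64)
    r -= r.mean()  # center both signals before correlating
    g -= g.mean()
    return float((r @ g) / (np.linalg.norm(r) * np.linalg.norm(g)))
```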
JMARS Specific Implementation Notes:
- The model framework operates only on square images of the same size and the same number of bands as those used during training.
- Images from JMARS are adapted as follows:
  - Converted to single-band grayscale
  - Placed within a 1024x1024 pixel frame, scaling larger images down
  - Padded with “mirrors” of the input image to minimize generation artifacts at the edges
  - Generated images are cropped and/or scaled to match the input image from JMARS
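A rough sketch of that adaptation, assuming a single-band grayscale array as input (function names are hypothetical; JMARS’s actual implementation may differ):

```python
import numpy as np

MODEL_SIZE = 1024  # the framework only accepts square tiles of the training size

def prepare_input(img: np.ndarray) -> tuple[np.ndarray, tuple[int, int]]:
    """Fit a grayscale image into a 1024x1024 model input.

    Larger images are scaled down; the remaining margin is filled by
    mirroring the image so the generator sees no hard edges.
    """
    h, w = img.shape
    if max(h, w) > MODEL_SIZE:
        scale = MODEL_SIZE / max(h, w)
        # nearest-neighbor downscale via index sampling (dependency-free sketch)
        ys = (np.arange(int(h * scale)) / scale).astype(int)
        xs = (np.arange(int(w * scale)) / scale).astype(int)
        img = img[ys][:, xs]
        h, w = img.shape
    # mirror-pad to the model size; the reflection repeats if the margin is
    # larger than the image itself
    padded = np.pad(img, ((0, MODEL_SIZE - h), (0, MODEL_SIZE - w)), mode="symmetric")
    return padded, (h, w)

def restore_output(model_out: np.ndarray, footprint: tuple[int, int]) -> np.ndarray:
    """Crop the generated tile back to the (possibly downscaled) input footprint.

    If the input was scaled down in prepare_input, a matching upscale back to
    the original JMARS dimensions would follow this crop.
    """
    h, w = footprint
    return model_out[:h, :w]
```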