Navcam data sets
Image, MSL MARS NAVIGATION CAMERA EDR VERSION 1.0
Mars Science Laboratory Mars Navigation Camera Experiment Data Record
Data Set Overview
This data set contains images acquired by the Navcams for the support of rover traverse planning, post-traverse assessment, rover localization, operation of the robotic arm, and the selection of science targets. Navcam images will be commanded via ground commands from Earth. The Navcams will be extensively used to acquire end of drive 360-degree panoramas. These panoramas will serve as the primary set of data from which Mastcam and ChemCam targets will be chosen. The Navcam panoramas will also be extensively used for rover traverse planning and generally cover more of the local Martian surface terrain than the Mastcam or Hazcam cameras. The data set is similar to the data set for Navcams on MER [MAKIETAL2003].
This data set uses the Committee on Data Management and Computation (CODMAC) data level numbering system. The MSL Navcam EDRs are considered Level 2 or Edited Data (equivalent to NASA Level 0). The EDRs are reconstructed from Level 1 or Raw Data. They are assembled into complete images, but are not radiometrically or geometrically corrected.
The rover computer will generate Navcam telemetry data products that have some metadata appended to them by the rover. If requested in the command sequence, the rover computer will apply an ICER (lossy or lossless) or LOCO (lossless) image compression algorithm to the image data.
After receipt on Earth, processing at JPL will begin with the reconstruction of packetized telemetry data (raw telemetry packet SFDUs with CCSDS headers) into depacketized binary data products and associated Earth metadata files. Then the Multi-mission Image Processing Lab at the Jet Propulsion Laboratory will process the data products and metadata files along with SPICE kernels provided by NAIF and generate the EDRs. The EDRs produced will contain raw uncalibrated data and will have attached operations labels and detached PDS-compliant labels. The MSL Camera EDR/RDR Data Products SIS describes the formatting of the Navcam EDRs.
Full Frame EDR
Full Frame EDRs are digitized at 12-bit resolution and stored as 16-bit signed integers. The binary data may be returned as 12-bit or 8-bit scaled data. If 12-to-8 bit scaling is performed, then pixels are stored in 16-bit format and only the lowest 8 bits of the 16-bit integer are used.
Thumbnail EDR
Thumbnail EDRs are stored as 16-bit signed integers or 8-bit unsigned integers. If 12-to-8 bit scaling is performed, then pixels are stored in 16-bit format and only the lowest 8 bits of the 16-bit integer are used. The Thumbnail EDR is a spatially downsampled version of the original acquired source image (all EDRs for a particular image are derived from the same source image). For example, in the case of subframe image products, the thumbnail image represents the full-frame contextual image from which the subframe image data was extracted. Note that the original acquired image is not always downlinked, and in some cases thumbnails are the only returned data for a particular observation. The main purposes of a Thumbnail EDR are to provide image previews and contextual information, both at a very low data volume compared to the original image.
Sub-frame EDR
Sub-frame EDRs are a subset of rows and columns of the 1024 x 1024 full frame image. Sub-frame EDRs are stored as 16-bit signed integers. If 12-to-8 bit scaling is performed, then pixels are stored in 16-bit format and only the lowest 8 bits of the 16-bit integer are used.
Downsampled EDR
A downsampled EDR is a smaller version of the 1024 x 1024 full frame or subframed image, produced using one of the following methods: 1) nearest neighbor pixel averaging, 2) pixel averaging with outlier rejection, or 3) computing the median pixel value. Downsampled EDRs are stored as 16-bit signed integers. If 12-to-8 bit scaling is performed, then pixels are stored in 16-bit format and only the lowest 8 bits of the 16-bit integer are used.
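The three downsampling methods can be sketched as follows. This is an illustrative Python sketch, not the flight or ground software; the function name and interface are assumptions.

```python
import numpy as np

def downsample(img, factor, method="mean"):
    """Reduce a 2-D image by an integer factor using one of the three
    downsampling methods described above. Illustrative only."""
    h, w = img.shape
    # Group pixels into factor x factor blocks (truncating any remainder).
    blocks = img[:h - h % factor, :w - w % factor].reshape(
        h // factor, factor, w // factor, factor).swapaxes(1, 2)
    flat = blocks.reshape(blocks.shape[0], blocks.shape[1], -1)
    if method == "mean":        # neighborhood pixel averaging
        return flat.mean(axis=-1)
    if method == "median":      # median pixel value
        return np.median(flat, axis=-1)
    if method == "trimmed":     # averaging with outlier rejection:
        s = np.sort(flat, axis=-1)
        return s[..., 1:-1].mean(axis=-1)   # drop each block's min and max
    raise ValueError(method)

img = np.arange(16, dtype=float).reshape(4, 4)
half = downsample(img, 2, "mean")   # 4x4 image reduced to 2x2
```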
Navcam camera downlink processing software used by the science and engineering team during operations is focused on rapid reduction, calibration, and visualization of images in order to make discoveries, to accurately and expeditiously characterize the geologic environment around the rover, and to provide timely input for operational decisions concerning rover navigation and Robotic Arm target selection.
Navcam images can be viewed with the program NASAView, developed by the PDS and available for a variety of computer platforms from the PDS web site http://pds.jpl.nasa.gov/tools/nasa-view.shtml. There is no charge for NASAView.
The data set will be delivered and made available to the public through the Planetary Data System web sites.
Image, MSL MARS NAVIGATION CAMERA 5 RDR MOSAIC V1.0
Mars Science Laboratory Navigation Camera Mosaic Images, Reduced Data Record, Version 1.0
Data Set Overview
This data set contains images in which multiple frames are mosaicked into a single RDR product. The mosaicking methods, including the associated map projections, are applied by the Multimission Instrument Processing Lab (MIPL) under the Operational Product Generation Subsystem (OPGS). It should be noted that governing methods and software can differ between OPGS and other operations subsystems or science instrument teams. For additional information about mosaic processing, including the purpose and content of ancillary files, consult the MSL_CAMERA_SIS.PDF in the DOCUMENT directory of the archive volume.
Below is a high level description of the types of mosaics produced to support MSL rover operations:
- Cylindrical Projection: images are overlaid onto azimuth and elevation grid lines. In this case each pixel represents a fixed angle in azimuth and elevation. Rows are of constant elevation in the selected Mars coordinate frame. Optionally, individual frame boundaries may be superimposed and annotated by number.
- Camera Point Perspective: a perspective projection from a synthetic camera, behaving as if the camera had a much larger field of view. Point-perspective mosaics give the most natural view of small areas, and are suitable for stereo viewing, but cannot be used for large fields of view. For MSL, this type of mosaic is typically computed in Rover Frame, and thus may have a tilted horizon if the rover was not level.
- Cylindrical-Perspective Projection: used for large stereo panoramas, working across a full 360 degrees of azimuth. Stereo is preserved because a baseline separation is maintained between the camera eyes at different azimuths.
- Polar Projection: provides a quasi-overhead view that still allows viewing all the way to the horizon. Nadir is at the convergent center and the horizon is corrected for rover tilt.
- Vertical Projection: provides a view of the surroundings as if you were looking straight down. Vertical mosaics are useful for establishing environmental context or comparing with orbital imagery, but suffer from severe distortion wherever the scene departs from the surface model. In particular, rocks appear elongated, and terrain is not taken into account.
- Orthographic Projection: this type of mosaic is a generalization of the vertical projection. It differs in that an arbitrary axis of projection (as well as X- and Y-axes in the plane of projection) can be specified.
- Orthorectified Projection: used to show a 'true' view of the scene from a different point of view, without distortion due to parallax. The point of view is usually overhead, resulting in an image suitable for comparison with satellite imagery. The removal of parallax leads to gaps in the mosaic, which do not occur in other projections.
- Non-image Mosaics: Normally mosaics are created using imagery, where each pixel is either a raw or radiometrically corrected intensity value. However, mosaics can be created using other types of pixels, e.g. XYZ, surface normal (UVW), range or slope. Any projection may be used, and all output values must be defined in the same coordinate system.
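The fixed angle-per-pixel mapping of the cylindrical projection described above can be sketched as follows. The grid parameters (degrees per pixel, starting azimuth, top elevation) and names are illustrative assumptions; actual values are recorded in each mosaic's label.

```python
# Assumed grid: 360 deg of azimuth, elevations from +30 deg downward,
# at 0.1 deg/pixel. These names and values are hypothetical.
DEG_PER_PIXEL = 0.1
AZ_START = 0.0      # azimuth of sample 0, in degrees
EL_TOP = 30.0       # elevation of line 0, in degrees

def pixel_to_az_el(line, sample):
    """Map mosaic (line, sample) to (azimuth, elevation) in degrees."""
    az = (AZ_START + sample * DEG_PER_PIXEL) % 360.0
    el = EL_TOP - line * DEG_PER_PIXEL   # elevation decreases down the image
    return az, el

def az_el_to_pixel(az, el):
    """Inverse mapping: angles to fractional (line, sample)."""
    sample = ((az - AZ_START) % 360.0) / DEG_PER_PIXEL
    line = (EL_TOP - el) / DEG_PER_PIXEL
    return line, sample
```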
The mosaics are typically 1- or 3-band 16-bit signed integer or floating point files; the actual format matches the format of the input data. Each mosaic has a dual ODL3/VICAR label attached to the binary file, accompanied by a detached PDS3 label.
The MIPL Mars Program Suite was used to generate these mosaics.
Image, MSL MARS NAVIGATION CAMERA 5 RDR V1.0
Mars Science Laboratory Mars Navigation Camera Reduced Data Records
Data Set Overview
This data set contains derived data products for the MSL Navigation cameras (Navcam) to support rover traverse planning, post-traverse assessment, rover localization, operation of the robotic arm, and the selection of science targets. Most Navcam images were commanded via ground commands from Earth, although a subset of Navcam images were commanded autonomously during rover traverses by the Rover onboard autonomous navigation system. The Navcams were extensively used to acquire end-of-drive 360-degree panoramas. This data set is similar to the reduced data sets for the Navigation cameras on MER [MAKIETAL2003].
Detailed descriptions of all Reduced Data Record (RDR) products are available in the MSL_CAMERA_SIS.PDF, located in the DOCUMENT directory of this volume.
This data set uses the Committee on Data Management and Computation (CODMAC) data level numbering system. The MSL Navcam RDRs are considered CODMAC Level 3 (calibrated data equivalent to NASA Level 1A), CODMAC Level 4 (resampled data equivalent to NASA Level 1B), or CODMAC Level 5 (derived data equivalent to NASA level 3). The RDRs are derived from the Navcam EDR data set and include radiometrically corrected and/or camera model corrected and/or geometrically altered versions of the raw camera data, in single frame form. All of the RDR data products in this dataset have detached PDS labels.
There are dozens of types of RDR products, described in detail in Section 5.1 of the MSL_CAMERA_SIS.PDF. Below are descriptions of the most commonly used RDRs.
Geometrically Corrected (Linearized) Images
EDRs and single-frame RDRs are described by a camera model. This model, represented by a set of vectors and numbers, permits a point in space to be traced into the image plane, and vice versa.
EDR camera models are derived by acquiring images of a calibration target with known geometry at a fixed azimuth/elevation. The vectors representing the model are derived from analysis of this imagery. These vectors are then translated and rotated based on the actual pointing of the camera to represent the conditions of each specific image. The result is the camera model for the EDR.
The Navcams use a CAHVOR model, while the Hazcams use a more general CAHVORE model. Neither model is linear, and both involve complex calculations to transform line/sample points in the image plane to XYZ positions in the scene. To simplify this, the images are warped, or reprojected, such that they can be described by a linear CAHV model. This linearization process has several benefits:
- It removes geometric distortions inherent in the camera instruments, with the result that straight lines in the scene are straight in the image.
- It aligns the images for stereo viewing. Matching points are on the same image line in both left and right images, and both left and right models point in the same direction.
- It facilitates correlation, allowing the use of 1-D correlators.
- It simplifies the math involved in using the camera model.
The transformation introduces some artifacts, such as scale changes and/or omitted data (see the references). The linearized CAHV camera model is derived from the EDR's camera model by considering both the left and right eye models and constructing a pair of matched linear CAHV models that conform to the above criteria.
The image is then projected, or warped, from the CAHVOR/CAHVORE model to the CAHV model. This involves projecting each pixel through the EDR camera model into space, intersecting it with a surface (which matters only for Hazcams and is a sphere centered on the camera), and projecting the pixel back through the CAHV model into the output image.
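For the linear CAHV model, the point-to-image mapping reduces to two dot-product ratios, which is what makes linearized products convenient to work with. A minimal sketch follows; the model vectors are toy values, not a real Navcam calibration.

```python
import numpy as np

def cahv_project(P, C, A, H, V):
    """Project a 3-D point P to (line, sample) with a linear CAHV model:
    C is the camera center, A the unit pointing axis, and H and V fold
    together focal length, optical center, and image-plane orientation."""
    d = np.asarray(P, float) - C
    denom = d @ A                              # distance along the pointing axis
    return (d @ V) / denom, (d @ H) / denom    # (line, sample)

# Toy model: illustrative numbers only, not a real calibration.
C = np.array([0.0, 0.0, 0.0])
A = np.array([1.0, 0.0, 0.0])             # camera looks down +X
H = np.array([512.0, 1000.0, 0.0])        # optical center 512, ~1000 px focal
V = np.array([512.0, 0.0, 1000.0])
line, sample = cahv_project([10.0, 0.0, 0.0], C, A, H, V)   # on-axis point
```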
See GEOMETRIC_CM.TXT for additional detail.
Inverse Lookup Table Scaled Products
If the Navcam EDR is in 8-bit format as a result of onboard 12 to 8-bit scaling using a Lookup Table (LUT), then an Inverse LUT is applied to rescale the 8 lowest bits into the 12 lowest bits in the 16-bit signed integer.
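As an illustration of inverse LUT rescaling, the sketch below assumes a square-root companding LUT; the actual flight LUTs are mission-defined and documented in the MSL_CAMERA_SIS.PDF.

```python
import numpy as np

# Hypothetical square-root companding: map 12-bit DN (0..4095) to 8 bits.
fwd = np.minimum(np.round(np.sqrt(np.arange(4096) * 16.0)).astype(np.int32), 255)

# Inverse LUT: for each 8-bit code, a representative 12-bit value (the
# midpoint of the bin); codes the forward LUT never produces stay at 0.
inv = np.zeros(256, dtype=np.int16)
for code in range(256):
    hits = np.nonzero(fwd == code)[0]
    if hits.size:
        inv[code] = np.int16(int(round(hits.mean())))

def rescale(img8):
    """Expand 8-bit EDR pixels back into the lowest 12 bits of 16-bit ints."""
    return inv[np.asarray(img8, dtype=np.int64)]
```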
Radiometrically Corrected Products
There are three types of radiometrically corrected products, and multiple methods of performing radiometric correction. For more information, see the MSL_CAMERA_SIS.PDF in the DOCUMENT directory of this volume.
- RA products have been corrected to absolute radiance units of W/m^2/nm/steradian.
- RI products have been corrected for instrument effects only, and are in units of DN.
- IO products are radiance factor (I/F) and are dimensionless.
Disparity Files
A Disparity file contains 2 bands of 32-bit floating point numbers in Band Sequential order (line, sample). Alternatively, line and sample may be stored in separate single-band files. The parallax, or difference measured in pixels, between an object's location in two individual images (typically the left and right images of a stereo pair) is called the disparity. Disparity files contain these disparity values in both the line and sample dimensions for each pixel in the reference image. This reference image is traditionally the left image of a stereo pair, but could be the right image for special products. The geometry of the Disparity image is the same as the geometry of the reference image. This means that for any pixel in the reference image, the disparity of the viewed point can be obtained from the same pixel location in the Disparity image.
The values in a Disparity image are the 1-based coordinates of the corresponding point in the non-reference image. Thus, the coordinates in the reference image are the same as the coordinates in the Disparity image, and the matching coordinates in the stereo partner image are the values in the Disparity image. Disparity values of 0.0 indicate that no valid disparity exists, for example due to lack of overlap or correlation failure. This value is reflected in the MISSING_CONSTANT keyword.
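The lookup convention above can be sketched as follows; the helper name and test values are illustrative, not part of any archive software.

```python
import numpy as np

# Two-band disparity image, band-sequential: band 0 = line coordinates,
# band 1 = sample coordinates in the partner image, 1-based; 0.0 = invalid.
disp = np.zeros((2, 4, 4), dtype=np.float32)
disp[0, 1, 2] = 5.0    # reference pixel (line 1, sample 2) matches the
disp[1, 1, 2] = 7.5    # partner-image point at (line 5.0, sample 7.5)

def match_point(disp, line, sample):
    """Return the 1-based (line, sample) of the matching point in the
    stereo partner image, or None where no valid disparity exists."""
    l, s = disp[0, line, sample], disp[1, line, sample]
    if l == 0.0 and s == 0.0:   # the MISSING_CONSTANT value
        return None
    return float(l), float(s)
```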
XYZ Files
An XYZ file contains 3 bands of 32-bit floating point numbers in Band Sequential order. Alternatively, X, Y and Z may be stored in separate single-band files as an X Component RDR, Y Component RDR and Z Component RDR, respectively. The single component RDRs are implicitly the same as the XYZ file, which is described below. XYZ locations in all coordinate frames for MSL are expressed in meters unless otherwise noted.
The pixels in an XYZ image are coordinates in 3-D space of the corresponding pixel in the reference image. This reference image is traditionally the left image of a stereo pair, but could be the right image for special products. The geometry of the XYZ image is the same as the geometry of the reference image. This means that for any pixel in the reference image the 3-D position of the viewed point can be obtained from the same pixel location in the XYZ image. The 3-D points can be referenced to any of the MSL coordinate systems (specified by DERIVED_IMAGE_PARAMS Group in the PDS label).
Most XYZ images will contain holes, or pixels for which no XYZ value exists. These are caused by many factors such as differences in overlap and correlation failures. Holes are indicated by X, Y, and Z all having the same specific value. This value is defined by the MISSING_CONSTANT keyword in the IMAGE object. For the XYZ RDR, this value is (0.0,0.0,0.0), meaning that all three bands must be zero (if only one or two bands are zero, that does not indicate missing data).
Range (Distance) Files
A Range (distance) file contains 1 band of 32-bit floating point numbers. The pixels in a Range image represent Cartesian distances from a reference point (defined by the RANGE_ORIGIN_VECTOR keyword in the PDS label) to the XYZ position of each pixel. This reference point is normally the camera position as defined by the C point of the camera model. A Range image is derived from an XYZ image and shares the same pixel geometry and XYZ coordinate system. As with XYZ images, range images can contain holes, defined by MISSING_CONSTANT. For MSL, this value is 0.0.
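Deriving a Range image from an XYZ image, with hole propagation, can be sketched as below; the function name and interface are illustrative.

```python
import numpy as np

MISSING = 0.0   # MISSING_CONSTANT for the Range RDR

def range_from_xyz(xyz, origin):
    """Derive a range image from a 3-band XYZ image: Cartesian distance
    from the reference point (normally the camera model's C point) to
    each pixel's XYZ position, propagating XYZ holes as 0.0."""
    d = np.linalg.norm(xyz - np.asarray(origin, float).reshape(3, 1, 1), axis=0)
    holes = np.all(xyz == 0.0, axis=0)   # XYZ hole: all three bands zero
    d[holes] = MISSING
    return d

xyz = np.zeros((3, 2, 2))
xyz[:, 0, 0] = [3.0, 4.0, 0.0]           # one valid pixel; the rest are holes
rng = range_from_xyz(xyz, origin=[0.0, 0.0, 0.0])
```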
Surface Normal Files
A Surface Normal (UVW) file contains 3 bands of 32-bit floating point numbers in the Band Sequential order. Alternatively, U, V and W may be stored in separate single-band files as a U Component RDR, V Component RDR and W Component RDR, respectively. The single component RDRs are implicitly the same as the UVW file.
The pixels in a UVW image correspond to the pixels in an XYZ file, with the same image geometry. However, the pixels are interpreted as a unit vector representing the normal to the surface at the point represented by the pixel. U contains the X component of the vector, V the Y component, and W the Z component. The vector is defined to point out of the surface (e.g. upwards for a flat ground). The unit vector can be referenced to any of the MSL coordinate systems (specified by the DERIVED_IMAGE_PARAMS Group in the PDS label).
Most UVW images will contain holes, or pixels for which no UVW value exists. These are caused by many factors such as differences in overlap, correlation failures, and insufficient neighbors to compute a surface normal. Holes are indicated by U, V, and W all having the same specific value. Unlike XYZ, (0,0,0) can never be a valid value in a UVW file, since valid pixels are unit vectors. The MISSING_CONSTANT is therefore unambiguous for UVW, whereas for XYZ the hole value (0.0,0.0,0.0) could in principle also be a real point.
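A surface normal can be estimated from an XYZ image by crossing two in-surface difference vectors. The sketch below is a simplified illustration (the archived products use larger neighborhoods and outlier handling) and assumes a +Z-down frame, such as the MSL rover or site frames, so an "upward" normal has a negative Z component.

```python
import numpy as np

def normal_from_neighbors(p, p_right, p_down):
    """Estimate a unit surface normal at a pixel from its XYZ value and
    two neighbors, via the cross product of in-surface difference
    vectors. Assumes +Z down, so 'out of the ground' means -Z."""
    n = np.cross(np.subtract(p_down, p), np.subtract(p_right, p))
    norm = np.linalg.norm(n)
    if norm == 0.0:
        return None          # degenerate neighborhood: no normal defined
    n = n / norm
    if n[2] > 0:             # flip so the normal points out of the surface
        n = -n
    return n

n = normal_from_neighbors([0.0, 0.0, 0.0], [0.0, 1.0, 0.0], [1.0, 0.0, 0.0])
```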
Surface Roughness Maps
The roughness maps contain surface roughness estimates at each pixel in the image, along with a 'goodness' flag indicating whether the roughness meets certain criteria.
For each pixel, the surface normal product defines a reference plane. XYZ pixels in the area of interest are gathered, and each point's distance to the plane is computed. The minimum and maximum distances from the plane are found, with outliers excluded. Roughness is defined as the difference between these minimum and maximum distances.
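The roughness computation described above can be sketched as follows; the quantile-based outlier trimming is an illustrative assumption, not the documented algorithm.

```python
import numpy as np

def roughness(points, plane_origin, plane_normal, trim=0.05):
    """Peak-to-peak spread of XYZ points about a local reference plane,
    with the extreme `trim` fraction on each side excluded as outliers.
    The trimming rule here is an assumption for illustration."""
    d = (np.asarray(points, float) - plane_origin) @ plane_normal
    lo, hi = np.quantile(d, [trim, 1.0 - trim])   # signed plane distances
    return hi - lo

# Points scattered 1 m above/below a horizontal plane through the origin:
pts = [[0.0, 0.0, z] for z in np.linspace(-1.0, 1.0, 101)]
r = roughness(pts, np.zeros(3), np.array([0.0, 0.0, 1.0]))
```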
Slope Maps
The slope RDR products represent aspects of the slope of the terrain as determined by stereo imaging. There are several slope types, described in further detail in the MSL_CAMERA_SIS.PDF.
Arm Reachability Maps
The Arm Reachability Maps contain information about whether or not the instruments on the arm can reach (either contact or image) the object or location represented by each pixel in the scene. They are derived from the XYZ and Surface Normal products.
Stereo Anaglyphs
A stereo anaglyph is a method of displaying stereo imagery quickly and conveniently using conventional display technology (no special hardware) and red/blue glasses. This is done by displaying the left eye of the stereo pair in the red channel, and displaying the right eye in the green and blue channels. An anaglyph data product simply captures this in a single 3-band color image, which can be displayed using any standard image display program with no knowledge that it is a stereo image. The red (first) band contains the left eye image, while the green and blue (second and third) bands each contain the right eye image (so the right image is duplicated in the file).
Anaglyphs are created manually from CAHV-linearized full frame stereo pair EDRs or mosaics. Often the images are stretched prior to creating the anaglyph. After stretching, the images are converted to a VICAR cube, which creates a single multi-band image. The final step involves adding the PDS label.
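The band layout of an anaglyph product (left eye in red, right eye duplicated in green and blue) can be sketched as:

```python
import numpy as np

def make_anaglyph(left, right):
    """Stack a stereo pair into a band-sequential red/blue anaglyph:
    left eye in the red band, right eye in both green and blue bands."""
    return np.stack([left, right, right])

left = np.full((4, 4), 10, dtype=np.uint8)
right = np.full((4, 4), 20, dtype=np.uint8)
ana = make_anaglyph(left, right)    # 3-band image, shape (3, 4, 4)
```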
Navigation camera downlink processing software used by the science and engineering team during operations is focused on rapid reduction, calibration, and visualization of images in order to make discoveries, to accurately and expeditiously characterize the geologic environment around the rover, and to provide timely input for operational decisions concerning rover navigation and Robotic Arm target selection.
Navigation images can be viewed with the program NASAView, developed by the PDS and available for a variety of computer platforms from the PDS web site http://pds.jpl.nasa.gov/tools/nasa-view.shtml. There is no charge for NASAView.
The data set will be delivered and made available to the public through the Planetary Data System web sites.