Microscopic Imager data sets
mission specific
MERx-M-MI-2-EDR-SCI-V1.0
MERx MARS MICROSCOPIC IMAGER SCIENCE EDR VERSION 1.0
The MER MI EDR data set contains raw uncalibrated image data from the MI onboard the rover, used to understand fine-scale, small surface features.
Data Set Overview
This data set contains data to help understand the fine-scale morphology, reflectance, and texture of rock surfaces and soil as well as the accumulation of dust on the capture and filter magnets. Several types of imaging data products can be created onboard the rover. Image data volume can be reduced by summing rows or columns, subframing (or windowing), or downsampling. Because the goal of MI observations is to resolve small features on Mars, row or column summing is not likely to be performed on MI images. However, subframing (selecting a part of the image for downlink) and/or downsampling (calculating a mean or median of pixels in specified blocks) can be used to reduce MI data volume for downlink. Subframe products are defined by starting row and column and by number of rows and columns. Downsampling can be used to create a thumbnail version of an image for rapid downlink and assessment on the ground. If the thumbnail indicates that the image is of scientific interest, the full-resolution image can be later returned to Earth. A histogram of the image data can also be generated and returned to Earth as a separate product. Reference pixels are returned as a separate product if requested.
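The volume-reduction operations described above (subframing and block downsampling) can be illustrated with a short sketch. This is a ground-side reconstruction in Python, not the flight software; the block size and subframe coordinates are arbitrary examples.

```python
import numpy as np

def subframe(image, start_row, start_col, n_rows, n_cols):
    """Select a subframe, defined by starting row/column and by
    number of rows and columns, as in the text above."""
    return image[start_row:start_row + n_rows, start_col:start_col + n_cols]

def downsample(image, block, mode="mean"):
    """Reduce an image by taking the mean or median of each
    block x block tile (thumbnail-style downsampling)."""
    rows, cols = image.shape
    t = image[:rows - rows % block, :cols - cols % block]
    t = t.reshape(t.shape[0] // block, block, t.shape[1] // block, block)
    if mode == "mean":
        return t.mean(axis=(1, 3))
    return np.median(t, axis=(1, 3))

# A synthetic 12-bit "full frame" (values 0..4095), 1024 x 1024:
full = (np.arange(1024 * 1024) % 4096).astype(np.uint16).reshape(1024, 1024)
thumb = downsample(full, 16)              # 64 x 64 thumbnail
sub = subframe(full, 100, 200, 256, 256)  # 256 x 256 subframe
```

Either product is far smaller than the full frame, which is why a thumbnail can be downlinked first and the full-resolution image requested later only if the thumbnail proves interesting.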
Processing
This data set uses the Committee on Data Management and Computation (CODMAC) data level numbering system. The MER Camera Payload EDRs are considered Level 2 or Edited Data (equivalent to NASA Level 0). The EDRs are reconstructed from Level 1 or Raw Data, which are the telemetry packets within the project specific Standard Formatted Data Unit (SFDU) record. They are assembled into complete images, but are not radiometrically or geometrically corrected.
Microscopic Imager EDR data products were generated by the Multi-mission Image Processing Lab at the Jet Propulsion Laboratory using the telemetry processing software mertelemproc. The EDRs produced are raw uncalibrated data reconstructed from telemetry packet SFDUs and formatted according to the Camera EDR/RDR Software Interface Specification. Meta-data acquired from the telemetry data headers and a meta-data database were used to populate the PDS label. There will not be multiple versions of a MER Camera Payload EDR. Missing packets will be identified and reported for retransmission to the ground as partial datasets. Prior to retransmission, the missing EDR data will be filled with zeros. The EDR data will be reprocessed only after all partial datasets are retransmitted and received on the ground. In these cases, the original EDR version will be overwritten. The EDR data product will be placed into FEI for distribution.
Data
As the fundamental science image data archive product, the Science EDR will be generated by the Athena Pancam Science and Microscopic Imager Science Teams under SOAS at JPL to recover the original 12-bit raw measurement obtained by the respective science camera to within the uncertainty of the noise in the original measured value. The size of a Science EDR data product is approximately 2 MB. The total estimated volume of Science EDRs over the course of the nominal 90-day MER mission is less than that of the Operations EDRs, and depends on the definition of the Science EDR archive set.
The data packaged in the camera data files will be decoded, decompressed camera image data in single frame form as an Experiment Data Record (EDR). The Full Frame form of a standard image data file has the maximum dimensions of 1024 lines by 1024 samples.
- Full Frame EDR
Full Frame EDRs are stored as 16-bit signed integers. If 12-to-8 bit scaling is performed, then pixels are stored in 16-bit format and only the last 8 bits of the 16-bit integer are used.
- Thumbnail EDR
Thumbnail EDRs are stored as 16-bit signed integers or 8-bit unsigned integers. If 12-to-8 bit scaling is performed, then pixels are stored in 16-bit format and only the last 8 bits of the 16-bit integer are used. The Thumbnail EDR is a scaled-down version of the original acquired image (i.e., camera-returned pixel data), and the size of the binary EDR image data is variable. However, the original acquired image is not always downlinked. The main purpose of a Thumbnail EDR is to provide an image summary using a very low data volume compared to the original image.
- Sub-frame EDR
Sub-frame EDRs are a subset of rows and columns of the 1024 x 1024 full frame image. Sub-frame EDRs are stored as 16-bit signed integers. If 12-to-8 bit scaling is performed, then pixels are stored in 16-bit format and only the last 8 bits of the 16-bit integer are used.
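As a minimal illustration of the pixel packing described for these EDR types, the usable values can be extracted from the 16-bit integers as follows. This sketch assumes byte order has already been handled by the file reader and does not reproduce the onboard scaling itself.

```python
import numpy as np

def usable_pixels(raw16, scaled_to_8bit):
    """Return the meaningful pixel values from a 16-bit EDR array.
    When 12-to-8 bit scaling was performed onboard, only the last
    (least significant) 8 bits of each 16-bit integer carry data."""
    if scaled_to_8bit:
        return (raw16 & 0xFF).astype(np.uint8)
    return raw16  # full 12-bit values held in 16-bit integers

# Hypothetical pixel values for illustration:
edr = np.array([0x0012, 0x00FF, 0x0FA3], dtype=np.int16)
low8 = usable_pixels(edr, scaled_to_8bit=True)
```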
Software
MER Camera Payload downlink processing software is focused on rapid reduction, calibration, and visualization of images in order to make discoveries, to accurately and expeditiously characterize the geologic environment around the rover, and to provide timely input for operational decisions concerning rover navigation and Instrument Deployment Device (IDD) target selection. Key software tools have been developed at Cornell University, at JPL by the MIPL, SSV, and APSS groups, at NASA Ames, and at the USGS/Flagstaff. These tools can also be used to process MI images (see below), as well as Navcam and Hazcam images, which have substantial scientific potential in addition to their operational importance.
PDS-labeled images and tables can be viewed with the program NASAView, developed by the PDS and available for a variety of computer platforms from the PDS web site. There is no charge for NASAView.
Media/Format
The data set will initially be delivered and kept online. Upon Mission completion, the Microscopic Imager EDRs will be delivered to PDS on DVD.
MERx-M-MI-3-RDR-SCI-V1.0
MERx MI RADIOMETRICALLY CALIBRATED RDR V1.0
The MER MI RDR data set contains radiometrically decalibrated, camera model corrected, and/or geometrically altered raw camera data from the MI onboard the rovers, used to understand fine-scale, small surface features.
Data Set Overview
This data set contains data to help understand the fine-scale morphology, reflectance, and texture of rock surfaces and soil as well as the accumulation of dust on the capture and filter magnets. Several types of imaging data products can be created onboard the rover. Image data volume can be reduced by summing rows or columns, subframing (or windowing), or downsampling. Because the goal of MI observations is to resolve small features on Mars, row or column summing is not likely to be performed on MI images. However, subframing (selecting a part of the image for downlink) and/or downsampling (calculating a mean or median of pixels in specified blocks) can be used to reduce MI data volume for downlink. Subframe products are defined by starting row and column and by number of rows and columns. Downsampling can be used to create a thumbnail version of an image for rapid downlink and assessment on the ground. If the thumbnail indicates that the image is of scientific interest, the full-resolution image can be later returned to Earth. A histogram of the image data can also be generated and returned to Earth as a separate product. Reference pixels are returned as a separate product if requested.
Note: MI Science RDR products generated during the first 30 sols of the MER mission were incorrectly labeled with the wrong data set ID, MERn-M-MI-2-RDR-SCI-V1.0 instead of MERn-M-MI-3-RDR-SCI-V1.0. The latter is the correct data set ID, as the products have level 3 processing.
Processing
MER Camera Payload RDRs are considered Level 3 (Calibrated Data equivalent to NASA Level 1-A), Level 4 (Resampled Data equivalent to NASA Level 1-B), or Level 5 (Derived Data equivalent to NASA Level 1-C, 2 or 3). The RDRs are to be reconstructed from Level 2 edited data, and are to be assembled into complete images that may include radiometric and/or geometric correction.
MER Camera Payload EDRs and RDRs will be generated by JPL's Multimission Image Processing Laboratory (MIPL) under the OPGS subsystem of the MER GDS. RDRs will also be generated by the Athena Pancam Science and Microscopic Imager Science Teams under the SOAS subsystem of the GDS.
RDR data products will be generated by, but not limited to, MIPL using the Mars Suite of VICAR image processing software at JPL, the Athena Pancam Science Team using IDL software at Cornell University and JPL, and the Microscopic Imager Science Team using ISIS software at USGS (Flagstaff) and JPL. The RDRs produced will be processed data. The input will be one or more Camera EDR or RDR data products and the output will be formatted according to this SIS. Additional meta-data may be added by the software to the PDS label.
There may be multiple versions of a MER Camera RDR.
Data
RDR products generated by MIPL will have a VICAR label wrapped by a PDS label, and their structure can include the optional EOL label after the binary data. RDR products not generated by MIPL may contain only a PDS label. Alternatively, RDR products conforming to a standard other than PDS, such as JPEG-compressed or certain Terrain products, are acceptable without a PDS header during mission operations, but may not be archivable.
The RDR data product is comprised of radiometrically decalibrated and/or camera model corrected and/or geometrically altered versions of the raw camera data, in both single and multi-frame (mosaic) form. Most RDR data products will have PDS labels, or if generated by MIPL (OPGS), dual PDS/VICAR labels. Non-labeled RDRs include JPEG compressed products and the Terrain products. The RDR data products that serve operational needs are explained below.
- Radiometrically Corrected RDR The MIPLRAD method is a radiometric correction performed by MIPL (OPGS) at JPL. It can apply to any of the camera instruments, but only the RAD (and RAL) type is generated. MIPLRAD first backs out any onboard flat field that was performed. It then applies the following corrections: flat field, exposure time, temperature-compensated responsivity. The result is calibrated to physical units for MER of W/m^2/nm/sr. MIPLRAD is a first-order correction only and should be considered approximate.
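The MIPLRAD correction chain described above can be sketched as follows. The calibration inputs (flat fields, responsivity) are hypothetical placeholders standing in for the camera calibration files, and this is, as the text notes, a first-order approximation only.

```python
import numpy as np

def miplrad_like(dn, onboard_flat, flat_field, exposure_s, responsivity):
    """First-order radiometric correction in the spirit of MIPLRAD:
    back out any onboard flat field, then apply the ground flat field,
    exposure time, and (temperature-compensated) responsivity.
    Output units for MER are W/m^2/nm/sr."""
    corrected = dn * onboard_flat       # undo the flat field applied onboard
    corrected = corrected / flat_field  # ground flat-field correction
    corrected = corrected / exposure_s  # normalize to DN per second
    return corrected / responsivity     # DN/s -> radiance

rad = miplrad_like(
    dn=np.full((4, 4), 2048.0),    # placeholder raw frame
    onboard_flat=np.ones((4, 4)),  # hypothetical onboard flat field
    flat_field=np.ones((4, 4)),    # hypothetical ground flat field
    exposure_s=0.5,
    responsivity=1.0e6,            # hypothetical DN/s per radiance unit
)
```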
- XYZ RDR An XYZ file contains 3 bands of 32-bit floating point numbers in the Band Sequential order. Alternatively, X, Y and Z may be stored in separate single-band files.
The pixels in an XYZ image are coordinates in 3-D space of the corresponding pixel in the reference image. This reference image is traditionally the left image of a stereo pair, but could be the right image for special products. The geometry of the XYZ image is the same as the geometry of the reference image. This means that for any pixel in the reference image the 3-D position of the viewed point can be obtained from the same pixel location in the XYZ image. The 3-D points can be referenced to any of the MER coordinate systems (specified by DERIVED_IMAGE_PARAMS Group in the PDS label). Most XYZ images will contain 'holes', or pixels for which no XYZ value exists. These are caused by many factors such as differences in overlap and correlation failures. Holes are indicated by X, Y, and Z all having the same specific value. This value is defined by the MISSING_CONSTANT keyword in the IMAGE object. For the XYZ RDR, this value is (0.0,0.0,0.0).
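A minimal sketch of honoring the MISSING_CONSTANT convention when reading an XYZ RDR, assuming the three bands have already been loaded into a (3, rows, cols) band-sequential array:

```python
import numpy as np

def valid_xyz_mask(xyz, missing=(0.0, 0.0, 0.0)):
    """xyz: (3, rows, cols) band-sequential array of X, Y, Z images.
    Returns True where a real 3-D point exists and False at 'holes',
    where X, Y, and Z all equal the MISSING_CONSTANT value."""
    m = np.asarray(missing, dtype=xyz.dtype).reshape(3, 1, 1)
    return ~np.all(xyz == m, axis=0)

xyz = np.zeros((3, 2, 2), dtype=np.float32)  # start with all holes
xyz[:, 0, 0] = [1.2, -0.4, 0.9]              # one valid 3-D point (made up)
mask = valid_xyz_mask(xyz)
```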
Software
MER Camera Payload downlink processing software is focused on rapid reduction, calibration, and visualization of images in order to make discoveries, to accurately and expeditiously characterize the geologic environment around the rover, and to provide timely input for operational decisions concerning rover navigation and Instrument Deployment Device (IDD) target selection. Key software tools have been developed at Cornell University, at JPL by the MIPL, SSV, and APSS groups, at NASA Ames, and at the USGS/Flagstaff. These tools can also be used to process MI images (see below), as well as Navcam and Hazcam images, which have substantial scientific potential in addition to their operational importance.
PDS-labeled images and tables can be viewed with the program NASAView, developed by the PDS and available for a variety of computer platforms from the PDS web site. There is no charge for NASAView.
Media/Format
The data set will initially be delivered and kept online. Upon Mission completion, the Microscopic Imager RDRs will be delivered to PDS on DVD.
MERx MI MERGED FOCAL SECTIONS AND ANAGLYPH STEREO IMAGES
These data consist of merged focal section images and anaglyph stereo images in TIFF and JPEG formats, and were generated by the MI Team at the U. S. Geological Survey, Flagstaff, Arizona. Not all files were generated for all targets.
Note that shadow boundaries often affect the quality of MI focal merges and anaglyphs, producing artifacts and spurious features along the shadows. The software used to generate these products is described in the publication "Overview of the Microscopic Imager Investigation during Spirit's first 450 sols in Gusev crater" (Herkenhoff et al., 2006, JGR v. 111, doi:10.1029/2005JE002574).
Data Set Overview
The data consist of merged focal section images and anaglyph stereo images in TIFF and JPEG formats, organized into subdirectories by sol number. These data were generated by the MI Team at the U. S. Geological Survey, Flagstaff, Arizona. The remainder of this document describes how these images were processed.
Over most targets observed by the MI, stacks of 3, 5 or 7 images were acquired to assure the focus of all segments of the scene in one or another image. These images were taken at different distances from the target and hence have different scales. When possible, these "focal sections" were merged into a single image that shows all parts of the target in good focus. High-quality focal section merges could be produced only when the entire surface of the MI target was observed in good focus in at least one of the images in the stack. Therefore, focal section merges could not be generated for all MI observations. Images that were completely out of focus have no depth information, and images with moving shadows confused the software to varying degrees. In general, any artifact that resulted in a sharp boundary within a scene that varied from image to image would result in artifacts in the output. In many cases, imaging the MI shadow was unavoidable.
The focal merging concept is straightforward: Out-of-focus images of a scene have less high spatial frequency information than in-focus images of the same scene. The actual amount of high frequency information varies across the scene and among scenes; there is no threshold that distinguishes in-focus from out-of-focus. For each neighborhood (any region several pixels across) within a scene, each of the images was considered. The image that had the largest high frequency component was judged to be the best-focus for that neighborhood. The 3-dimensional position was then defined based on the pixel coordinates for the neighborhood and the known depth to best focus. The 3-dimensional position was refined by a polynomial fit in depth as described below, resulting in depth resolution less than the sampling interval.
MI EDRs were used as the input to the focal section merging process, which made use of IDL software created by Mark Lemmon (Texas A&M). The raw digital numbers (DN) were divided by exposure time, resulting in roughly uniform brightness levels despite possible small changes in exposure time due to auto-exposure. Without any reprojection, the full 1024 x 1024 image was passed through a simple high-pass filter by dividing the image by a smoothed version of itself (11 x 11 pixel boxcar average) and subtracting unity. The high-pass image was the basis for determining depth. Dividing the image by the smoothed image eliminated effects of large-scale illumination variations, but did not eliminate effects of moving shadow edges. When possible, images were obtained entirely in shadow. For many images, this was not possible, and the moving shadow caused local artifacts in the subsequent processing.
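The exposure normalization and high-pass step described above can be sketched as follows. This is pure NumPy, not the original IDL code; the boxcar implementation and its edge handling are assumptions.

```python
import numpy as np

def boxcar(a, size):
    """2-D boxcar (moving average) with edge replication, computed
    via a summed-area table."""
    pad = size // 2
    p = np.pad(a, pad, mode="edge").astype(float)
    s = p.cumsum(0).cumsum(1)
    s = np.pad(s, ((1, 0), (1, 0)))  # zero row/column for window sums
    r, c = a.shape
    win = (s[size:size + r, size:size + c] - s[:r, size:size + c]
           - s[size:size + r, :c] + s[:r, :c])
    return win / (size * size)

def highpass(image_dn, exposure_s, box=11):
    """Divide the DN/s image by its 11 x 11 boxcar average and
    subtract unity: ~0 in flat areas, large at sharp features."""
    rate = image_dn / exposure_s  # DN/s; evens out auto-exposure changes
    return rate / boxcar(rate, box) - 1.0

img = np.full((64, 64), 100.0)  # synthetic flat scene
img[32, 32] = 500.0             # one sharp feature
hp = highpass(img, exposure_s=0.25)
```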
Next, tie points were selected manually on features that were obvious in multiple images. Each selected tie point was refined using a local feature matching algorithm so that the relative position of features in the different images was known to within one pixel. The relative positions of the tie points were used to determine the relative altitudes of the MI in the various images (through variation in image scale) and any twist around the optical axis (through image rotation) caused by the fact that the IDD has 5 degrees of freedom, not 6. Generally, the IDD motion included very little twist. The full 6-degree-of-freedom position and orientation of the MI was actually determined from the tie points, to correct small errors in the actual vs. planned position of the IDD turret. All images were then rescaled with bi-cubic convolution interpolation such that tie points were aligned with their locations in the top image of the stack (or a different user-selected image). The rescaling was done separately for both the set of raw (DN/sec) images and the high-pass images.
The processing proceeded in two stages. A first approximation of depth was determined by simply looking up, for each pixel in the aligned images, which image had the largest absolute magnitude of high-frequency component. Pixels were not compared to other pixels, as the true high-frequency component varied across the scene. But for a specific pixel, the image in which the local high frequency component was maximized was taken to be the image when that pixel was in focus. The depth for each pixel was then set to be the altitude corresponding to the in-focus image (the constant offset for pupil-to-target distance for best focus was ignored, as all depth information was treated as relative within the same scene). At the end of this stage, an in-focus image was created by assembling the best-focus raw image value (DN/sec) for all pixels into a new image.
The first stage resulted in a depth map at the resolution of IDD motion, with typically 3 to 7 different altitudes used for one focal series and 3 mm steps. The amount of high frequency information was generally a smooth function of altitude (i.e., the depth of field is fairly well sampled). The second stage increased the depth resolution by going back to each pixel and performing a second order polynomial fit to find the altitude of best focus. Because each pixel in the high pass filtered version had some information from neighboring pixels and because of inherent noise in the process, a 5x5 pixel median filter was used to eliminate outliers, and the depth map was smoothed with a 15x15 pixel boxcar average. The final horizontal resolution in the depth map is therefore near 15 pixels, or 0.46 mm. Repeat image sequences of the same target suggest depth repeatability is ~1/5 the step size, or ~0.6 mm. Finally, the in-focus image was updated by polynomial interpolation of the raw images in depth to the best-focus position.
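The second-stage refinement amounts to a per-pixel parabola fit of the focus metric versus altitude; the vertex of the parabola gives the best-focus altitude at sub-step resolution. A sketch for a single pixel, with illustrative metric values sampled at the 3 mm step size:

```python
import numpy as np

def best_focus_altitude(altitudes_mm, focus_metric):
    """Fit a second-order polynomial to the focus metric sampled at
    the discrete IDD altitudes and return the parabola's vertex,
    i.e. the altitude of best focus at sub-step resolution."""
    a, b, _ = np.polyfit(altitudes_mm, focus_metric, 2)
    return -b / (2.0 * a)  # vertex of a*z^2 + b*z + c

# Illustrative focus-metric samples for one pixel, 3 mm steps:
alts = np.array([0.0, 3.0, 6.0, 9.0, 12.0])
metric = np.array([0.10, 0.35, 0.80, 0.55, 0.15])
z_best = best_focus_altitude(alts, metric)
```

Here the discrete samples peak at 6 mm, but the fit places the best focus slightly above 6 mm because the metric falls off more slowly on the high side.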
The primary output of this procedure was a merged image that is, in principle, all in focus. This image was saved as a TIFF file named "<id>_raw.tif", where <id> is a text identifier with the following format: R###_target, where R is the rover (A for Spirit, B for Opportunity), ### is the 3-digit sol number, and target is the target name. The first ancillary product was a depth map, saved as "<id>_dem.tif" and "<id>_dem.txt". The text file contains scaling factors that convert the 0-255 range within the TIFF to elevations in mm (with an arbitrary zero point). The depth map was then used to project the image into synthetic left and right eye views, archived as "<id>_[RL].jpg". The projection was a simple shift of pixel values left or right depending on depth. The magnitude of the shift was dynamic, such that the full range of depths in the image was displayed. The projection resulted in variations in vertical exaggeration, so the DEM files should be used rather than the anaglyphs for quantitative assessment of topography. The right and left eye views were then combined into an anaglyph for stereo viewing, "<id>_ana.jpg".
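The product naming and DEM scaling conventions above can be exercised as follows. The target name and the scale/offset factor names are made-up examples; consult the actual "<id>_dem.txt" file for its contents.

```python
import re

def parse_mi_id(product_id):
    """Split an <id> like 'A041_example_target' into rover name,
    sol number, and target name ('example_target' is a made-up name)."""
    m = re.match(r"([AB])(\d{3})_(.+)$", product_id)
    rover = {"A": "Spirit", "B": "Opportunity"}[m.group(1)]
    return rover, int(m.group(2)), m.group(3)

def dem_to_mm(dn, scale_mm_per_dn, offset_mm):
    """Map a 0-255 depth-map value to elevation in mm (arbitrary zero).
    The scale/offset parameter names are hypothetical; the real factors
    are read from the companion _dem.txt file."""
    return offset_mm + dn * scale_mm_per_dn

rover, sol, target = parse_mi_id("A041_example_target")
# e.g. 12 mm of total relief spread across the 8-bit range:
elev = dem_to_mm(128, scale_mm_per_dn=12.0 / 255.0, offset_mm=0.0)
```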
MERx-M-MI-2-EDR-OPS-V1.0
MERx MARS MICROSCOPIC IMAGER EDR OPS V1.0
The MER MI EDR data set contains raw uncalibrated image data from the MI onboard the rover, used to understand fine-scale, small surface features.
Data Set Overview
This data set contains data to help understand the fine-scale morphology, reflectance, and texture of rock surfaces and soil as well as the accumulation of dust on the capture and filter magnets. Several types of imaging data products can be created onboard the rover. Image data volume can be reduced by summing rows or columns, subframing (or windowing), or downsampling. Because the goal of MI observations is to resolve small features on Mars, row or column summing is not likely to be performed on MI images. However, subframing (selecting a part of the image for downlink) and/or downsampling (calculating a mean or median of pixels in specified blocks) can be used to reduce MI data volume for downlink. Subframe products are defined by starting row and column and by number of rows and columns. Downsampling can be used to create a thumbnail version of an image for rapid downlink and assessment on the ground. If the thumbnail indicates that the image is of scientific interest, the full-resolution image can be later returned to Earth. A histogram of the image data can also be generated and returned to Earth as a separate product. Reference pixels are returned as a separate product if requested.
Processing
This data set uses the Committee on Data Management and Computation (CODMAC) data level numbering system. The MER Camera Payload EDRs are considered Level 2 or Edited Data (equivalent to NASA Level 0). The EDRs are reconstructed from Level 1 or Raw Data, which are the telemetry packets within the project specific Standard Formatted Data Unit (SFDU) record. They are assembled into complete images, but are not radiometrically or geometrically corrected.
Microscopic Imager EDR data products were generated by the Multi-mission Image Processing Lab at the Jet Propulsion Laboratory using the telemetry processing software mertelemproc. The EDRs produced are raw uncalibrated data reconstructed from telemetry packet SFDUs and formatted according to the Camera EDR/RDR Software Interface Specification. Meta-data acquired from the telemetry data headers and a meta-data database were used to populate the PDS label. There will not be multiple versions of a MER Camera Payload EDR. Missing packets will be identified and reported for retransmission to the ground as partial datasets. Prior to retransmission, the missing EDR data will be filled with zeros. The EDR data will be reprocessed only after all partial datasets are retransmitted and received on the ground. In these cases, the original EDR version will be overwritten. The EDR data product will be placed into FEI for distribution.
Data
The data packaged in the camera data files will be decoded, decompressed camera image data in single frame form as an Experiment Data Record (EDR). The Full Frame form of a standard image data file has the maximum dimensions of 1024 lines by 1024 samples.
- Full Frame EDR Full Frame EDRs are stored as 16-bit signed integers. If 12-to-8 bit scaling is performed, then pixels are stored in 16-bit format and only the last 8 bits of the 16-bit integer are used.
- Thumbnail EDR Thumbnail EDRs are stored as 16-bit signed integers or 8-bit unsigned integers. If 12-to-8 bit scaling is performed, then pixels are stored in 16-bit format and only the last 8 bits of the 16-bit integer are used. The Thumbnail EDR is a scaled-down version of the original acquired image (i.e., camera-returned pixel data), and the size of the binary EDR image data is variable. However, the original acquired image is not always downlinked. The main purpose of a Thumbnail EDR is to provide an image summary using a very low data volume compared to the original image.
- Sub-frame EDR Sub-frame EDRs are a subset of rows and columns of the 1024 x 1024 full frame image. Sub-frame EDRs are stored as 16-bit signed integers. If 12-to-8 bit scaling is performed, then pixels are stored in 16-bit format and only the last 8 bits of the 16-bit integer are used.
Software
MER Camera Payload downlink processing software is focused on rapid reduction, calibration, and visualization of images in order to make discoveries, to accurately and expeditiously characterize the geologic environment around the rover, and to provide timely input for operational decisions concerning rover navigation and Instrument Deployment Device (IDD) target selection. Key software tools have been developed at Cornell University, at JPL by the MIPL, SSV, and APSS groups, at NASA Ames, and at the USGS/Flagstaff. These tools can also be used to process MI images (see below), as well as Navcam and Hazcam images, which have substantial scientific potential in addition to their operational importance.
PDS-labeled images and tables can be viewed with the program NASAView, developed by the PDS and available for a variety of computer platforms from the PDS web site. There is no charge for NASAView.
Media/Format
The data set will initially be delivered and kept online. Upon Mission completion, the Microscopic Imager EDRs will be delivered to PDS on DVD as part of the complete MER EDR data set.
MERx-M-MI-5-ANAGLYPH-OPS-V1.0
MERx MARS MICROSCOPIC IMAGER ANAGLYPH RDR OPS V1.0
The MER MI Anaglyph data set consists of radiometrically decalibrated, camera model corrected, and/or geometrically altered raw camera data acquired by the MI on the Mars Exploration Rovers. For details, see Stereo Anaglyph data set description.
MERx-M-MI-3-ILUT-OPS-V1.0
MERx MARS MICROSCOPIC IMAGER INVERSE LUT RDR OPS V1.0
The MER MI Inverse LUT data set is comprised of radiometrically decalibrated, camera model corrected, and/or geometrically altered raw camera data acquired by the MI on the Mars Exploration Rovers. For details, see Inverse Look-up Table data set description.
MERx-M-MI-4-LINEARIZED-OPS-V1.0
MERx MARS MICROSCOPIC IMAGER LINEARIZED RDR OPS V1.0
The MER Linearized data set is comprised of radiometrically decalibrated, camera model corrected, and/or geometrically altered raw camera data acquired by the MI on the Mars Exploration Rovers. For details, see Linearized data set description.
MERx-M-MI-5-MOSAIC-OPS-V1.0
MERx MARS MICROSCOPIC IMAGER MOSAICS RDR OPS V1.0
The MER Mosaic Images data set contains single RDR products made up from multiple MI frames mosaicked together. For details, see Mosaic Images data set description.
MERx-M-MI-3-RADIOMETRIC-OPS-V1.0
MERx MARS MICROSCOPIC IMAGER RADIOMETRIC RDR OPS V1.0
The MER Radiometrically Corrected Image data set is comprised of radiometrically corrected RDR products from any of the camera instruments, used to meet time constraints imposed by rover planners in traverse planning work. For details, see Radiometric Corrections data set description.
See Also