# showapi.level0¶

Source code: showapi.level0

This module is used to read and manipulate SHOW level 0 data. The general processing flow is

1. Create an instance of SHOWLevel0 for the desired instrument and desired data folders.
2. Use the instance of SHOWLevel0 to create an instance of L0Algorithms.
3. Apply processing algorithms from methods of both L0Algorithms and SHOWLevel0 to manipulate the images in the SHOWLevel0 object.

The SHOWLevel0 object sorts all images into an internal dictionary keyed by exposure time expressed in microseconds. For each unique exposure time there is one instance of class Level0_ImageCollection that stores all the images at that exposure time. Conceptually, each Level0_ImageCollection stores its image records as a list of Level0_Image objects, which can be indexed or iterated over using standard Python constructs.

## SHOWLevel0¶

A class to manage Level 0 file I/O and a few processing operations. SHOW Level 0 objects are created for specific instruments and will load configuration parameters accordingly. The class implements indexing and iterating options for ease of use. See the Examples at the bottom of the page.

class showapi.level0.showlevel0.SHOWLevel0(instrumentname: str, netcdf_filename: str, groupname: str)[source]
__init__(instrumentname: str, netcdf_filename: str, groupname: str)[source]

Constructs the SHOW level 0 object for the given instrument and optionally loads images from one or more directories.

Parameters:

• instrumentname – The name of the instrument configuration. This is used to look up a yaml configuration file stored in the config folder. The configuration file indicates the file format to be used to read the images. Valid values include:
  • timmins_2014 for data collected by the original SHOW instrument used for the Timmins 2014 balloon flight
  • er2_2017 for the SHOW instrument using the XIPHOS telemetry system developed for the ER2 flight in 2017. See notes on the XIPHOS directory format below.
  • uofs_dec2016 for images collected with the EPIX framegrabber at the University of Saskatchewan in December 2016
• dirnames – Default is None. If not None then dirnames is either one list_of_files or a list of directories; the new instance loads images from all of the directories. Each list_of_files is specified as a string. If dirnames is None then no images are loaded and the user may call read_level0() at a later time to read in images.
• collection – Default is None. If not None then the SHOW Level 0 object is assigned this collection of images.
algorithms() → showapi.level0.l0algorithms.L0Algorithms[source]

Fetch the Level0 algorithm object associated with this Level 0 object

Returns: A L0Algorithms object.
average_images(index: typing.List[int] = None, darkcurrent: typing.Dict[float, showapi.level0.showlevel0.L0ImageStats] = None, flatfield: numpy.ndarray = None) → showapi.level0.showlevel0.L0ImageStats[source]

Averages all of the images in the collection and returns the average, standard deviation and error. Note that you will probably want to sort the data by exposure time first, as we do not divide by exposure time.

Parameters:

• darkcurrent – a dictionary of dark current statistics for different exposure times. The dark current that matches each image's exposure time is found and subtracted from the image before the image is averaged. Default is None, which means no dark current correction is applied.

Returns: a three element tuple [average image and header, error image, standard deviation of image], Tuple[Level0_Image, np.ndarray, np.ndarray].
image_at(index: typing.Union[int, typing.List[int]], dtype='f8') → numpy.ndarray[source]

Returns the image at locations given by index as a floating point array

Parameters: index – The index, or list of indices, of the image(s) to return.
instrument_name() → str[source]

Return the instrument name of this Level 0 object

make_dark_current(netcdf_filename: str, groupname: str) → typing.Dict[float, showapi.level0.showlevel0.L0ImageStats][source]

Creates an internal dark current, L0ImageStats, object from the images in the list of directories. A dark current entry is created for each unique exposure time. The user is responsible for ensuring that all the images in each list_of_files are good dark current images. No attempt is made to detect and eliminate outliers. Note that it can take several minutes to process the directories if they contain hundreds or thousands of images.

Parameters: dark_dirnames – Either a string or a list of strings. Each string is the name of a dark current image list_of_files used to make the dark current statistics.
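The per-exposure-time grouping and averaging that make_dark_current performs can be sketched with plain numpy. Everything here is illustrative — the function name and the (exposure, image) pair input stand in for the real netcdf reading:

```python
import numpy as np

def dark_stats_by_exposure(records):
    """Group (exposure_usec, image) pairs by exposure time and return
    {exposure_usec: (average, stddev, error)} for each unique time."""
    stats = {}
    times = np.array([t for t, _ in records])
    for t in np.unique(times):
        stack = np.stack([img for tt, img in records if tt == t])
        avg = stack.mean(axis=0)                     # pixel-by-pixel average
        std = stack.std(axis=0, ddof=1) if len(stack) > 1 else np.zeros_like(avg)
        err = std / np.sqrt(len(stack))              # standard error of the mean
        stats[t] = (avg, std, err)
    return stats
```

As in the real method, no outlier rejection is attempted; a single bad dark frame skews the entry for its exposure time.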
make_flat_field(arma_netcdf_filename: str, arma_groupname: str, armb_netcdf_filename: str, armb_groupname: str, dark_stats: typing.Dict[float, showapi.level0.showlevel0.L0ImageStats]) → numpy.ndarray[source]

Creates an internal flat-field, L0ImageStats, object from the average of Arm A and Arm B white light images. A flat-field entry is created for each unique exposure time. Dark current is removed from both Arm A and Arm B using the results of a previous call to make_dark_current. The user is responsible for ensuring that all the images in each list_of_files have good flat-field behaviour and that there are sensible amounts of Arm A and Arm B measurements (i.e. approximately equal amounts). No attempt is made to detect and eliminate outliers. It can take several minutes to process the directories if they contain hundreds or thousands of images.

Parameters:

• arma_netcdf_filename – The name of the netcdf file containing raw, level 0, arm A images.
• arma_groupname – The name of the group inside the netcdf file containing the raw, level 0, arm A images.
• armb_netcdf_filename – The name of the netcdf file containing raw, level 0, arm B images.
• armb_groupname – The name of the group inside the netcdf file containing the raw, level 0, arm B images.
• dark_stats – The dark current image statistics. The dark current stats should have exposure times (and temperatures, which are not checked) that match the exposure times of the flat-field images.

Returns: np.ndarray
unique_integration_times() → typing.List[float][source]

Returns the list of unique integration times in microseconds in this Level 0 object

Returns: the list of integration times

## L0Algorithms¶

A collection of algorithms that can be applied to Level0 data

class showapi.level0.l0algorithms.L0Algorithms(parameters: show_config.show_configuration.Parameters) → None[source]
apodized_interferogram_with_zero_bias(x: numpy.ndarray, y: numpy.ndarray) → numpy.ndarray[source]

Apply an apodization function, y, to each height row of the interferogram, x. Apply a DC offset to the interferogram, x, such that the average of the product of the interferogram, x, and the apodization function, y, is zero. This eliminates the large zero order component bleeding into nearby frequencies after we perform the FFT.

Note that removing the DC bias from the interferogram before apodization is applied leaves a slight non-zero DC component, which makes a quite large spike in the zero harmonic that bleeds into neighbouring frequencies. This technique removes that bias and helps avoid bleed-through of the zero harmonic in the FFT spectrum.

Parameters:

• x – Original interferogram, a 2-D array of size (H,M). We need to average over M.
• y – The apodization function, a 2-D array of size (H,M). We need to average over M.

Returns: the apodized interferogram with zero bias for each height row, a 2-D array (H,M).
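The zero-bias apodization can be written in a few lines of numpy. This is a sketch, not the library's code; the per-row offset c is chosen so that the mean of the apodized product (x + c) * y is exactly zero:

```python
import numpy as np

def apodize_with_zero_bias(x, y):
    """Apply apodization y to each height row of interferogram x,
    offsetting x so the apodized product averages to zero per row.
    x, y: 2-D arrays of shape (H, M)."""
    # per-row offset c chosen so that mean((x + c) * y) == 0
    c = -np.mean(x * y, axis=1, keepdims=True) / np.mean(y, axis=1, keepdims=True)
    return (x + c) * y
```

Because the offset is applied before multiplying by y, the zero harmonic of the subsequent FFT vanishes exactly, which is the point made in the note above.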
fit_cosine_with_gaussian_envelope(krdata_subwindow: numpy.ndarray)[source]

Fits a cosine with a gaussian envelope to each row of a sub-windowed interferogram using a least squares approach.

Parameters:

• krdata_subwindow – A 2-D interferogram sub-window of size (H,M) where H is the number of height rows and M is the number of interferogram bins. The cosine and gaussian envelope is fitted to each row of the sub-window.

Returns: a 2-D array of size (6,H) where H is the number of height rows and 6 is the number of fitted parameters.

Theory: The code fits the following function to the measured interferogram,

$y = A\cos (\omega x + \phi)e^{-\frac{(x-x_0)^2}{2\sigma^2} } + K$

where $$x$$ is the interferogram bin number and we fit for the following 6 variables,

1. $$A$$
2. $$\omega$$
3. $$\phi$$
4. $$K$$
5. $$x_0$$
6. $$\sigma$$

The least squares algorithm uses the analytic differentials to find the best fit parameters. Let

$z = A\sin (\omega x + \phi)e^{-\frac{(x-x_0)^2}{2\sigma^2}}$

then the differentials are given by

$\begin{split}\frac{\partial y}{\partial A} &= \frac{y-K}{A} \\ \frac{\partial y}{\partial \omega} &= -zx \\ \frac{\partial y}{\partial \phi} &= -z \\ \frac{\partial y}{\partial K} &= 1 \\ \frac{\partial y}{\partial x_0} &= (y-K)\frac{(x-x_0)}{\sigma^2} \\ \frac{\partial y}{\partial \sigma} &= (y-K)\frac{(x-x_0)^2}{\sigma^3} \\\end{split}$
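The model and its analytic differentials translate directly into code. Below is a sketch of just the model and Jacobian (parameter order assumed to be A, ω, φ, K, x0, σ; the least-squares driver itself, e.g. scipy.optimize.least_squares, is not shown):

```python
import numpy as np

def model(x, A, w, phi, K, x0, sigma):
    """y = A*cos(w*x + phi)*exp(-(x - x0)**2 / (2*sigma**2)) + K"""
    return A * np.cos(w * x + phi) * np.exp(-(x - x0) ** 2 / (2 * sigma ** 2)) + K

def jacobian(x, A, w, phi, K, x0, sigma):
    """Analytic partials of y, stacked in the order (A, w, phi, K, x0, sigma)."""
    env = np.exp(-(x - x0) ** 2 / (2 * sigma ** 2))
    y = model(x, A, w, phi, K, x0, sigma)
    z = A * np.sin(w * x + phi) * env            # the auxiliary z defined above
    return np.stack([
        (y - K) / A,                             # dy/dA
        -z * x,                                  # dy/dw
        -z,                                      # dy/dphi
        np.ones_like(x),                         # dy/dK
        (y - K) * (x - x0) / sigma ** 2,         # dy/dx0
        (y - K) * (x - x0) ** 2 / sigma ** 3,    # dy/dsigma
    ])
```

Each row of the returned Jacobian matches the corresponding partial derivative in the equations above, which can be confirmed against finite differences.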
interferogram_subwindow(level0: numpy.ndarray, userdefinederror: numpy.ndarray = None) → typing.Tuple[[numpy.ndarray, numpy.ndarray], typing.Tuple[[int, int, int], int]][source]

Fetch the interferogram image and error arrays sub-windowed into the useful detector area.

Parameters:

• level0 – The incoming Level0_Image record.
• userdefinederror – Default None. If not None then the error on the record's image is derived from this array rather than the error field stored in the record.

Returns: A three element tuple containing (i) the sub-windowed image, (ii) the sub-windowed error and (iii) the detector bounding region as (x0,x1,y0,y1).
interferogram_to_spectrum(level0: numpy.ndarray, userdefinederror: numpy.ndarray = None) → typing.Tuple[[numpy.ndarray, numpy.ndarray, numpy.ndarray], numpy.ndarray][source]

Converts the interferogram to spectra by applying an apodization (Hanning) window and taking the FFT at each selected height level. A sub-window (see below) is selected to create clean interferograms of dimension (H,M). The clean interferogram is apodized and Fourier transformed to make a spectral 2-D image of size (H, M/2+1), i.e. we remove frequencies above the Nyquist limit. The new spectral image is returned as a Level1_Spectra object.

The code selects a sub-window as defined by the instrument parameters fft_window_start, fft_window_end, height_window_start and height_window_end. This sub-window is meant to eliminate all the useless edge areas of the detector. There are no requirements on the size of the sub-window; for example, it does not have to be a power of 2. Users should eliminate bad and marginal regions near the edges of the detector with the sub-window, as this is much better than leaving them in, where they ultimately corrupt the entire signal.

The code removes a DC bias from the original interferogram such that the zero order harmonic of the FFT is zero. There is a subtlety in this process: we remove a DC bias from the original signal so that the average of the product of the original signal and the apodization function is zero. This is not quite the same as ensuring the average of the original signal is zero. If this correction is not made then the zero harmonic gets quite large and bleeds into neighbouring spectral pixels during the FFT.

Parameters:

• level0 – A level 0 record, Level0_Image. This contains both the image which is transformed and a header. Both the header and the transformed image are copied to the level 1 spectral object, Level1_Spectra. By default an error estimate of the transform is made from the error estimate in the Level 0 record. The error estimate can be overridden using a user defined error estimate.
• userdefinederror – Default None. A user defined 2-D array specifying the error on the Level 0 interferogram image. If provided, this array is used in the error analysis and propagation instead of the Level 0 record's error field.

Returns: A three element tuple containing (i) the desired Level1_Spectra object, (ii) the windowed and apodized interferogram, array (H,M), and (iii) the windowed but not apodized interferogram, array (H,M).
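The apodize-and-FFT core of this method can be sketched in numpy. The name is illustrative; the real method also builds the Level1_Spectra object and propagates errors, which is omitted here:

```python
import numpy as np

def ifgram_to_spectrum(ifgram):
    """Hanning-apodize each height row (with a per-row zero-bias offset)
    and FFT it, keeping the M//2 + 1 non-negative frequencies.
    ifgram: 2-D array (H, M); returns a complex array (H, M//2 + 1)."""
    H, M = ifgram.shape
    w = np.hanning(M)
    # offset each row so the apodized product averages to zero,
    # which makes the zero-order harmonic vanish exactly
    c = -np.mean(ifgram * w, axis=1, keepdims=True) / np.mean(w)
    apodized = (ifgram + c) * w
    return np.fft.rfft(apodized, axis=1)
```

np.fft.rfft returns exactly the M/2+1 frequencies described above, so no explicit Nyquist truncation is needed.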
rms_frequency_error_from_rms_spatial_error(spatial_error_array: numpy.ndarray, include_apodization=True, real_component_only=False) → typing.Tuple[numpy.ndarray, numpy.ndarray][source]

Calculates the theoretical root mean square error in Fourier transform space given the root mean square error in interferogram space. The algorithm is applied to each height row independently. For each height row only one value, the RMS error, is returned.

Parameters:

• spatial_error_array – 2-D array (H,M) of errors for M points at each height (H) row.
• include_apodization – Default True. If True then the theoretical RMS error is divided by a factor of 2.
• real_component_only – Default False. If True then the theoretical RMS error is divided by sqrt(2).

Returns: the theoretical root mean square error at each height row. An array (H).
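The sqrt(2) factor for the real component follows from splitting the complex coefficient's variance evenly between its real and imaginary parts. A Monte-Carlo sketch of the underlying relation for an unapodized, unnormalized FFT, where the complex-coefficient RMS error is σ√M (the factor of 2 for apodization depends on the window normalization and is not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(1)
M, trials, sigma = 128, 4000, 0.7
noise = rng.normal(0.0, sigma, size=(trials, M))   # white noise interferograms
F = np.fft.rfft(noise, axis=1)

k = 10                                             # any interior frequency bin
rms_complex = np.sqrt(np.mean(np.abs(F[:, k]) ** 2))
rms_real = np.sqrt(np.mean(F[:, k].real ** 2))

# complex RMS ~ sigma * sqrt(M); the real part alone is smaller by sqrt(2)
ratio_complex = rms_complex / (sigma * np.sqrt(M))
ratio_real = rms_real * np.sqrt(2) / (sigma * np.sqrt(M))
```

Both ratios come out close to 1, confirming the sqrt(2) relation the real_component_only flag applies.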
rms_signal_error_from_detector_specs(dnsignal: numpy.ndarray) → numpy.ndarray[source]

Fetch the theoretical error on the signal read from the detector in DN. Considers Poisson counting statistics on the total number of electrons and the detector readout noise.

Parameters:

• dnsignal – The signal read out from the detector. Note that you may have to subtract any DC bias on the detector before calling this routine.

Returns: The calculated error in DN.
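A sketch of the standard shot-noise-plus-readout model this routine describes. The detector constants below are placeholders, not SHOW's actual specifications:

```python
import numpy as np

# Illustrative detector constants (assumptions, not the instrument's real specs)
ELECTRONS_PER_DN = 10.0     # camera gain, electrons per digital number
READOUT_NOISE_E = 50.0      # readout noise in electrons

def signal_error_dn(dnsignal):
    """Poisson shot noise on the photo-electrons plus readout noise, in DN.
    dnsignal should already have any DC bias removed."""
    electrons = np.asarray(dnsignal, dtype=float) * ELECTRONS_PER_DN
    noise_e = np.sqrt(electrons + READOUT_NOISE_E ** 2)   # quadrature sum
    return noise_e / ELECTRONS_PER_DN
```

At zero signal the error reduces to the readout noise expressed in DN; at high signal the Poisson term dominates.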
subwindow_array(image: numpy.ndarray) → numpy.ndarray[source]

Fetch this instrument's sub-window from a given image.

Parameters:

• image – An incoming 2-D array which will be sub-windowed. It is assumed to be the same shape as the detector.

Returns: The sub-windowed image as a 2-D array.

## Level0_Image¶

A level 0 record.
class showapi.level0.level0imagearray.Level0_Image[source]

Instance Attributes:

mjd

The UTC time of the image represented as a modified julian date. Float.

exposure_time

The exposure time in microseconds. Float.

sensor_temp

The detector temperature in Celsius. Float.

sensor_pcb_temp

The detector printed circuit board temperature in Celsius. Float.

top_intferom_temp

The temperature in Celsius of the top of the interferometer. Float.

bottom_intferom_temp

The temperature in Celsius of the bottom of the interferometer. Float

optobox_temp

The temperature in Celsius of the optics box. Float

Q7_temp

The temperature in Celsius of Q7. Float

high_gain

The detector gain setting. The value is True for high gain and False for low gain.

comment

An optional comment string for each image. The comment is typically supplied by the operator during data acquisition.

image

The Level 0 interferogram image expressed as a numpy 2-D float array of dimension (H,M) where H is the number of height rows and M is the number of interferogram columns. Depending upon context the image may represent either a raw 2-D image read directly from disk or some other processed level 0 image, e.g. sub-windowed or calibrated image.

error

The error on the Level 0 interferogram image. The error field may be None, meaning no error value is available. If it is not None then it will be a numpy 2-D float array of dimension (H,M) where H is the number of height rows and M is the number of interferogram columns. It will be the same size as the image attribute.

## Level0_ImageCollection¶

A collection of Level 0 records. The class implements indexing and iterating options for ease of use. The class only loads image headers when scanning a directory and defers loading image data until required. This works well if you scan through the array of images once but is not so good if you scan through the array multiple times.
class showapi.level0.level0imagearray.Level0_ImageCollection[source]
A class to support loading and manipulating a collection of SHOW level 0 images. The general concept is to
• select, modify, delete various headers
• only read image data when specifically requested for a specific record, see at()

Users can access header fields either via the attributes of the class as (numpy) arrays or via individual records using at(), which returns the image and header for one record as an instance of showapi.level0.level0imagearray.Level0_Image.

Instance Attributes:

mjd

The UTC time of each image represented as a modified julian date. This variable is a numpy 1-D array of floating point.

exposure_time

The exposure time in microseconds of each image. This variable is a numpy 1-D array of floating point.

sensor_temp

The detector temperature in Celsius for each image. This variable is a numpy 1-D array of 32 bit floating point. The OWL640 sensor_temp is normally derived from interpolation of the telemetry stream.

sensor_pcb_temp

The detector printed circuit board temperature in Celsius for each image. This variable is a numpy 1-D array of 32 bit floating point. The OWL640 sensor_pcb_temp is normally derived from interpolation of the telemetry stream.

top_intferom_temp

The temperature in Celsius of the top of the interferometer for each image. This variable is a numpy 1-D array of 32 bit floating point. The OWL640 top_intferom_temp is derived from interpolation of the telemetry stream.

bottom_intferom_temp

The temperature in Celsius of the bottom of the interferometer for each image. This variable is a numpy 1-D array of 32 bit floating point. The OWL640 bottom_intferom_temp is derived from interpolation of the telemetry stream.

optobox_temp

The temperature in Celsius of the optics box for each image. This variable is a numpy 1-D array of 32 bit floating point. The OWL640 optobox_temp is derived from interpolation of the telemetry stream.

Q7_temp

The temperature in Celsius of Q7. This variable is a numpy 1-D array of 32 bit floating point. The OWL640 Q7_temp is derived from interpolation of the telemetry stream.

high_gain

The detector gain setting for each image. The value is True for a high gain exposure and False for a low gain exposure. This variable is a numpy 1-D array of bool.

comment

An optional comment string for each image. The comment is typically supplied by the operator during data acquisition.

append(other: showapi.level0.level0imagearray.Level0_ImageCollection) → int[source]

Appends the image records in the 'other' collection to this collection. Returns the number of images in this collection.

Parameters: other – the image collection to append to this collection

Returns: the number of images in this collection after the append.

Create the collection of standard level 0 images from headers read in from the raw file. This erases the current contents of the collection. This method is generally reserved for internal use.

Parameters: headers – A list of dictionaries, one dictionary for every image read in from file. The keys in the dictionaries follow undocumented internal Level 0 decoding conventions.

Returns: True on success.
at(index: int, get_image: bool = True) → showapi.level0.level0imagearray.Level0_Image[source]

Fetches the specified instance of header and image from the collection of images. The code will only fetch the image itself if get_image is True as this is usually the slow part of the entire process.

Parameters:

• index – The zero based index of the required image. The code raises a RuntimeError exception if the index is out of range.
• get_image – Default True. Controls whether the image field is filled or left blank. The image field is properly filled if this parameter is True and set to None if it is False. This is useful for fetching headers without reading and storing the images.

Returns: The Level 0 image as an instance of class Level0_Image.
average_images(darkcurrent: typing.Dict[float, showapi.level0.level0imagearray.ImageStats] = None, ff: typing.Dict[float, numpy.ndarray] = None) → showapi.level0.level0imagearray.ImageStats[source]

Averages all of the images in the collection and returns the average, standard deviation and error. Note that you will probably want to sort the data by exposure time first, as we do not divide by exposure time.

Parameters:

• darkcurrent – a dictionary of dark current statistics for different exposure times. The dark current that matches each image's exposure time is found and subtracted from the image before the image is averaged. Default is None, which means no dark current correction is applied.

Returns: a three element tuple [average image and header, error image, standard deviation of image], Tuple[Level0_Image, np.ndarray, np.ndarray].
numimages() → int[source]

Fetches the number of images in this collection of images

Returns: The number of images in the collection
slice(index: numpy.ndarray) → showapi.level0.level0imagearray.Level0_ImageCollection[source]

Slice the collection of images with the given indices and return the slice as a new collection.

Parameters: index – an array of indices, typically generated by numpy.where or Level0_ImageCollection.where_exposuretimes

Returns: a new image collection with just the subset of images.
sortbytime() → showapi.level0.level0imagearray.Level0_ImageCollection[source]

Sort the collection of images into chronological order and return the result as a new collection.

Returns: a new image collection sorted by time.
split_by_commentfield() → typing.Dict[str, _ForwardRef('Level0_ImageCollection')][source]

Splits the current collection by the comment field. It creates a dictionary of Level0_ImageCollection sorted by the unique comment fields.

split_by_exposuretime() → typing.Dict[float, _ForwardRef('Level0_ImageCollection')][source]

Splits the current collection by exposure time. It creates a dictionary of Level0_ImageCollection sorted by exposure time.

unique_commentfield() → typing.List[str][source]

Fetch the unique comment fields in this collection.

Returns: the list of unique comment fields

unique_exposuretimes() → numpy.ndarray[source]

Fetch the unique exposure times in this collection.

Returns: the sorted array of unique exposure times, np.ndarray

where_comment(thiscomment: str) → numpy.ndarray[source]

Fetch the indices of comment fields in this collection that match the selected comment string. This method returns an array that can be passed to Level0_ImageCollection.slice to create a new collection of images with the same comment message.

Returns: the indices of the matching comment fields in this collection, np.ndarray
where_exposuretimes(exposuretime: float) → numpy.ndarray[source]

Fetch the indices of exposure times in this collection that match the selected exposure time. This method returns an array that can be passed to Level0_ImageCollection.slice to create a new collection of images with a fixed exposure time.

Returns: the indices of the requested exposure times in this collection, np.ndarray
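The where/slice pattern amounts to numpy index selection. A standalone sketch of the selection step using only a plain exposure-time array (no showapi objects):

```python
import numpy as np

# per-image exposure times in microseconds, as the collection's attribute exposes them
exposure_time = np.array([100.0, 250.0, 100.0, 500.0, 100.0])

# what where_exposuretimes conceptually returns: indices of the matching records
idx = np.where(exposure_time == 100.0)[0]

# a slice-style subset keeps only the matching records
subset = exposure_time[idx]
```

The same index array can select any of the parallel header attributes (mjd, sensor_temp, …), which is why the collection returns indices rather than records.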

## Level1_Spectra¶

A level 1 record containing the SHOW-SHS spectrum
class showapi.level0.l0algorithms.Level1_Spectra[source]

Instance Attributes:

mjd

The UTC time of the image represented as a modified julian date. Float.

exposure_time

The exposure time in microseconds. Float.

sensor_temp

The detector temperature in Celsius. Float.

sensor_pcb_temp

The detector printed circuit board temperature in Celsius. Float.

top_intferom_temp

The temperature in Celsius of the top of the interferometer. Float.

bottom_intferom_temp

The temperature in Celsius of the bottom of the interferometer. Float

optobox_temp

The temperature in Celsius of the optics box. Float

Q7_temp

The temperature in Celsius of Q7. Float

high_gain

The detector gain setting. The value is True for high gain and False for low gain.

comment

An optional comment string for each image. The comment is typically supplied by the operator during data acquisition.

spectrum

The Level 1 spectral image expressed as a numpy 2-D float array of dimension (H,M) where H is the number of height rows and M is the number of transform frequencies, typically half the number of interferogram pixels.

error

The error on the Level 1 spectrum. The error field may be None, meaning no error value is available. If it is not None then it will be a numpy 2-D float array of dimension (H,M) where H is the number of height rows and M is the number of transform frequencies. It will be the same size as the spectrum attribute.

ifgram_bounds

The sub-window bounds used to select the useful detector area from the original interferogram. This is a four element tuple (x0,x1,y0,y1).

## ImageStats¶

A namedtuple that holds statistics on a collection of Level 0 images.
class showapi.level0.level0imagearray.ImageStats(average, error, stddev, median)

A simple namedtuple to hold image statistics.

average

The average of all the images on a pixel by pixel basis. The average is an instance of showapi.level0.level0imagearray.Level0_Image so you have access to averaged header information as well as the image. Access statsobj.average.image if you want to get at the numpy 2-D image.

stddev

The standard deviation of all images expressed as a numpy 2-D array

error

The error of all images expressed as a numpy 2-D array. This is typically the standard deviation divided by the square root of the number of images.

median

The median value of all images that contributed to the statistics. This can be useful if you think the statistics are being skewed by a bad image.

## ComplianceTests¶

The class that manages the SHOW compliance tests.
class showapi.compliancetests.compliancetests.ComplianceTests(instrumentname: str = 'er2_2017') → None[source]
analyze_configuration_A(netcdf_filename: str, krypton_group: str, dark_group: str, arma_group: str, armb_group: str, plot_height_pixels: typing.Tuple[[int, int], int] = (10, 75, 140))[source]

Analyze configuration A.

Parameters: krypton_directory – dark_directory – arma_directory – armb_directory – plot_height_pixels –
analyze_configuration_B(netcdf_filename: str, white_groupname: str, dark_groupname: str, arma_groupname: str, armb_groupname: str, littrow_wavenum: float = 7336.016707, fwhm_wavenum: float = 0.268485, plot_height_pixels: typing.Tuple[[int, int], int] = (12, 75, 135), plot_h2o_xsection: bool = False) → bool[source]

Executes compliance tests for instrument configuration B. This is a white light configuration and is used to perform the spectral range validation. The spectral range is limited by an interference filter from 1363 nm to 1366 nm. This test is really a check on the filter to be sure that it has not delaminated and that the filter bandpass still covers a spectral range that includes water lines that we expect to see.

This test provides a quick method to observe white light spectra. These spectra, if taken under normal laboratory conditions, should exhibit water absorption features. The test provides the option for the user to plot the water cross-section alongside the spectra.

We also apply a signal-to-noise check on the system.

Parameters: exposuretime_usecs – white_directory – dark_directory – arma_directory – armb_directory – littrow_wavenum – fwhm_wavenum – plot_height_pixels –
analyze_configuration_C(dark_directories: typing.Union[str, typing.List[str]])[source]
This function implements the Benchmark: Dark Current and Bias described in the SHOW Compliance Test and Performance Benchmarking document. We expect to perform this test with 5 unique exposure times (~0, 100, 200, 300 and 400 milliseconds). However, the fit algorithm will work properly with 2 or more unique exposure times. The plotting code only works properly with 6 or fewer unique exposure times.

Parameters: dark_directories – A list of directories containing (only) dark current images. The code reads in all images from the list of directories and sorts them by exposure time. It is the caller's responsibility to ensure that all the images are only dark current.

Returns: True
analyze_contrast(algo: showapi.level0.l0algorithms.L0Algorithms, krdata: numpy.ndarray)[source]

Evaluates the difference between a cosine function modulated by a gaussian envelope and the measured spectrum.

Parameters: algo – krdata –
analyze_contrast_zero_crossing(algo: showapi.level0.l0algorithms.L0Algorithms, krdata: showapi.level0.level0imagearray.Level0_Image)[source]

zero crossing docs

Parameters: algo – krdata –
instrument_name()[source]

Returns the instrument name, usually er2_2017.

make_spectralaverage_from_L0collection(L0: showapi.level0.showlevel0.SHOWLevel0, algorithm: showapi.level0.l0algorithms.L0Algorithms, darkcurrent=None, ff=None, imageindex=None)[source]

Make the FFT from each element of an array of interferograms. Get the average and the standard deviation.

Parameters: L0 – algorithm –
test_detector_noise(white_light_directories: typing.Union[str, typing.List[str]], dark_directories: typing.Union[str, typing.List[str]])[source]
This function implements the Benchmark: Detector Noise described in the SHOW Compliance Test and Performance Benchmarking document.

Parameters: dark_directories – A list of directories containing (only) dark current images. The code reads in all images from the list of directories and sorts them by exposure time. It is the caller's responsibility to ensure that all the images are only dark current.

Returns: True

## Examples¶

A few examples

### Example 1, Simple Iteration¶

Iterating over the exposure times in SHOWLevel0 and the images in Level0_ImageCollection using old-school, simple indexing:

import argcommon.mjd
import showapi.level0

l0 = showapi.level0.SHOWLevel0('er2_2017', r'C:\sdcard0\images\2016-11-28_16-27')           # load in all images in this directory
for exposuret in l0.unique_integration_times():                                             # Iterate over the list of exposure times in the SHOWLevel0 object
    imagecoll = l0[exposuret]                                                               # Get the list of images for this exposure time
    for i in range(len(imagecoll)):                                                         # for each image in the collection
        record = imagecoll[i]                                                               # get the image record using the numerical index
        mjd = argcommon.mjd.MJD(record.mjd)                                                 # print stats on this record
        print("Time                   = %s" % (mjd.AsUTCStr(True)))
        print('Exposure time          = %8.3f ms' % (record.exposure_time / 1000.0,))

### Example 2, Modern Iteration¶

Exactly the same functional code as example 1 but recast into a more modern iterating style using the iterator support built into the objects:

import argcommon.mjd
import showapi.level0

l0 = showapi.level0.SHOWLevel0('er2_2017', r'C:\sdcard0\images\2016-11-28_16-27')           # load in all images in this directory
for imagecoll in l0:                                                                        # Iterate over the collection of images at each exposure time
    for record in imagecoll:                                                                # for each image in the collection
        mjd = argcommon.mjd.MJD(record.mjd)                                                 # print stats on this record
        print("Time                   = %s" % (mjd.AsUTCStr(True)))
        print('Exposure time          = %8.3f ms' % (record.exposure_time / 1000.0,))

### Example 3¶

A more extended example that uses the modern iterating style of example 2 but also creates an instance of L0Algorithms and applies a method to the record before generating statistics. In summary this code executes the following steps,

1. Read a directory of Level 0 images using the er2_2017 instrument configuration.
2. Create the L0Algorithms object and select the useful sub-window imaging region.
3. Generate image min/max statistics
4. Plot the image data
import numpy as np
import matplotlib.pyplot as plt
import argcommon.mjd
import showapi.level0

l0 = showapi.level0.SHOWLevel0('er2_2017', r'C:\sdcard0\images\2016-11-28_16-27')           # load in all images in this directory
algol = l0.algorithms()                                                                     # get the level 0 algorithm object

N = 0                                                                                       # Reset the total number of images read in
for imagecoll in l0:                                                                        # Get the collection of images at each exposure time
    N += len(imagecoll)                                                                     # Count the total number of images at this exposure
    for record in imagecoll:                                                                # for each record at this exposure time
        image, error, bounds = algol.interferogram_subwindow(record)                        # get the sub-window data, only keep the active area of the detector
        maxsig = np.max(image)                                                              # get the maximum signal in the image
        minsig = np.min(image)                                                              # get the minimum signal in the image
        p2 = np.percentile(record.image, 2)                                                 # get the 2 percent percentile value
        p98 = np.percentile(record.image, 98)                                               # get the 98 percent percentile value
        #
        mjd = argcommon.mjd.MJD(record.mjd)                                                 # print stats on this record
        print("Time                   = %s" % (mjd.AsUTCStr(True)))
        print("Min, Max, 2%%, 98%%      = %7.1f, %7.1f, %7.1f, %7.1f" % (minsig, maxsig, p2, p98))
        print('Exposure time          = %8.3f ms' % (record.exposure_time / 1000.0,))
        print('High Gain              = %d' % (int(record.high_gain),))
        print('Sensor Temp            = %7.2f C' % (record.sensor_temp,))
        print('Set Point Temp         = %7.2f C' % (record.setpoint_temp,))
        print('PCB temp               = %7.2f C' % (record.sensor_pcb_temp,))
        print('Top Interferom Temp    = %7.2f C' % (record.top_intferom_temp,))
        print('Bottom Interferom Temp = %7.2f C' % (record.bottom_intferom_temp,))
        print('Opto Box Temp          = %7.2f C' % (record.optobox_temp,))
        print('Q7 Temp                = %7.2f C' % (record.Q7_temp,))
        print("--------------------------------------------------------------")

        plt.imshow(image, clim=[p2, p98], origin='lower')                                   # plot this image, use the 2nd and 98th percentiles for colour limits
        plt.colorbar()                                                                      # show a colorbar
        plt.show()                                                                          # and show the image. Stops execution until user deletes the figure
print()
print('Finished printing %d records' % (N,))

## XIPHOS directory format¶

The SHOW Level 0 file I/O code uses a custom decoder to read the XIPHOS/OWL640 files and directories generated by the SHOW-SHS instrument. The custom decoder expects the data directories to be laid out in a standard format and it is essential users maintain this structure. The directory structure follows the format:

.... sdcard0 --+-- images ------ 2016-11-28_16-27
               |-- telemetry --- 2016-11-28_16-27

In this case, the user would select directory .... sdcard0/images/2016-11-28_16-27 as the directory given to the constructor of SHOWLevel0 when using an instrument name of er2_2017. The OWL640 decoding code will automatically step up two folders into sdcard0 and down two folders into directory telemetry/2016-11-28_16-27 to read the corresponding telemetry files. The OWL640 format linearly interpolates variables stored in the telemetry files to the time of each image and stores the result in the image header. Variables not found or not properly interpolated will be set to NaN.
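The images-to-telemetry directory mapping described above can be sketched with pathlib (the helper name is illustrative; the real decoder performs this step internally):

```python
from pathlib import Path

def telemetry_dir_for(images_dir):
    """Map .../sdcard0/images/<timestamp> to .../sdcard0/telemetry/<timestamp>,
    mirroring the two-up, two-down step the OWL640 decoder takes."""
    p = Path(images_dir)
    return p.parent.parent / "telemetry" / p.name
```

Keeping the images and telemetry trees parallel is what makes this fixed mapping possible, which is why the directory structure must be maintained.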