The Messenger 108 (June 2002), p. 4-9
In the case of ESO, the Data Flow Operations Group in Garching (DFO, also frequently called QC Garching), provides many aspects of data management and quality control of the VLT data stream. One of the main responsibilities is to assess and control the quality of the calibration data taken, with the goal to know and control the performance of the VLT instruments. Information about the results of this process is fed back to Paranal Science Operations and to the ESO User Community via QC reports and web pages.
The constant flow of raw data from the VLT instruments splits into data streams for the science data and the calibration data. The calibration data stream has two major components:
This article describes the Quality Control process for the four presently operational VLT instruments: FORS1+2, ISAAC and UVES. This process will be extended and refined for the next suite of instruments coming soon, VIMOS, NACO, and FLAMES, and ultimately expanded to all VLT instruments.
Pipelines. Fundamental to the QC process is the use of automatic data processing packages, the pipelines. Without these, effective quality control of the huge amount of data produced by the Observatory would be impossible. In fact, the primary goal of the data reduction pipelines is to create calibration products and support quality control. Only after this comes the reduction of science data.
With the large-scale use of data processing pipelines, the Quality Control group effectively also has the function of assessing and improving the accuracy of the pipelines. As a by-product, we provide documentation about the pipeline functions from the user's point of view.
The usual day-to-day workflow of the QC scientists has as primary components:
The QC process. There is a natural three-floor pyramid in the QC process (Figure 1):
In practice, after some initial phase when indeed everything is inspected, one usually decides to switch to a 'confidence mode' where, say, only every third night is inspected in depth, while for the other nights the trending plots are consulted. This strategy is economical in the case of very stable instrument performance, and mandatory with high data rates, where the 'human factor', namely the achievable level of concentration, ultimately limits complete product checks.
Figure 2 shows as an example the QC plot for the products of a UVES FORMATCHECK frame, a technical calibration needed by the pipeline to find the spectral format. To an experienced eye, a single glance at this plot is enough to verify that everything is fine and under control.
QC checks are also done on Paranal, directly after frame acquisition. These on-the-spot inspections are of quick-look character and apply to both raw and product data. They are extremely important for checking the actual status of the instrument, especially for instruments like UVES whose data are transferred to Garching via airmail.
2. Derive QC1 parameters. The next step in the QC process is the extraction of QC parameters. These are numbers which describe the most relevant properties of the data product in a condensed form. Since they are in most cases derived through some data manipulation (e.g. by the reduction pipeline), they are called QC1 parameters. This distinguishes them from the QC0 parameters, which mainly describe site and ambient properties like seeing, moon phase etc.
Across the instruments, there are always QC1 parameters describing the detector status, i.e. the read noise, the mean bias level, the rms of gain variations etc. Other QC1 parameters specific to spectroscopic modes are resolving power, dispersion rms, or number of identified lines. Imaging modes are controlled by QC1 parameters like zeropoints, lamp efficiency, and image quality.
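As a concrete illustration of how one such detector parameter can be derived: the read noise is commonly estimated from the difference of two bias frames, which removes fixed-pattern structure. This is only a sketch, not the actual ESO pipeline code; the frame size, bias level and gain used below are made up.

```python
import numpy as np

def read_noise(bias1, bias2, gain_e_per_adu=1.0):
    """Estimate CCD read noise from two bias frames.

    Subtracting one bias frame from the other removes fixed-pattern
    structure; the standard deviation of the difference image is
    sqrt(2) times the single-frame read noise.
    """
    diff = bias1.astype(float) - bias2.astype(float)
    rn_adu = np.std(diff) / np.sqrt(2.0)
    return rn_adu * gain_e_per_adu  # read noise in electrons

# Simulated bias frames: constant level of 200 ADU plus 5 ADU read noise
rng = np.random.default_rng(1)
b1 = 200 + rng.normal(0, 5, (512, 512))
b2 = 200 + rng.normal(0, 5, (512, 512))
print(round(read_noise(b1, b2), 1))  # close to 5.0
```

With a quarter of a million pixels the estimate is statistically very precise, which is why a single bias pair per day suffices for the health check.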
3. Trending. The top level in the QC pyramid is the trending. Trending is a compilation of QC1 parameters over time, or a correlation of one QC1 parameter against another one. Trending can typically prove that a certain instrument property is stable and working as specified. It can do much more, however. For example, trending can discover the slow degradation of a filter, or aging effects of the detector electronics.
Outliers. Within the trending process, main attention focuses on two extremes: the outliers, and the average data points. Information theory says that outliers carry the highest information content. But not all outliers are relevant for QC purposes. We need to distinguish whether an outlier comes from a bad algorithm setup, from a bad instrument setup, or just from a bad operational setup.
A bad algorithm may be e.g. a wrong code for rms determination. Such an outlier would help to improve the code. A bad instrument setup could be a stuck filter wheel with the filter vignetting the light path. A bad operational setup would be a frame claiming to be a FLAT while the lamp was not switched on. In short, evaluation of the trending data is non-trivial and requires judgement.
Finding that a certain QC1 value is stable over months or years may lead one to relax the acquisition rate of the corresponding calibration data. This may be a good idea, since over-calibration should be avoided. But one has to bear in mind that proving stability requires good coverage in time, so calibrations should still be taken more frequently than their typical variation timescale.
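The distinction between relevant and irrelevant outliers requires judgement, but the flagging itself can be automated. Below is a minimal sketch of robust outlier detection on a series of QC1 values, using the median and the median absolute deviation (MAD) so that the outliers themselves do not inflate the scatter estimate; the 5-sigma threshold and the sample values are illustrative, not taken from the actual QC system.

```python
import numpy as np

def flag_outliers(values, kappa=5.0):
    """Flag trending points deviating more than kappa robust sigmas
    from the median of the series."""
    v = np.asarray(values, dtype=float)
    med = np.median(v)
    mad = np.median(np.abs(v - med))
    sigma = 1.4826 * mad  # MAD -> Gaussian-equivalent sigma
    return np.abs(v - med) > kappa * sigma

# Example: a stable bias level with one bad operational setup
levels = [200.1, 199.8, 200.3, 200.0, 350.0, 199.9]
print(flag_outliers(levels))  # only the 350 ADU point is flagged
```

Flagged points would then be examined by the QC scientist to decide which of the three failure classes (algorithm, instrument, operations) applies.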
Certification. Once a product file has been QC checked, and its QC1 parameters have been verified to be valid entries, the data enter into the delivery channel, which involves ingestion into the master calibration archive, usage for science reduction and distribution to the end users (if taken in Service Mode). By definition, the data are then certified. Rejected data are deleted.
UVES QC monitors the following instrument components (http://www.eso.org/qc/UVES/qc/qc1.html):
detector: bias level, read noise, dark current; fringing
gratings: stability of spectral format; resolving power, precision of dispersion solution
lamps, filters: FF lamp stability, filter throughput
FORS1 QC monitors the following QC items (http://www.eso.org/qc/FORS1/qc/qc1.html), with MOS being added eventually:
General: bias level, read noise, dark current, gain, contamination
Imaging: zeropoints, colour terms, image quality
Long-Slit Spectroscopy: dispersion, resolution
ISAAC QC monitors the following QC items (http://www.eso.org/qc/ISAAC/qc/qc1.html), with the long-wavelength arm being added soon:
General: dark level, read noise
Imaging (short arm): zeropoints
The QC1 database can be considered the central memory about the status of each VLT instrument. The goal is to have available all quality information from the complete operational history of the instrument. This also includes information about interventions (e.g. mirror recoating) and replacements (optical components, detectors). Such information is vital for the interpretation of trending results. Moreover, with data collected over years, it becomes possible to detect slow degradation effects. Preventive interventions and maintenance can then be scheduled properly.
Under the URLs:
you connect to the QC1 database and view trending plots and ASCII data (Fig. 3). Here you also find detailed documentation about the QC1 parameters.
We try to follow the philosophy of presenting knowledge rather than just information. Take as an example the trending of the UVES spectral resolving power R. We do not just dump all available numbers per date, but provide documentation of the measurement process, a selection of trending plots, a correlation with slit width, and a comparison to User Manual values.
UVES filter degradation. The monitoring of the exposure level of the UVES flat-field lamps gives control over the lamp and filter status. The filter status is especially significant for the quality of science observations. In July 2001, the transmission of the blue CuSO4 filter dropped, which was discovered in the trending plot (Figure 5). The replacement of the filter gave the blue efficiency of UVES a considerable boost.
FORS1 image quality. Figure 6 combines input data from pipeline-processed SCIENCE images from FORS1. It demonstrates that in most cases FORS1 image quality is determined by the seeing and not degraded by potential errors like telescope guiding etc.
FORS1 zeropoints. Figure 7 shows the complete history of FORS1 zeropoints in the V band, spanning three years. Zeropoints measure the efficiency of the overall system, instrument plus telescope. There have been major interventions (see the caption for details), but perhaps more interesting is the fact that there is a general loss of efficiency of about 8% per year, due to degradation of the mirror surfaces.
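As a back-of-the-envelope check (not a number from the article itself), an 8% efficiency loss per year corresponds to a zeropoint slope of about 0.09 mag per year, since the zeropoint depends logarithmically on the system throughput:

```python
import math

def zeropoint_change(efficiency_ratio):
    """Zeropoint shift (mag) for a given throughput ratio.

    ZP = m + 2.5 log10(counts/s), so a change of the system
    efficiency by a factor r shifts the zeropoint by 2.5 log10(r).
    """
    return 2.5 * math.log10(efficiency_ratio)

# An 8% loss per year corresponds to a zeropoint decrease of
print(round(-zeropoint_change(0.92), 3))  # about 0.091 mag/yr
```

A slope of this size is easily visible over three years of trending data, which is exactly why the long-term zeropoint plot is such a sensitive monitor of mirror degradation.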
ISAAC photometric zeropoints. Figure 8 shows the zeropoints derived by the ISAAC pipeline for the whole Period 68. The sharp rise around MJD-OBS = 52,200 is due to an intervention which included a re-alignment. This improved the instrument efficiency by up to 0.2 mag. The long-term trend is due to efficiency degradation, while the short-term scatter in most cases is due to fluctuations of the atmospheric transparency.
On-site QC. Basic quality checks on the calibration data are performed by the Paranal daytime astronomer. Just after exposing the raw calibration data and pipeline-processing them into calibration products, the data are inspected visually. The on-line pipelines derive an essential set of QC1 parameters which is fed into a database. Essential are those QC1 parameters relating to fundamental instrument properties which, in case of failure, would jeopardize the usefulness of the science data. Such instrument health parameters are e.g. proper adjustment of gratings and filters, and proper CCD setup.
Off-line QC. The full set of quality checks is applied in Garching: anything which is not time-critical, but requires in-depth analysis, pipeline or post-pipeline procedures. This also applies to complex trending analyses requiring extended data sets. Examples are photometric zeropoints, which are determined from all standard star data of a night; colour and extinction terms, derived for a whole semester; efficiency curves; sky brightness etc.
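To illustrate the first of these examples, a photometric zeropoint follows from a single standard star measurement once an extinction coefficient is adopted; averaging over all standards of a night gives the nightly value. The convention below is a common one, and all numbers are hypothetical, not actual FORS values.

```python
import math

def zeropoint(m_cat, counts, exptime, airmass, k_ext):
    """Photometric zeropoint from one standard star measurement.

    Convention: m_cat = -2.5 log10(counts/exptime) + ZP - k_ext * airmass,
    hence   ZP = m_cat + 2.5 log10(counts/exptime) + k_ext * airmass.
    """
    return m_cat + 2.5 * math.log10(counts / exptime) + k_ext * airmass

# Hypothetical V-band standard: m = 12.0, 1.5e6 counts in 10 s at airmass 1.2,
# with an assumed extinction coefficient of 0.12 mag/airmass
zp = zeropoint(12.0, 1.5e6, 10.0, 1.2, 0.12)
print(round(zp, 2))  # -> 25.08
```

Colour terms and the extinction coefficient itself require standards observed over a range of colours and airmasses, which is why those are derived off-line for a whole semester rather than per night.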
Feedback loops. The exchange of quality information between the two sites with QC activities is especially important. The main feedback channel from QC Garching to Paranal is the set of web-published trending plots, which are updated daily. These monitor the proper function of all QC-checked components. Any anomalies are investigated in detail and reported directly to the Paranal instrument responsible.
IOT. The main platform for exchanging new results of trending studies and for developing new QC1 parameters or algorithms is the Instrument Operating Teams (IOT). As a vertical structure, each IOT has, per instrument, delegates from every team essential for proper operations, i.e. from Science Operations, the User Support Group, DFO, Pipeline Development, and Instrumentation. The teams are led by the Instrument Scientist. It is here that all the expertise about a VLT instrument is focused, and where the loop between QC results and improvement of instrument performance is closed most efficiently.
In practice, however, complete calibration is not possible even for simple instrument modes. For instance, the imaging mode of FORS1 has 5 standard filters, 4 CCD read modes and 2 different collimators. Obtaining a complete set of calibration frames, including twilight flats and standard stars, is not feasible every night. For a more complex instrument like UVES, with 12 standard setups, roughly 20 different slit widths and 2 CCD modes, the parameter space becomes forbiddingly large for routinely calibrating all settings.
Hence, calibration data are usually triggered by the science setups actually used during the preceding night. To these are added the daily health check calibrations. On Paranal, an automatic tool collects this information into the daytime calibration queue.
Even this strategy still produces a large calibration overhead, both in terms of exposure time and archive disk space. So, after some initial epoch during which confidence has been gained that the calibration plan is complete, one may start thinking about how to optimize the plan. In the case of UVES, the trending studies have shown that the most relevant instrument properties usually vary on timescales much longer than a few days. Based on this experience, a three-day memory has been implemented in the calibration plan: calibrations for an identical setup are repeated only every three days. The only exception is the wavelength calibration. The health check calibrations are still executed daily, in order to prove that nothing irregular has happened, e.g. an earthquake, which would clearly break the long-term trending assumption.
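The three-day-memory rule can be expressed as a simple scheduling predicate. The sketch below is an illustrative reconstruction, not the actual Paranal tool; the calibration type names and the daily-exception set are assumptions.

```python
from datetime import date, timedelta

MEMORY = timedelta(days=3)       # re-calibrate a setup after this interval
ALWAYS_DAILY = {"wavelength"}    # exceptions taken every day regardless

def needs_calibration(cal_type, last_done, today):
    """Decide whether a calibration for a given setup enters the
    daytime queue: daily exceptions always do; everything else only
    if the last execution is older than the memory period."""
    if cal_type in ALWAYS_DAILY:
        return True
    return last_done is None or (today - last_done) >= MEMORY

today = date(2002, 6, 15)
print(needs_calibration("flat", date(2002, 6, 13), today))        # False
print(needs_calibration("flat", date(2002, 6, 12), today))        # True
print(needs_calibration("wavelength", date(2002, 6, 14), today))  # True
```

The same predicate, evaluated per instrument setup used during the night, would assemble the daytime calibration queue described above.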
Although the QC1 parameters computed and controlled by QC Garching are available via our Web pages, they are not easily associated with the calibration products available from the ESO Science Archive. By the end of this year, we hope to have a new QC1 parameter database within the Archive domain. Once this database exists, it should be possible for users to retrieve the QC1 parameters associated with the calibration products they are retrieving from the Archive. This is particularly important in the context of Virtual Observatory development.
The calibration data flowing through QC Garching contain a rich but largely unexploited reservoir of information about Cerro Paranal as a site. QC Garching, in collaboration with other groups within ESO, has started several projects to process this information and make it available to our customers. For example, this year we will publish a high signal-to-noise, high resolution sky atlas extracted from many hours of UVES observations, as well as a study of optical sky brightness as a function of lunar phase, lunar distance, time after twilight, etc., derived from FORS data. Possible future projects include the creation of lists of faint, secondary photometric standards for FORS and ISAAC, in collaboration with Paranal Science Operations.
For historical reasons, our QC web pages (http://www.eso.org/qc) are implemented in a very heterogeneous way. We are reorganising and revising these pages to make them more homogeneous across instruments, and to make it easier for our users to find the information they need.
Of course, our main priority this year is to establish regular QC operations for the latest VLT instruments: NACO, VIMOS, and FLAMES, as well as extending our process to the VLT Interferometer complex. These instruments introduce many new and complex modes: optical interferometry, adaptive optics imaging, high-density multi-object spectroscopy with slits and fibers, and integral field spectroscopy. The underlying, detector-based health and wellness QC processes are essentially extensions of our current process, but the development of a higher-level QC process will be more challenging.
As mentioned above, the VLT QC revolves around calibration data. Most quantitative QC is done using calibration products, e.g. dispersion solutions or master flat fields. It is the responsibility of DFO Garching to produce such calibration products and then re-use them in a number of ways:
When a Service Mode run is completed, DFO creates and delivers a standard data package to the run Principal Investigator. This data package contains all the raw science and calibration data, pipeline science and calibration products when available, and a variety of supporting listings and reports. Technical support (e.g. media manufacturing) is provided by the ESO Science Archive team.
Last but not least, DFO maintains extensive documentation about what we do and how we do it, on our QC Web pages: http://www.eso.org/qc/. Our detailed descriptions of how science and calibration data are processed using the current generation pipelines may be particularly interesting to users.
Acknowledgements. The QC process described here is the result of the joint work of the QC Garching team which is constituted, apart from the authors, by Wolfgang Hummel, Roberto Mignani, Paola Sartoretti, and Burkhard Wolff. We also thank our past DFO colleagues Paola Amico, Ferdinando Patat, and Bruno Leibundgut, and all our PSO colleagues, especially Andreas Kaufer.
Figure 1. The QC and trending pyramid
Figure 2. Quality plot for a UVES FORMATCHECK frame. Such frames are taken daily to control the proper adjustment of the gratings and cross-dispersers. Main focus here is the proper clustering of the line positions found (boxes 1 and 2 with the difference between predicted and found line positions) and the proper coverage of all orders with identified positions (box 3).
Figure 3. This web interface (http://www.eso.org/qc/UVES/qc/qc1.html) connects to the QC1 data of UVES. The user may view trending plots, and download the corresponding ASCII data. A quick-look panel for the current period links to all current trending plots, i.e. those which are relevant for the present instrument health.
Figure 4. Measured thermally induced drifts of UVES grating #4, without (left), and with (right) thermal compensation in Y direction. The QC1 trending data have been used to establish the coefficients for the automatic compensation of thermal motion in Y (cross-dispersion) direction.
Figure 5. The transmission of the CuSO4 filter used for reducing scattered light in the blue arm of UVES dropped significantly in July 2001. This was only discovered in November 2001, when the corresponding trending procedure had been established. An inspection of that filter verified its poor state: its coating was partly destroyed by humidity. Its replacement in December 2001 has improved the efficiency considerably, which is clearly visible in the trending plot.
Figure 6. Image quality of FORS1 (width of stellar images in arcsec) versus DIMM seeing. Input data have been collected from pipeline-processed FORS1 science images in filters UBVRI. Correction factors have been applied for wavelength and airmass. Green dots mark high-resolution collimator data, black stands for standard-resolution. The broken line indicates FORS IQ = DIMM. The red line is a fit to the data. FORS1 image quality is on average better than DIMM seeing above 0".8.
Figure 7. Three years of FORS1 zeropoints in the V band. Major interventions, causing steps in the slope, have been: mirror recoatings in February 2000 and March 2001, sudden degrading of main mirror due to rain in February 2001. The move to UT3/Melipal in August 2001 is invisible in this plot.
Figure 8. ISAAC photometric zeropoints for Period 68, in photometric bands Js (green squares), J (red diamonds), H (blue crosses), and K (asterisks). Horizontal axis is MJD-OBS, vertical axis is zeropoint in magnitudes. Last civil date on the plot is 2002-04-01.