ESO Phase 3 Data Release Description
MUSE-DEEP science products
Abstract. This is the release of reduced deep IFU datacubes from the MUSE spectrograph, taken in the Wide Field Mode. MUSE, the Multi-Unit Spectroscopic Explorer, is an Integral Field Spectrograph located at the VLT UT4 telescope. It has a modular structure composed of 24 identical IFU modules that together cover a 1 square arcmin field of view (FOV). The instrument covers almost the full optical wavelength range with a mean spectral resolving power of 3000. Spatially, the instrument samples the sky with 0.2 arcsecond spatial pixels in the currently offered Wide Field Mode with natural seeing (WFM-NOAO), and, since 2017, also assisted by the UT4 AO system GALACSI (WFM-AO).
Each deep datacube is combined from observations across OBs (Observing Block: a single pointing on the sky and the fundamental unit of the VLT observations). Where multiple visits of the same target exist, with multiple OBs, the deep datacube combines the input files from these OBs with the goal of reaching the maximum possible depth of the observations. There are also many targets that are visited by a single OB only; for those, no deep datacube exists. Therefore, the MUSE and the MUSE-DEEP releases are generally complementary. We have successfully combined deep datacubes from slightly more than 120 input files. The deepest datacubes represent a total integration time of up to 30 hrs, some with even deeper (smaller) parts.
This release is an open stream release. The release covers the two MUSE Science Verification periods in June and August 2014, and data from the regular MUSE operations which started in September 2014. Data from the AO Science Verification period in August and September 2017 are also included. Depending on the availability of an end-of-run signal, new data are processed within a month or two after that signal, or with a larger delay in some cases.
The data have been reduced with the MUSE pipeline, version muse-1.6.1 and higher. See Weilbacher et al. 2012 (http://adsabs.harvard.edu/abs/2012SPIE.8451E..0BW) for a description, and Weilbacher et al. 2016 (http://ascl.net/1610.004) for the code reference. The data reduction has two steps: removal of instrument signature, and combination of all products from that step into the deep datacube. Resampling has been done once, at the latest step. Error propagation is the same as for the OB datacubes. Sky correction is also the same, except for the case of crowded fields (globular clusters) where no sky correction is applied.
The Quality Control Group at ESO processes the data in an automated process. In an initial step there is an interactive selection of programmes and candidate targets. Then, each observation is pipeline-processed with time-matching, quality-controlled, certified and archived master calibrations. The reduction process is largely automatic. There is an automatic scoring process for the quality control, and a semi-automatic review and certification process for the data products, focusing on non-zero scores.
The data format follows the ESO science data products standard for datacubes (under ‘Quick links’, ‘ESO SDP standard’, ‘Integral Field Spectroscopy: 3D Data Cubes’) and is the same as for the OB datacubes.
This data release offers data products which are considered to be ready for scientific analysis, i.e. with instrument and atmospheric signatures removed, calibrated in physical units and including error estimates.
Disclaimer. Data have been pipeline-processed with the best available calibration data. However, please note that the adopted reduction strategy may not be optimal for the original scientific purpose of the observations, nor for the scientific goal of the archive user. There might be cases where the selection of input data was not optimal to reach e.g. the highest possible spatial resolution.
This release description describes the specific aspects of the MUSE-DEEP processing, while the aspects common with the MUSE release are mentioned only briefly for conciseness. Their details can be found in the MUSE release description.
This release is a stream release. The data are tagged "MUSE-DEEP" in the ESO archive user interface.
The release starts with the two MUSE Science Verification periods in June and August 2014, and includes data from the regular MUSE operations which started in September 2014. When a signal is available that a run has been finished, new data are processed and added a month or two after that signal. If no such signal is available, the delay can be half a year or longer.
Typically, new data are processed once per period, with a delay of about a year. Some datasets have a longer waiting time because it is not always obvious when the collection of data for a given target is finished. This is particularly true for data from GTO runs, carry-over runs and Large Programmes covering several periods.
Although we try to be as careful as possible with the selection of completed datasets, rare cases might occur where data collection continues after our deep datacube has been processed and archived. In that case we replace the previous version by a newer deeper version, with the older version still being available on demand.
The names of all input raw files are recorded in the header of the corresponding data product (header keywords PROVi).
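The PROVi keywords can be read programmatically. The following sketch, using astropy, builds a minimal header by hand for illustration (the two file names are made up); with a real product you would read the primary header of the downloaded cube instead.

```python
from astropy.io import fits

# Sketch: list the input raw files recorded in the PROVi keywords.
# With a real MUSE-DEEP product, use fits.getheader("<cube>.fits");
# here a minimal header stands in (file names are invented).
hdr = fits.Header()
hdr["PROV1"] = "MUSE.2014-06-24T05:27:24.528.fits"
hdr["PROV2"] = "MUSE.2014-06-24T06:01:11.334.fits"

prov_files = [hdr[key] for key in hdr if key.startswith("PROV")]
print(f"{len(prov_files)} input raw files")
for name in prov_files:
    print(" ", name)
```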
The purpose of the deep combination is the maximized signal contrast (SNR), with 2 related aspects:
In most cases, multiple visits of the same target have been designed by multiple OBs within the same programme, with the PI-intended goal to reach the maximum depth of the observations. In a few cases, we have found multiple visits designed by different programmes. While many of them are still designed by the same PI (in different periods) and represent the same logical programme, some of them are coming from different programmes and different PIs. We have decided to combine these "multi-PI" OBs in a single deep datacube. In these cases the data product might go even deeper than intended by the respective PIs. (We do not guarantee to have discovered all of these cases.)
All input data qualifying for the MUSE processing were reviewed for the MUSE-DEEP project.
Mode and setting selection is the same as for MUSE:
The WFM-AO mode has a gap without signal between about 580 and 596 nm (N range), and 576 and 601 nm (E range), respectively, due to laser-induced sodium lines. If existing, we have also co-added data taken in AO and NOAO modes. This is justified because in general these programmes are designed to have matching seeing constraints. Note that in those cases there is no spectral gap (it is filled with NOAO data) but the SNR is lower across this range, and there might be steps in the spectral fluxes because of the different number of combined spectra within and outside the sodium range.
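When analysing spectra from such mixed AO/NOAO combinations, it can be useful to mask or down-weight the sodium range explicitly. A minimal numpy sketch (wavelengths in Angstrom; the 1.25 A step is the nominal MUSE spectral bin, and the notch limits shown are the approximate N-range values from above):

```python
import numpy as np

# Sketch: mask the laser-induced Na notch of WFM-AO data
# (N range, ~580-596 nm) before spectral analysis.
wave = np.arange(4750.0, 9350.0, 1.25)   # nominal MUSE sampling (A)
flux = np.ones_like(wave)                # placeholder spectrum

notch = (wave >= 5800.0) & (wave <= 5960.0)
flux_masked = np.where(notch, np.nan, flux)
print(f"{int(notch.sum())} spectral bins masked")
```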
We used the following information sources for the candidate selection:
The programme scan helped to identify the qualifying runs. We found that for the first year of MUSE operations about 50% of all programmes were advertised as going deep. By selecting all QC reports for those programmes (or runs) and sorting them by target name, we were able to safely identify all multi-OBs. In case of non-unique target names, or in complex situations where the targets were larger than the 1'x1' field of view of MUSE, the previews from the QC reports were used for a final decision.
Applied guidelines for the selection:
1. Seeing. Many combination candidates were taken in Visitor Mode (VM), in GTO time. Then, no OB grades are available, and the final selection of input files was based on an assessment of the measured seeing conditions. The rejection criteria we applied were relaxed; only strong deviations (i.e. by a factor of 2 or so) from the requested conditions were used for rejecting input candidates. This strategy is consistent with what we found in some PI publications.
In Service Mode, we applied the same criteria. Often we accepted OBs graded C, if that grade was only due to a "mild" violation of the seeing constraint. If there were other problems with the data, as documented in the OB comments, these were taken into account (if found applicable).
2. Photometry. For the deep combination, photometric conditions (CLR, THN, THK) were ignored. Photometric accuracy for data combined from different nights is not our goal. Precise photometric information can be derived from single-OB datacubes taken under photometric conditions.
3. Cosmetics, in particular satellite trails. We have rejected in a few extreme cases input files with strong satellite trails, but fainter ones were deemed acceptable since normally satellite trails (or generally transient sources) affect only a small portion of the FOV.
4. Background. We have trusted the scheduling decision at the telescope and have not rejected input candidates because of background criteria, with one exception: if the OB comment says "aborted due to increasing background", these data have been rejected.
5. Other issues. On an individual basis we have rejected exposures with nightlog comments like "aborted because of derotator issue", unless it turned out that the data are ok.
Previews from the MUSE processing. In the process of target and OB selection, we benefitted a lot from the information gathered with the OB-based combined datacubes from the MUSE project, so that we could apply our selection based on the full information of the FOV image and of the processing results.
Combination by criteria other than by target. In a few cases, the OB combination by target was inadequate, in particular for exposure time sequences, or if different pointings were collected in a single OB. These cases could be identified safely, and the final deep datacubes were then constructed using common pointings, and/or common exposure times.
Products. Any given input dataset (defined by target) consists of N OBJECT frames and M SKY frames, coming from at least 2 OBs. N must be at least 2, and its maximum value is limited to 125 (due to the 2 TB memory available). M is often zero (many deep observations have no dedicated SKY pointings). The product is always 1 DEEP COMBINED datacube per target.
Relation between MUSE and MUSE-DEEP releases. For the runs which do not attempt to go deep, the COMBINED datacube in the MUSE release is the final product. Likewise, there are runs which have some targets with deep observations and others with a single visit; for those single OBs, the COMBINED datacube in the MUSE release is the final product. Of course, if there is a SINGLE datacube only (one exposure in one OB), this is the final product. Therefore, both the MUSE and the MUSE-DEEP releases should be queried for datacubes of a given target or a given run. Only if there is a MUSE-DEEP datacube, the corresponding OB-based MUSE datacubes are in principle obsolete for analysis, but might still be useful for photometry, best-seeing analysis, multi-epoch variability studies and cross-checks. See Table 1 for an overview.
Table 1. Cases of input dataset definition
If you need access to the single datacubes that participated in a combined datacube, there is a special download channel for them, as described in the MUSE release description.
Multiple run IDs. Many MUSE programmes that go deep are split into different run IDs that need to be combined across periods. These data are unfortunately not marked by any metadata key to belong together. (ESO is offering the CONTAINER mechanism to mark OBs belonging together, but this feature is optional and not consistently used by PIs.) We had to use several fuzzy criteria to identify them, e.g. common target names, OB naming schemes, programme titles, etc. The final confirmation was often only possible by the QC report of the FOV image.
Find the detailed description of the recipes in the Pipeline User Manual (under the MUSE link), section 9 (recipe reference). Find the pipeline version used for this processing in the header of the product datacube, under "HIERARCH ESO PRO REC1 PIPE ID". The version for the first dataset was muse_1.6.1. Information about the MUSE pipeline (including downloads, manuals, cookbook) can also be found under the above URL. The MUSE pipeline has been written by Peter Weilbacher (see Weilbacher et al. 2012 http://adsabs.harvard.edu/abs/2012SPIE.8451E..0BW for a description, and Weilbacher et al. 2016 http://ascl.net/1610.004 for the code reference).
The QC pages contain further information about the MUSE data, their reduction and the pipeline recipes for calibration data. Monitoring of MUSE performance and quality parameters is provided under the Health Check monitor (select MUSE).
Reduction steps, overview. The data reduction uses a cascaded recipe scheme, with two main parts. It is the same for NOAO and AO data. AO data are reduced with the proper AO calibration data associated.
The first part works on individual input raw files. No combination is done at that stage. First, every input raw file (OBJECT or SKY) is pre-processed with the recipe muse_scibasic. Then, the SKY product files (if any) are further processed with the recipe muse_create_sky to create the SKY_LINES and SKY_CONTINUUM files for the later sky subtraction. The sky contribution is evaluated by considering the information on the instrument line spread function, which is contained in the LSF_PROFILES master calibration file. Next, the OBJECT product files are processed with the recipe muse_scipost, using the SKY products (if existing) for the sky subtraction. (Contrary to the MUSE project, the MUSE-DEEP release has no shallow datacubes based on SKY observations.)
After the muse_scipost step, all input OBJECT files have a PIXEL_TABLE product with the pixel coordinates stored in a table, and an IMAGE_FOV product (a 2D collapse) used for the alignment correction. These products can be considered as being free from instrumental artefacts (with known limitations). Therefore the next step is possible, the combination of data from potentially many OBs and different nights. This step aims at collecting as many signal photons as possible, while reducing the noise due to sky background and shot noise. The pixel-table format guarantees that the signal from every single pixel is preserved and not compromised by numerical binning at an early step.
In the second part of the science cascade, all PIXEL_TABLEs belonging together (as defined by the initial target selection) are combined. Two steps are necessary: first, the input IMAGE_FOVs are processed with muse_exp_align to measure the relative alignment of the input data, in order to detect and correct for possible alignment errors due to instrument wobble (see below). Then the input PIXEL_TABLEs are processed with muse_exp_combine, which applies the alignment correction and resamples the overlapping pixels in order to go deep. It is only at this last step that the input data are resampled. The output of this step is the COMBINED DATACUBE called DATACUBE_DEEP, and the combined IMAGE_FOV_DEEP. Find the overview of the recipes in Table 2.
Table 2. Overview of MUSE-DEEP science reduction cascade.
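The alignment idea can be sketched as follows: locate a reference source in each IMAGE_FOV and derive the relative offset between exposures. Note this is only an illustration; muse_exp_align performs full source detection and cross-matching, not a single-source centroid.

```python
import numpy as np

# Sketch of the exposure-alignment idea: measure the centroid of a
# reference source in each FOV image and derive the relative offset.
# (The actual recipe detects and matches many sources.)
def centroid(img):
    ys, xs = np.indices(img.shape)
    return (ys * img).sum() / img.sum(), (xs * img).sum() / img.sum()

ref = np.zeros((40, 40)); ref[20, 20] = 100.0   # reference exposure
off = np.zeros((40, 40)); off[22, 19] = 100.0   # wobbled exposure
dy = centroid(off)[0] - centroid(ref)[0]
dx = centroid(off)[1] - centroid(ref)[1]
print(f"measured offset: dy={dy:+.1f}, dx={dx:+.1f} pixels")
```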
Reduction steps, details. For the details about the reduction cascade we refer to the MUSE release description. We follow the same numbering scheme for easy reference, with annotations as required.
Part 1, single pixel-table.
1.2 muse_create_sky: same as for MUSE.
1.3 muse_scipost: same as for MUSE, except for the last step which is:
There is no resampling into a single datacube (since this can never be a final product for MUSE-DEEP).
The pipeline parameters for this recipe are set to their default values, except for the following parameters:
1.4 muse_scipost for SKY: not applied.
The processing method CROWDED has been implemented for the cases of crowded field observations (globular clusters) without SKY. Contrary to the MUSE release, these cases are known in advance for MUSE-DEEP. The corresponding MUSE datacubes suffer from an over-subtraction of the SKY background which is determined on the OBJECT data, with the level of over-subtraction depending on the prevailing seeing.
In this situation it seems a better strategy for MUSE-DEEP to not subtract sky at all. The data analysis of the final datacubes needs to be done with aperture photometry anyway.
Part 2, combined datacube.
2.1 In the second part the pipeline recipes work on the products (pixel-tables and FOV images) from all input files together. The recipe muse_exp_align is used to create a coordinate offset table for automatic exposure alignment. This step is particularly important for the deep processing since it corrects instrumental alignment errors which potentially are larger across OBs and across different nights than within a single OB.
In order to always have an alignment solution, the following pipeline parameters are used:
2.2 Finally the output OFFSET_LIST table from muse_exp_align and the pixel-tables are combined into the final combined datacube.
The pipeline log files for all steps are stored in the text file that is delivered with each datacube. While that information is technical, it might help with the understanding of the individual steps and might also serve as reference in case a user wants to redo certain reduction steps.
Master Calibrations used for data reduction. This is identical for MUSE and MUSE-DEEP. Check the MUSE release description.
Wavelength scale. The MUSE IFU products are wavelength calibrated. The wavelength scale is barycentric.
Telluric absorption. Telluric absorption lines have been corrected file by file and night by night with the STD_TELLURIC file that was derived from a standard star observation (the same as for the flux calibration). The other comments in the MUSE release description apply here as well. For the deep combination, it is not unusual to include observations from a considerable time span (90 days or more). The residuals of the corresponding telluric systems then do not overlap exactly in the barycentric rest frame, which might result in an additional broadening corresponding to +/- 30 km/s at most.
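The size of this smearing follows directly from the quoted velocity span. A short calculation of the wavelength shift corresponding to +/- 30 km/s at a few example wavelengths:

```python
# Sketch: wavelength smearing from combining exposures spread over
# months; telluric residuals, fixed in the topocentric frame, move
# by up to +/- 30 km/s in the barycentric frame.
C_KMS = 299792.458                       # speed of light (km/s)
for wave in (5000.0, 7600.0, 9000.0):    # example wavelengths (A)
    dlam = wave * 30.0 / C_KMS
    print(f"{wave:.0f} A -> smearing of +/- {dlam:.2f} A")
```

At the 7600 A telluric A-band this amounts to well under one Angstrom, i.e. less than one MUSE spectral bin.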
Flux calibration. All comments in the MUSE release description apply for MUSE-DEEP as well. For the DEEP combination scheme, it is clear that the goal is to optimize the SNR, while a photometric accuracy cannot be guaranteed. The quality of the photometry in a COMBINED datacube (if observed under photometric conditions) is likely better than in a DEEP datacube, and should therefore be retrieved from there. We have explicitly not suppressed any input file because of poor photometry.
Master calibration names and recipe parameters used for reduction. Check the MUSE release description.
The final MUSE-DEEP science data product has two 3D image extensions: the DATA extension containing the flux values, and the STAT extension containing the corresponding variance.
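A single wavelength plane can be extracted from the cube's DATA extension using the spectral WCS keywords. The sketch below uses a tiny synthetic cube for illustration; with a real product you would open the downloaded file with fits.open instead.

```python
import numpy as np
from astropy.io import fits

# Sketch: extract one wavelength plane from the DATA extension.
# Axis order of MUSE cubes is (wavelength, y, x); a synthetic cube
# stands in for a real product here.
data = np.zeros((10, 5, 5), dtype=np.float32)
hdu = fits.ImageHDU(data, name="DATA")
hdu.header["CRVAL3"] = 4750.0   # wavelength of first plane (A)
hdu.header["CD3_3"] = 1.25      # wavelength step per plane (A)

target = 4760.0                 # wavelength of interest (A)
k = int(round((target - hdu.header["CRVAL3"]) / hdu.header["CD3_3"]))
plane = hdu.data[k]
print("plane index:", k, "shape:", plane.shape)
```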
The following additional FITS file is delivered together with the MUSE-DEEP datacube: the combined white-light image IMAGE_FOV_DEEP, a 2D collapse of the datacube. It is useful for previewing the product file in image viewers like rtd.
In addition, there is an associated text file delivered that contains the combined pipeline logs with all executions steps for all participating input files, and also the OB grades and comments for them.
There is a set of png files that serve both as QC plot and as preview of the FOV. There is always one for the final deep datacube, and N corresponding ones if N single files participated. (Remember that these individual datacubes are NOT delivered.)
The products contain some additional header keywords related to the QC process. They are listed in Table 3.
Table 3. FITS keywords added
Master calibrations. All comments from the MUSE release description apply.
QC, review and certification process. The MUSE-DEEP datacubes have been reviewed and certified by a process involving both automatic scoring and human-supervised certification. Both the single products (output of muse_scipost) and the deep combined datacubes (output of muse_exp_combine) are exposed to the QC process.
For the intermediate single products, the QC system scores key parameters like
For the deep combined datacubes, the QC parameters are
The measured values are compared to reference values and scored. A non-zero score flags a potential issue. All deep combined datacubes are reviewed. QC comments are propagated to the datacube headers.
QC flag. As for the MUSE datacubes, the MUSE-DEEP datacubes have the header key "QCFLAG". It is composed of 10 bits (Table 4). The value 0 always means "OK, no concern". This schema is largely identical to the one for MUSE datacubes, except that their last bit, #11 (dataset completeness), has no meaning here. All comments about the score flags in the MUSE release description apply, except for:
Flag #10 refers to the alignment of the input data. Since the combination was always checked by eye, values 0 or 1 have no particular meaning and have been added for completeness only.
Table 4. Definition of QC flags. Find the up-to-date list under the URL http://www.eso.org/qc/PHOENIX/MUSE/score_bits_deep.txt.
QC plots and previews. The QC and preview plots have been originally developed as quick-look plots for the process quality control. It was felt that they might also be useful to the archive user. They are delivered as associated files along with the products. There are two types of plots:
Process quality control. The quality of the data reduction is monitored with quality control (QC) parameters, which are stored in a database. The database is publicly accessible and has a browser and a plotter interface.
Error propagation. This is the same as for MUSE datacubes and is described in their release description.
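The basic idea of the propagated variances carried in the STAT extension can be sketched for the simplest case of an unweighted mean of N overlapping pixels; the actual pipeline combination is more elaborate (exposure-time weights, resampling kernel).

```python
import numpy as np

# Sketch: variance of an unweighted mean of N overlapping pixels:
# var_out = sum(var_i) / N**2. Illustration only; the pipeline uses
# weighted combination during resampling.
var_in = np.array([1.0, 1.2, 0.9, 1.1])   # per-exposure variances
var_out = var_in.sum() / len(var_in) ** 2
print(f"combined variance: {var_out:.4f}")
```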
Limiting magnitude ABMAGlim. Each deep datacube has a QC parameter ABMAGlim. Its exact definition is described in the MUSE release description. The deep datacubes are expected to have a correspondingly higher value of ABMAGlim than the single or the OB-based datacubes, except for pathological situations like crowded fields.
In Figure 5 we display this QC parameter for all deep datacubes, versus their total exposure time. We have selected only values for single pointings (excluding datacubes with several, partly overlapping pointings), because the limiting magnitude is a concept assuming applicability across the entire field of view. We have also identified in this plot those datacubes with a background that is presumably not dominated by background noise:
They are plotted in red. These datacubes cannot be expected to have their ABMAGlim improved with increasing exposure times.
A general trend towards ABMAGlim increasing with total exposure time is clearly visible. There is some saturation in the ABMAGlim values; they do not go beyond about 26.5 mag. We believe that this is due to several effects which all have to do with the definition of this parameter, and likely not with the intrinsic quality of the datacubes. The definition of ABMAGlim refers to the narrow noise peak, as seen in Figure 10. As more and more input files are collected, the measured background noise becomes increasingly dominated by small residual gradients in the background. This effect would likely be less pronounced if the background noise were determined in small sub-windows (which is not an option for the processing concept based on the MUSE pipeline).
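The effect of residual gradients on a whole-FOV noise estimate can be illustrated with synthetic data: a mild background gradient inflates the global standard deviation, while estimates in small sub-windows stay close to the true pixel noise.

```python
import numpy as np

# Sketch: why a whole-FOV noise estimate saturates. A residual
# gradient inflates the global std, while sub-window estimates
# stay near the true pixel noise (sigma = 1 here).
rng = np.random.default_rng(1)
ny, nx, box = 300, 300, 50
gradient = np.tile(np.linspace(0.0, 5.0, nx), (ny, 1))
img = rng.normal(0.0, 1.0, (ny, nx)) + gradient

global_std = img.std()
local_std = np.median([
    img[y:y + box, x:x + box].std()
    for y in range(0, ny, box) for x in range(0, nx, box)
])
print(f"global std: {global_std:.2f}, "
      f"median sub-window std: {local_std:.2f}")
```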
As illustrated in Figure 6, the same parameters displayed for the single datacubes from the MUSE release show the systematic and expected trend. We have again marked the crowded or extended fields which are subject to the systematic effects. In particular the crowded fields get their background over-subtracted in the OB-based MUSE reduction scheme.
In Figure 9 we display the ABMAGlim values for a set of programmes designed to go deep, targeting at the Hubble UDF. One programme is collecting a total of 1 hour per pointing, the other one collects about 10 hours in each of 9 pointings. There is clearly the trend towards higher ABMAGlim values for longer exposure times.
In Figure 10 we demonstrate how the background noise peak narrows upon going deep. The FOV image of the deep datacube (right) shows how many faint sources rise out of the narrow noise floor which are not seen in the single (left) and the OB-based datacubes (middle). It also demonstrates that the sky residuals (at least partially) cancel out upon deep combination.
Mapping deep datacubes and total exposure time. In a few cases, deep exposures have been obtained for fields that are larger than the 1'x1' MUSE field of view. Often PIs have then designed OBs with e.g. four pointings that have some overlap. See a typical example in Figure 11. Whenever technically possible (in terms of total number of input files, currently limited to about 125) and reasonable, we have combined those pointings in one single datacube.
The MUSE pipeline does not provide exposure maps. For situations like the one sketched in Figure 11, it is straightforward to derive the exposure map. For more complex situations (like in Figure 12 and Figure 13) it is best to obtain an overview of the pointings from the preview plots of each input exposure. In such complex situations, the effective exposure time per pixel is a weighted average (EXPTIME). This is then also true for ABMAGlim.
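For a regular mosaic like the one in Figure 11, such an exposure map can be derived with a few lines of code. The sketch below assumes a hypothetical 2x2 mosaic of 1'x1' pointings with 50% overlap on a 1"/pixel grid; the 600 s per pointing is a made-up value.

```python
import numpy as np

# Sketch: exposure map for a 2x2 mosaic of 1'x1' pointings with
# 50% overlap, on a 1"/pixel grid (values are illustrative).
grid = np.zeros((90, 90))                    # 1.5' x 1.5' footprint
for y0, x0 in [(0, 0), (0, 30), (30, 0), (30, 30)]:
    grid[y0:y0 + 60, x0:x0 + 60] += 600.0    # one 1' pointing, 600 s
print(f"corner: {grid[0, 0]:.0f} s, centre: {grid[45, 45]:.0f} s")
```

The central region, covered by all four pointings, accumulates four times the single-pointing exposure time.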
Most deep maps are similar to the one from Figure 11, but some are more complex. In the following we sketch the most complex situations we have encountered so far. Figure 12 shows a mapping like in the previous figure, with an additional central pointing. For this deep datacube, the exposure map becomes a bit complex. It can be derived by compiling the individual FOV plots.
In Figure 13 we illustrate another configuration with a 3x3 grid and an additional deep exposure. With a total of 275 input files we were unable to process all of them into a single deep map. We could come close to the ideal solution with 5 deep datacubes for pointings UDF-03, 06, 07, 08, and 09, plus one deep datacube combining UDF-02, 04, 05, and 10, and a final one combining UDF-01 and 10. Note that in this exceptional case we have used the photons from pointing UDF-10 twice, a situation which is so far unique within the MUSE-DEEP release.
General. Files known from the MUSE release to have issues like guiding errors, derotator problems etc., have not been selected for the MUSE-DEEP datacubes.
Misalignment. While all deep datacubes have been checked visually for misalignment (at the IMAGE_FOV level), and while there are also automatic checks, there is a non-zero chance that cases of misalignment have been overlooked. In Figure 14 below is an example how subtle this effect can be. The double sources might be overlooked easily, and the ring-like signature becomes visible only with the brightness thresholds set properly. We strongly recommend visual checks for this issue. You may want to point your viewer (e.g. ds9) to a wavelength where (redshifted) emission lines become visible, which might give a brighter signal than continuum sources, and also choose the dynamic range appropriately.
Post-pipeline removal of sky lines. See the MUSE release description.
Analysis software package. See the MUSE release description.
Working with pipeline log files. See the MUSE release description. In addition, the log file has a section 3 at the end ("Selection file for this combined datacube"). It lists the products of the selected input OBs, with the OB IDs, OB names, the user-defined ambient constraints for the seeing ("AMBI_REQ"), the OB grades and the OB comments. The listed pipeline product names are the names of the COMBINED datacubes that are also available as MUSE datacubes. Finally, all raw file IDs used for the deep datacube are listed.
The primary MUSE-DEEP product is the 3D datacube:
Each product has exactly one ancillary FITS file:
Furthermore the following non-FITS files are delivered with each datacube:
The following naming convention applies to the ORIGFILE product: e.g. the name
The ancillary files have the following ORIGFILE names:
Table 5. Naming conventions of ANCILLARY files
The user may want to read the ORIGFILE header key and rename the archive-delivered FITS files accordingly.
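This renaming step can be scripted. A sketch using astropy (it assumes every *.fits file in the download directory carries the ORIGFILE keyword in its primary header; adapt as needed):

```python
import os
from astropy.io import fits

# Sketch: rename archive-delivered FITS files to their ORIGFILE
# names. Assumes all *.fits files in the directory carry ORIGFILE
# in the primary header.
def rename_to_origfile(directory="."):
    for fname in os.listdir(directory):
        if not fname.endswith(".fits"):
            continue
        path = os.path.join(directory, fname)
        origfile = fits.getheader(path).get("ORIGFILE")
        if origfile and origfile != fname:
            os.rename(path, os.path.join(directory, origfile))
```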
The MUSE datacube product has two 3D image extensions: the DATA extension with the flux values, and the STAT extension with the corresponding variance.
The typical size of a deep datacube is 3-5 GB if it was collected from one pointing only (with small jitter offsets). The size grows in proportion to the number of non-overlapping pixels. The larger values apply to datacubes with orientations inclined with respect to the RA/DEC grid.
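The quoted size follows from the cube geometry. A rough estimate, assuming two float32 3D extensions (DATA and STAT) over the full wavelength range at the nominal spatial and spectral sampling:

```python
# Sketch: rough size of a single-pointing deep cube with two
# float32 3D extensions (DATA and STAT).
nx = ny = 300                        # 1' x 1' at 0.2" spaxels
nwave = int((9350 - 4750) / 1.25)    # spectral planes
size_gb = 2 * nx * ny * nwave * 4 / 1024**3
print(f"~{size_gb:.1f} GB")
```

This lands at the lower end of the quoted 3-5 GB range; jitter offsets and inclined orientations enlarge the non-overlapping footprint and hence the file.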
According to the ESO data access policy, all users of ESO data are required to acknowledge the source of the data with an appropriate citation in their publications. Find the appropriate text under the URL http://archive.eso.org/cms/eso-data-access-policy.html .
All users are kindly reminded to notify Mrs. Grothkopf (esodata [at] eso.org) upon acceptance or publication of a paper based on ESO data, including bibliographic references (title, authors, journal, volume, year, page numbers) and the observing programme ID(s) of the data used in the paper.