Spatial Coverage (2d):

a. Spatial Coverage has so far been described with a Polygon. That is a detailed representation of the region of sky embraced by a data product, and it is fine for describing the coverage, e.g. in order to plot it. But it is not enough: what if a user is querying for all products with a Field of View bigger than X arcmin?

b. Typically the coverage of an archived observation (e.g. ESO, HST) is described in terms of a field of view (fov_x, fov_y), a roll angle, and the coordinates of the pointing associated with a given reference pixel. This works because a detector shape is approximately rectangular. But it is not enough either. What if there is severe geometric distortion (see HST/ACS)? (The rectangular shape of the detector maps to a rhomboidal shape on the sky.) What if the data product is a mosaic of several dithered observations? (Depending on the number of observations and on the offsets, the final product will have a very complex shape.)

A more complete Spatial Coverage description is required to satisfy these different points of view. Hence, Spatial Coverage should contain both a detailed (vectorial) and a coarse (scalar) representation of the FoV: the first to be used in a vectorial context (e.g. plotting, intersecting), the second in a scalar context (e.g. queries):

1) A Polygon (or other shape)
2) one of the two (or both?):
   i. A representative number indicating the overall Field of View (e.g. FoV_ref_value = sqrt( fov_x**2 + fov_y**2 ))
   ii. A rectangular representation of the field of view in terms of (fov_x, fov_y, roll_angle, pointing_coords, reference_pixel), to be used when (2.i) would be misleading, e.g. for very elongated products (a la the Groth strip).

Astrometric accuracy:

An overall estimate of the astrometric accuracy must be given. It could be as simple as providing the typical astrometric accuracy of the catalogue used to derive the astrometry of the product, e.g. if USNO was used: 0.25 arcsec. Each vertex in the polygon above is affected by this error. More complex cases could exist: suppose an image cutout service returns an image whose pixels come from two different survey stripes. Different stripes might have different astrometric accuracies, hence the product returned by the cutout service is characterised by different astrometric accuracies in different parts of the image. (Maybe the cutout service should not provide such images.)

Spatial Coverage: List of required quantities

Polygon (as a list of (e.g. RA, DEC) pairs, in a given reference frame)
FoV diameter (as a single number, in arcmin)
FoV (fov_x, fov_y)
Pointing (WCS, including CTYPEs, CRVALs, CRPIXs, and Roll Angle)

Spatial Sensitivity:

The Spatial Coverage has to be complemented with the actual description of the sensitivity (to be discussed in a different chapter). The idea is that not the whole FoV is necessarily seen with the same sensitivity: within the FoV there will be regions suffering from vignetting, regions whose S/N differs because the product is a mosaic of dithered observations, etc. But a fairly simple first-order approximation might be to include the upper and lower limits for the flux (e.g. the saturation level and the limiting flux at a given S/N level). The accuracy in the determination of the limiting flux is to be provided; it depends on other quantities like the ZeroPoint of the image (usually estimated using standard stars) and the AirMass for a ground-based observatory. This accuracy is the Photometric accuracy, and could be (to first order) just a number in flux or magnitude units.
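As an illustration of the two Spatial Coverage representations discussed above (the scalar FoV_ref_value and a polygon derived from the rectangular description), here is a minimal Python sketch. The function names are invented for this note, and the sky projection is a flat-sky (tangent-plane) approximation: it ignores geometric distortion and breaks down near the poles or for large fields.

    import math

    def fov_diameter(fov_x, fov_y):
        """Scalar FoV estimate: FoV_ref_value = sqrt(fov_x**2 + fov_y**2), same units as the inputs."""
        return math.hypot(fov_x, fov_y)

    def rectangular_footprint(ra0, dec0, fov_x, fov_y, roll_angle):
        """Approximate polygon (list of (RA, Dec) pairs, degrees) for a rectangular FoV
        of size fov_x x fov_y (degrees), centred on the pointing (ra0, dec0) and rotated
        by roll_angle (degrees).  Flat-sky approximation: no geometric distortion,
        small field, pointing away from the celestial poles."""
        theta = math.radians(roll_angle)
        cos_t, sin_t = math.cos(theta), math.sin(theta)
        corners = [(-fov_x / 2, -fov_y / 2), (fov_x / 2, -fov_y / 2),
                   (fov_x / 2,  fov_y / 2), (-fov_x / 2,  fov_y / 2)]
        polygon = []
        for dx, dy in corners:
            x = dx * cos_t - dy * sin_t                 # rotate the corner by the roll angle
            y = dx * sin_t + dy * cos_t
            ra = ra0 + x / math.cos(math.radians(dec0)) # undo the cos(Dec) compression in RA
            dec = dec0 + y
            polygon.append((ra, dec))
        return polygon

    # e.g. a 10' x 8' field pointed at (RA, Dec) = (150.0, 2.2) deg with a 30 deg roll angle
    print(fov_diameter(10.0, 8.0))                            # ~12.8 arcmin
    print(rectangular_footprint(150.0, 2.2, 10/60, 8/60, 30.0))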
Spatial Sensitivity: List of required quantities

Flux Upper Limit (saturation level)
Flux Lower Limit (limiting flux/magnitude)
Photometric info (zeropoint, airmass)
Photometric accuracy

Spatial Resolution & Sampling:

Light is collected by a telescope, after having (or not having) passed through the atmosphere, and focused onto a detector. The detector is then read out. This apparently simple process brings various distortions and effects into play:

i.- Seeing (atmospheric). Usually monitored by the observatory (DIMM seeing). Typically provided as an average value, sometimes with an error: (AvgSeeing, ErrSeeing).
ii.- Instrumental Point Spread Function (IPSF). Ideally the IPSF would be the Airy PSF {1.22*Lambda/D}. Sometimes the PSF is position dependent -> IPSF = IPSF(x,y).
iii.- Pixel Response Function (also position dependent -> PRF(x,y)).
iv.- Sampling on an equally-spaced grid. The grid is characterised by the pixel scale, which depends on the focal length and the physical pixel dimensions, and is usually given in seconds of arc: pixel scale := (px, py) (the X and Y sizes might differ).
v.- The readout process might rebin the pixels, hence affecting both the pixel scale and the signal-to-noise ratio.

The Spatial Resolution (sometimes referred to as the Quality of the image) could be described by providing a single quantity: the FWHM measured on the data product, which is the convolution of the various above-mentioned components. Usually the following heuristic formula is used:

FWHM = sqrt( Seeing**2 + IPSF**2 + [ (px+py)/2 ]**2 )
     = sqrt( Seeing**2 + (1.22*Lambda/D)**2 + [ (px+py)/2 ]**2 )

A real-world example, the HST/WFPC2 case (no seeing in space):

FWHM = sqrt( pixel_scale**2 + (1.22*Lambda/D)**2 )

which is adequate since the FWHM does not vary much within the FoV. More generically, a first-order approximation would be to provide the usual triplet (loValue, refValue, hiValue) to describe the PSF.

The Sampling can be described by providing the pixel scales (px, py), the actual number of pixels in both X and Y (e.g. NAXIS1, NAXIS2), and the ratio between the Spatial Resolution FWHM and the pixel scale, to know whether the image is undersampled or not. Electronic rebinning is also to be provided.
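A minimal Python sketch of the heuristic FWHM estimate and of the sampling check just described. The function names and the threshold of ~2 pixels per FWHM (a common Nyquist-style rule of thumb) are illustrative assumptions, not prescriptions of the model:

    import math

    ARCSEC_PER_RADIAN = 206264.8

    def airy_fwhm(wavelength, diameter):
        """Diffraction term 1.22*Lambda/D, with Lambda and D in the same units; result in arcsec."""
        return 1.22 * wavelength / diameter * ARCSEC_PER_RADIAN

    def combined_fwhm(seeing, ipsf, px, py):
        """Heuristic FWHM (arcsec): quadratic sum of seeing, instrumental PSF and mean pixel scale."""
        return math.sqrt(seeing**2 + ipsf**2 + ((px + py) / 2.0)**2)

    def is_undersampled(fwhm, px, py, pixels_per_fwhm=2.0):
        """True if the FWHM spans fewer than ~2 pixels (assumed rule of thumb)."""
        return fwhm / ((px + py) / 2.0) < pixels_per_fwhm

    # e.g. 0.8" seeing, Airy core of an 8.2 m telescope at 550 nm, 0.2"/pixel sampling
    ipsf = airy_fwhm(550e-9, 8.2)              # ~0.017 arcsec
    fwhm = combined_fwhm(0.8, ipsf, 0.2, 0.2)  # ~0.82 arcsec
    print(fwhm, is_undersampled(fwhm, 0.2, 0.2))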
Spatial Resolution & Sampling: List of required quantities

Seeing
ErrSeeing
PSF_FWHM (loValue, refValue, hiValue) [wavelength and position dependent]
Sampling (aka Detector grid):
  Original pixel scales (px, py)
  Binning Factor (electronic, e.g. 1x1, 2x2, etc)
  Bin Size (product of pixel scale and binning factor)
  Number of Bins (naxis1, naxis2)

Conclusions

All in all, by considering the Spatial Coverage/Sensitivity, Resolution and Sampling, the first-order required quantities to be included in the model are:

Polygon (as a list of (e.g. RA, DEC) pairs, in a given reference frame) -> other shapes can of course be considered
CoordinateSystem (CTYPEs) { for transformation (x,y) -> (skyCoords) }
CoordinatePointing (CRVALs) { for transformation (x,y) -> (skyCoords) }
CoordinateDetector (CRPIXs) { for transformation (x,y) -> (skyCoords) }
Roll Angle { for transformation (x,y) -> (skyCoords) }
FoV diameter (as a single number, in arcmin) { for query purposes }
Aperture (fov_x, fov_y) { for query purposes }
Astrometric accuracy (single number, in arcsec) { for characterisation }
Seeing { for characterisation }
ErrSeeing { for characterisation }
PSF_FWHM (loValue, refValue, hiValue) [wavelength and position dependent] { for characterisation }
Binning Factor (electronic, e.g. 1x1, 2x2, etc) { for S/N computation }
Bin Size (product of pixel scale and binning factor)
Number of Bins (naxis1, naxis2)
Flux Upper Limit (the saturation level)
Flux Lower Limit (the limiting flux/magnitude)
Photometric info (zeropoint, airmass)
Photometric accuracy (in delta magnitudes in the optical, better in flux)

A more detailed description would include various position- and wavelength-dependent quantities: PSF, PRF, Sensitivity maps, Exposure maps, etc.
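As a summary, the first-order quantities listed above could be collected into a single record. The following Python dataclass is only an illustrative sketch of such a grouping; the field names, types and units are assumptions for this note, not a normative binding of the model:

    from dataclasses import dataclass
    from typing import List, Tuple

    @dataclass
    class SpatialCharacterisation:
        # Coverage (vectorial and scalar)
        polygon: List[Tuple[float, float]]    # (RA, Dec) vertices, in a given reference frame
        ctypes: Tuple[str, str]               # CoordinateSystem (CTYPEs), for (x,y) -> skyCoords
        crvals: Tuple[float, float]           # CoordinatePointing (CRVALs), deg
        crpixs: Tuple[float, float]           # CoordinateDetector (CRPIXs), pixels
        roll_angle: float                     # deg
        fov_diameter: float                   # arcmin, for query purposes
        aperture: Tuple[float, float]         # (fov_x, fov_y), arcmin, for query purposes
        astrometric_accuracy: float           # arcsec
        # Resolution and sampling
        seeing: float                         # arcsec
        err_seeing: float                     # arcsec
        psf_fwhm: Tuple[float, float, float]  # (loValue, refValue, hiValue), arcsec
        binning_factor: Tuple[int, int]       # electronic, e.g. (1, 1), (2, 2)
        bin_size: Tuple[float, float]         # pixel scale * binning factor, arcsec
        n_bins: Tuple[int, int]               # (naxis1, naxis2)
        # Sensitivity
        flux_upper_limit: float               # saturation level
        flux_lower_limit: float               # limiting flux/magnitude
        zeropoint: float
        airmass: float
        photometric_accuracy: float           # mag (optical) or flux units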