OWL Concept study
Roberto Gilmozzi and Philippe Dierickx,
European Southern Observatory

This article has been published in the 100th issue of the ESO Messenger, June 2000. 


Introduction
Why 100-m?
Can we afford it (in terms of time and cost)?
Why 100m?
Feasibility issues: do we need an intermediate step?
Diffraction limit vs. seeing limit
Resolution, resolutely

OWL's performance
Interferometry

Items for the science case
Confusion about confusion
Star formation history of the Universe
Symbiosis with NGST
Measure of H
Supernovae at z ~ 10
Other high redshift Universe studies
High frequency phenomena
Nearby Universe
Extra-solar planets
Operational issues

Scalability: why not?
Telescope conceptual design
Top level requirements
Design considerations
Optics

Adaptive optics
Mechanics

Conclusions
Acknowledgements
References


Introduction

ESO is developing a concept for a ground-based, 100-m class optical telescope (which we have christened OWL for its keen night vision and for OverWhelmingly Large), with segmented primary and secondary mirrors, integrated active optics and multi-conjugate adaptive optics capabilities. The idea of a 100-m class telescope originated in 1997, when it was assessed that true progress in science performance after HST and the 8-10-m class Keck and VLT generations would require an order of magnitude increase in aperture size (a similar assessment had been made by Matt Mountain in 1996 [1]). The challenge and the science potential seemed formidable, and highly stimulating.

Extremely large telescopes are not a new idea: studies for 25-m class telescopes [2,3] date back to the mid-70s. Although these studies concluded that such telescopes were already technically feasible, the science case was not as strong as that permitted today by adaptive optics, and the underlying technologies were far less cost-effective than they are now. In the late 1980s, plans for a 25-m class telescope were proposed by Ardeberg, Andersen et al. [4]; by 2000 the concept had evolved into a 50-m class adaptive telescope [5]. Preliminary ideas for a 50-m concept were presented by Mountain et al. in 1996 [1]; studies for a 30-m scaled-up version of the Hobby-Eberly telescope were unveiled by Sebring et al. in 1998 and 1999 [6,7]; and plans for the 30-m California Extremely Large Telescope (CELT) were presented by Nelson et al. at the March 2000 SPIE conference in Munich [8].

As for OWL, initial efforts concentrated on finding suitable optical design and fabrication solutions. The emphasis on optics is evident in the first (1998) publication about the telescope concept [9], where it was shown that proven mass-production solutions for the telescope optics are readily available. From that point on, further studies progressed as rapidly as the scarcity of resources permitted, strengthening confidence in the concept. Several contributions [10-14] were made at the June 1999 workshop in Bäckaskog, Sweden, where, in particular, the basic concept of the mechanical structure was presented [12]. Industry showed astounding support for extremely large telescope concepts, with two major suppliers announcing [15,16] that they were ready to take orders. Two essential conclusions of this workshop were, first, that extremely large telescopes are indeed feasible, with experts arguing about solutions rather than about feasibility per se, and, second, that the future of high angular resolution belongs to the ground, thanks to adaptive optics.

Preliminary analyses have confirmed the feasibility of OWL's major components within a cost on the order of 1,000 million Euros and within a competitive time frame. A modular design allows a progressive transition from integration to science operation, and the telescope would be able to deliver full resolution and unequalled collecting power 11 to 12 years after project funding.

The concept owes much of its design characteristics to features of existing telescopes, namely the Hobby-Eberly for optical design and fabrication, the Keck for optical segmentation, and the VLT for system aspects and active optics control. The only critical area in terms of needed development seems to be multi-conjugate adaptive optics, but its principles have recently been confirmed experimentally, tremendous pressure is building up to implement adaptive capability into existing telescopes, and rapid progress in the underlying technologies is taking place. Further studies are progressing, confirming initial estimates, and a baseline design is taking shape. The primary objective of these studies is to demonstrate feasibility within proven technologies, but provisions are made for likely technological progress allowing either cost reduction or performance improvement, or both.

Why 100-m?

The history of the telescope (figure 1) shows that the diameter of the "next" telescope has increased slowly with time (for glass-based reflectors, the slope over the last century corresponds to a factor-of-two increase every ~30 years: e.g. Mt Wilson, Mt Palomar, Keck).

The main reason for this trend can be identified in the difficulty of producing the optics (both in terms of casting the primary mirror substrate and of polishing it). The advances in material production and in new control and polishing technologies of the last few decades, fostered in part by the requirements set by the present generation of 8-10 m telescopes, now offer the exciting possibility of considering factors much larger than two for the next generation of telescopes. And unlike in the past, they also offer the promise of achieving this without implying a lengthy (and costly) program of R&D.

At the same time, advances in adaptive optics (AO) bring the promise of being able to achieve diffraction-limited performance. Though still in its infancy, AO is growing very fast, pushed in part by customer-oriented applications. New low-cost technologies with possible application to adaptive mirrors (MEMs), together with methods like multi-conjugate adaptive optics (MCAO), new wave-front sensors and techniques like turbulence tomography, are already being applied to AO modules for the present generation of telescopes. Although the requirements for expanding AO technology to correct the wave front of a 100-m telescope are clearly very challenging (about 500,000 active elements, enormous requirements on computing power), there is room for cautious optimism. This would allow a spatial resolution of the order of one milliarcsecond, prompting the claim that high angular resolution belongs to the ground. Of course, this is valid only at wavelengths that make sense (i.e. 0.3 µm < λ < 2.5 µm for imaging, λ < 5 µm for spectroscopy).
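
The ~500,000-element figure quoted above follows from a simple scaling: roughly one correction element per atmospheric coherence patch (Fried parameter r0) across the aperture. The short sketch below (Python, with an assumed 0.7 arc second seeing at 500 nm; the numbers are illustrative, not OWL's actual error budget) reproduces the order of magnitude.

  # Order-of-magnitude estimate of the number of AO correction elements for
  # full correction in the visible; the seeing value and the
  # one-element-per-r0 sampling are illustrative assumptions.
  import math

  def fried_parameter(seeing_arcsec, wavelength_m=500e-9):
      """Fried parameter r0 from the seeing FWHM (FWHM ~ 0.98 * lambda / r0)."""
      seeing_rad = seeing_arcsec * math.pi / (180 * 3600)
      return 0.98 * wavelength_m / seeing_rad

  D = 100.0                    # telescope diameter [m]
  r0 = fried_parameter(0.7)    # ~0.15 m at 500 nm for 0.7" seeing
  n_elements = (D / r0) ** 2   # roughly one element per r0-sized patch

  print(f"r0 = {r0:.2f} m, correction elements ~ {n_elements:.0f}")   # ~5e5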

Can we afford it (in terms of time and cost)?

Another consequence of the recent advances in technology is that we can consider building a next generation telescope within a reasonable time. Since a large R&D phase is not required (with the exception of AO, whose development is in any case being driven right now by the requirements of the current generation of telescopes), 10- to 15-year timelines appear reasonable.

The cost issue is evidently one that needs to be addressed (even if a 50 or 100-m telescope is demonstrably feasible from the technical point of view, it will be impossible to build one unless the D^2.6 cost law can be broken). HET has demonstrated in practice that costs can be kept low (admittedly at the price of reduced performance). The introduction of modular design and mass-production (telescope optics, mechanics) is also a new and favorable factor. Based on this, and extrapolating the experience of the Keck (segmentation) and of the VLT (active control), current cost estimates range from 0.3 to 1 billion dollars (for the 30-m CELT and the 100-m OWL, respectively). These costs are large (though not as large as, say, a space experiment), but possibly within what a large international collaboration can achieve.
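
To see why the D^2.6 law must be broken, it is enough to extrapolate it: scaling from 8 m to 100 m multiplies the cost by a factor of several hundred. A minimal sketch, in which the reference cost of an 8-m class telescope is a purely illustrative assumption:

  # Extrapolation under the traditional D^2.6 cost law quoted in the text.
  # The 8-m reference cost (0.1 billion) is an illustrative assumption.
  def scaled_cost(cost_ref, d_ref, d_new, exponent=2.6):
      return cost_ref * (d_new / d_ref) ** exponent

  factor = scaled_cost(1.0, 8.0, 100.0)     # ~700x the cost of an 8-m
  print(f"scaling factor 8 m -> 100 m: {factor:.0f}")
  print(f"e.g. 0.1 billion x {factor:.0f} ~ {0.1 * factor:.0f} billion")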

From the point of view of "astronomical strategy", all this might also allow space and ground facilities to be optimized according to their natural location (e.g. optical/NIR astronomy from the ground, UV or thermal IR astronomy from space, etc), stressing their complementary rather than competitive roles. And this with the possibility of a reduction in "global" costs (the cost of HST would be enough to build and operate at least three OWLs…).

Why 100m?

The original starting point for the development of the OWL concept (at the time called the WTT, alternatively for Wide Terrestrial Telescope or Wishful Thinking Telescope) was twofold. On one side, a preliminary and naive science case (what telescope size is needed to do spectroscopy of the faintest sources that will be discovered by NGST). On the other, the interest in exploring the technological limitations in view of the recent advances, especially to see how far angular resolution could be pushed. In other terms: could the factor-of-two become an order-of-magnitude?

The progress both of the science case and of the design concept since the early days allows us to give some answers (albeit incomplete) to the question:

  • The HST "lesson" has shown that angular resolution is a key to advance in many areas of astronomy, both in the local and in the far Universe. Achieving the diffraction limit is a key requirement of any design.
  • Milliarcsecond resolution will be achieved by interferometry (e.g. VLTI) for relatively bright objects and very small fields of view. The science case (including the original ‘complementarity with NGST’ one) now requires, at the same resolution, both field (~ arcminutes) and depth (> 35th magnitude), i.e. filled apertures with diameters > 100m.
  • For diffraction-limited performance, the ‘detectivity’ for point sources goes as D^4 (both flux and contrast gain as D^2); see the sketch after this list. One could say that a 100m telescope would be able to do in 10 years the science a 50m would take 100 years to do!
  • Last but not least, technology allows it: the current technological limitation on diameter of the (fully scalable) OWL design is ~140m.
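
The detectivity scaling invoked in the list above can be spelled out: for background-limited, diffraction-limited imaging of a point source, the collected flux grows as D^2 while the PSF area shrinks as D^-2, so the time needed to reach a given signal-to-noise ratio scales as D^-4. A minimal sketch of that idealized scaling (not a full exposure-time calculation):

  # Idealized scaling of point-source observing speed with aperture diameter,
  # assuming background-limited, diffraction-limited imaging
  # (flux ~ D^2, PSF solid angle ~ D^-2, hence time to fixed S/N ~ D^-4).
  def relative_time_to_snr(d_ref, d_new):
      """Exposure time needed with d_new, relative to d_ref, at fixed S/N."""
      return (d_ref / d_new) ** 4

  print(relative_time_to_snr(100, 50))    # 16: a 50-m needs ~16x longer
  print(relative_time_to_snr(100, 10))    # 1e4: a 10-m needs 10,000x longer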

Feasibility issues: do we need an intermediate step?

Another question that arises often is whether we need an intermediate step to whatever size we think we should achieve for scientific reasons (in other words, whether we wish to maintain the ‘factor-of-two’ paradigm even if its technological raison d’être has been overcome). The debate has vocal supporters on both sides (we OWLers are obviously for going directly to the maximum size required by the science and allowed by the technology). "Accusations" of respectively excessive conservatism or excessive ambition are exchanged in a friendly way at each meeting about Extremely Large Telescopes (ELTs). The interpretation of where exactly technology stands and how much can be extrapolated is at the core of the issue. We think this (very healthy) debate will go on for some time yet, and will be the main topic of the OWL Phase A study which is underway (goal for completion: early 2003).

Diffraction limit vs. seeing limit.

Why the diffraction limit should be such a strong requirement for ELTs is yet another subject of debate. On this our position is very strong: we consider a seeing-limited ELT (deprecatingly named a "light bucket") a goal not worth pursuing. While it is clear that the atmosphere will not always be "AO-friendly" and that, therefore, concepts for instrumentation to be used in such circumstances should be developed, there are scientific as well as technical reasons to justify our position.

Typically, seeing-limited designs go together with wide-field (here wide means many arcminutes) and/or high spectral resolution (R > 50,000) requirements. Apart from the overwhelming role of the background in seeing-limited imaging (sky counts of thousands of photons per second per pixel for a 50m telescope), source confusion is a major scientific issue. From the technical point of view, building incredibly fast focal reducers, or high-resolution spectrographs with collimators the size of present-day telescopes, may pose technical challenges more extreme than building the telescope itself.

On the other hand, imagers for diffraction-limited telescopes need very slow f-numbers (50 or so, although admittedly here the challenge is to have enough detector area to cover a reasonable field, and to avoid severe saturation from ‘bright’ sources). Milliarcsecond slits would make the beam size of a high-resolution spectrograph comparable to that of UVES or HIRES (i.e. instrumentation could be considered "comparatively" easy in the diffraction-limited case).

In the seeing-limited case, a spectroscopic telescope (of say 25-30m and 5,000-20,000 resolution) could occupy an interesting scientific niche. Such a design is being considered as the natural evolution of the HET (Sebring et al.), and was the first to actually be called ELT (in other words, we have stolen the generic name from them. Another possibility for a generic name is Jerry Nelson’s suggestion of calling the future behemoths Giant Optical Devices, or GODs. The hint of hubris is quite clear…).

Resolution, resolutely.

Angular resolution and sensitivity are the highest priority requirements. They are also closely intertwined, as high resolution implies high energy concentration in the core of the Point Spread Function (it is not a coincidence that the Strehl Ratio is called Resolution by optical physicists and engineers).

Figure 2 crudely illustrates the effect of increased resolution by showing the same hypothetical 0.6 x 0.6 arc second² field as seen by a seeing-limited telescope under best conditions (FWHM ~ 0.2 arc seconds), by HST, by an 8-m diffraction-limited telescope and by OWL, respectively. Assuming the pixel size in the rightmost (OWL) image to be ~0.5 mas, the left frames have been convolved with the theoretical Point Spread Functions associated with each case. For the diffraction-limited images the exposure times have been adjusted to provide roughly the same total integrated intensity, taking into account collecting area. A corrective factor has been applied to the seeing-limited image to provide comparable peak intensity (this is due to the oversampling of the seeing-limited image).
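
The procedure behind such a comparison is a straightforward convolution of an ideal scene with the point spread function appropriate to each case. A minimal sketch is given below; Gaussian kernels stand in for the true seeing disc and diffraction PSFs (an approximation, not the PSFs actually used for figure 2), and the 0.5 mas pixel scale matches the OWL panel.

  # Sketch of a figure-2-type simulation: blur an ideal scene with the PSF
  # of each telescope. Gaussian kernels approximate the real PSFs.
  import numpy as np
  from scipy.ndimage import gaussian_filter

  PIXEL_MAS = 0.5                  # pixel scale of the ideal scene [mas]
  FWHM_TO_SIGMA = 1.0 / 2.355

  def fwhm_diffraction_mas(d_m, wavelength_m=500e-9):
      """Approximate diffraction-limited FWHM (~ lambda / D) in milliarcseconds."""
      return (wavelength_m / d_m) * 180 / np.pi * 3600e3

  def observe(scene, fwhm_mas):
      """Blur the ideal scene with a Gaussian PSF of the given FWHM."""
      return gaussian_filter(scene, sigma=fwhm_mas * FWHM_TO_SIGMA / PIXEL_MAS)

  rng = np.random.default_rng(1)
  scene = np.zeros((1200, 1200))                       # 0.6 x 0.6 arcsec at 0.5 mas/pixel
  scene[tuple(rng.integers(0, 1200, (2, 200)))] = 1.0  # 200 point sources

  seeing_limited = observe(scene, 200.0)               # 0.2 arcsec seeing
  owl = observe(scene, fwhm_diffraction_mas(100.0))    # ~1 mas with a 100-m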

Figure 2 also illustrates the fact that field size is a relative concept and should be evaluated in relation to its information content: the 0.6 x 0.6 arc second² field shown here becomes ~1,400,000 pixels when seen by OWL.
 
Figure 2. Resolution, from 0.2 arc seconds seeing to diffraction-limited with 100-m. All images are 0.6 x 0.6 arc second².

OWL's performance.

With ten times the combined collecting area of all telescopes ever built, a 100-m filled-aperture telescope would open completely new horizons in observational astronomy: going from 10m to 100m represents a "quantum" jump similar to that of going from the naked eye to Galileo’s telescope (see figure 1).

We have built a simulator of the performance of OWL, which can also be used for telescopes of different sizes (and compared with similar calculations presented at the March 2000 SPIE conference or at the Bäckaskog 1999 Workshop on Extremely Large Telescopes, e.g. Mountain et al.). The simulator uses the PSF produced by the most recent optical design iteration, and includes the typical ingredients (diffusion, sky and telescope background, detector properties, and as complete a list of noise sources as possible). The output is a simulated image or spectrum (see figure 3).

A magnitude limit for isolated point sources of V = 38 in 10 hours can be achieved assuming diffraction-limited performance (whether there are such isolated sources is a different question, see below). Comparing this performance with the predicted one for NGST shows that the two instruments would be highly complementary. The NGST would have unmatched performance in the thermal IR, while a ground-based 100m would be a better imager at λ < 2.5 µm and a better spectrograph (R > 5,000) at λ < 5 µm. Sensitivity-wise, the 100m would not compete in the thermal IR, although it would have much higher spatial resolution.
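
A stripped-down version of such a point-source estimate is sketched below, to show how a V ~ 38 limit can arise: only source and sky photon noise inside a diffraction-limited aperture are considered, and the photometric zero point, sky brightness, throughput and PSF core size are illustrative assumptions rather than the values used in the actual simulator.

  # Crude point-source S/N estimate for a diffraction-limited 100-m in V.
  # All numerical values below are illustrative assumptions.
  import math

  AREA = 6000.0            # collecting area [m^2]
  THROUGHPUT = 0.3         # atmosphere + telescope + instrument + detector
  ZP_PHOT = 9e9            # V = 0 photon rate [photons / s / m^2] over the V band
  SKY_MAG = 21.5           # dark-sky V surface brightness [mag / arcsec^2]
  T_EXP = 10 * 3600.0      # exposure time [s]

  def detected_rate(mag):
      """Detected photon rate [1/s] for magnitude mag (per arcsec^2 if a surface brightness)."""
      return ZP_PHOT * 10 ** (-0.4 * mag) * AREA * THROUGHPUT

  # PSF "core" taken as a circle of diameter ~2.4 lambda/D at 550 nm
  core_diam_arcsec = 2.4 * 550e-9 / 100.0 * 180 / math.pi * 3600
  core_area_arcsec2 = math.pi * (core_diam_arcsec / 2) ** 2

  def snr(mag):
      star = detected_rate(mag) * T_EXP
      sky = detected_rate(SKY_MAG) * core_area_arcsec2 * T_EXP
      return star / math.sqrt(star + sky)

  print(f"S/N at V = 38 in 10 h: {snr(38.0):.1f}")   # a few, i.e. a marginal detection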

In terms of complementarity, OWL would also have a synergistic role with ALMA (e.g. in finding and/or studying proto-planets) and with VLBI (the radio astronomers have been waiting for decades for us optical/IR people to catch up in spatial resolution!).

Interferometry.

Is interferometry an alternative to a filled aperture? The consensus seems to be that it is not. Interferometry has a clearly separate scientific niche: for similar baselines, its field of view (a few arcseconds) and (bright) magnitude limits are definitely not competitive with the predicted performance of a filled-aperture telescope. On the other hand, baselines of hundreds of meters, if not of kilometers (in space even hundreds of km, as in the NASA plans), might well be the future of interferometry. Looking for the details of comparatively bright objects at the micro-arcsecond level, looking for and discovering earth-like planets, and studying the surfaces of stars even further away are domains where interferometry will always be first. In a sense, it is a "brighter object" precursor for any filled-aperture telescope of the same size that may come in the future.

Items for the Science Case

The science case for the extremely large telescopes of the future is not fully developed yet. Some meetings have taken place on the subject, and more are planned (there will be at least one Workshop on this in 2000). However, it is difficult to think of a branch of astronomy that would not be deeply affected by the availability of a 50 or 100m telescope with the characteristics outlined earlier.

In any event, there are a number of questions that the Science Case should pose, and find answers to, which will affect the final set of requirements for telescopes like OWL. Do we need the angular resolution? Is 1 milliarcsecond too much, too little, enough? Is investing in AO research justified? Could we live with seeing-limited performance? Can we not? Do we need 100m? Are 50m enough? Are 30m? Are 20m? Should we push even further? What is a sensible magnitude limit? Is interferometry a better alternative or a precursor? Do we need the optical, with its tighter design tolerances and far more complex AO (especially since the faint/far Universe is highly redshifted)? Do we have a compelling science case? Is "spectroscopy of the faintest NGST sources" enough? Is "unmatched potential for new discoveries" relevant? Is "search for biospheres" too public-oriented? Indeed, do we need an ELT?

In the following we will discuss some areas where OWL could give unprecedented contributions. This is by no means supposed to be a complete panorama, but rather reflects some personal biases.

Confusion about confusion.

There is a widespread concern that ELTs may hit the confusion limit, thereby voiding their very raison d’être. Much of this concern is tied to observations obtained in the past, either from the ground or from space, with instrumentation whose angular resolution was very limited (e.g. the first X-ray satellites, or the very deep optical images in 2" seeing of the ‘80s). Recent developments have shown that whenever a better resolution is achieved, what looked like the confusion limit resolves itself into individual objects (e.g. the X-ray background, now known to consist mostly of resolved sources, or the HDF images, which show more empty space than objects).

Admittedly, there may be a confusion limit somewhere. However, the back-of-the-envelope argument that "all far galaxies are 1" across, there are about 10^11 galaxies and about 10^11 square arcseconds on the sky, therefore there must be a point where everything overlaps" fails when one resolves a square arcsecond into > 10^6 pixels (crowding may still be an issue, though). The topic however is fascinating (and tightly connected with Olbers’ paradox), and will be the subject of a future paper. For the purpose of this discussion, however, the only thing confusing about confusion is whether it is an issue or not. There is a clear tendency in the community to think that it is not.
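
The round numbers behind that counter-argument are easily checked (galaxy count and size as quoted in the text; the sky area follows from geometry):

  # Round-number version of the confusion argument: ~1e11 galaxies of
  # ~1 arcsec^2 each cover only a fraction of the sky, while each square
  # arcsecond is resolved into millions of pixels at milliarcsecond sampling.
  import math

  SKY_ARCSEC2 = 4 * math.pi * (180 / math.pi * 3600) ** 2   # ~5.3e11 arcsec^2
  covering_factor = 1e11 * 1.0 / SKY_ARCSEC2                # ~0.2
  pixels_per_arcsec2 = (1.0 / 0.5e-3) ** 2                  # 0.5 mas pixels -> 4e6

  print(f"sky covering factor ~ {covering_factor:.2f}")
  print(f"pixels per square arcsecond ~ {pixels_per_arcsec2:.0e}")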

Star formation history of the Universe.

This is an example of a possible science case which shows very well what the potential of a 100m telescope could be, although by the time we have one the scientific problem will most likely have been solved already.

The history of stellar formation in the Universe is today one of the ‘hot topics’ in astrophysics. Its goal is to determine which kind of evolution has taken place from the epoch of formation of the first stars to today. To do so, "measurements" of star formation rates are obtained in objects at a variety of look back times, and used to determine a global trend. These measurements are usually obtained by comparing some observed integral quantities of unresolved objects (typically an emission line flux) with predictions made by evolution models. Although the method is crude, results are being obtained and a comprehensive picture is starting to emerge.

With a telescope like OWL, what are today "unresolved objects" would be resolved into their stellar components. For example, one could see O stars out to a redshift z ~ 2, detect individual HII regions at z ~ 3, and measure SNe out to z ~ 10 (see below). Determining the star formation rates in individual galaxies would go from relying on the assumptions of theoretical models and their comparison with integrated measurements, to the study of individual stellar components, much in the way it is done for the "nearby" Universe.

Symbiosis with NGST.

This was the "original" science case for a 100m telescope, and runs much in the same vein as the case made by Matt Mountain [1] for a 50m telescope to observe the faintest galaxies in the HDFs. The symbiosis with NGST would however not only be of the "finder/spectrometer" variety (though much science would be obtained in this way), but, as explained above, also in terms of complementarity in the space of parameters (wavelength coverage, angular resolution, spectral resolution, sensitivity, etc). The feeling is that a science case to complement the NGST is a strong one, but cannot be the main case for a 100m telescope.

Measure of H.

Cepheids could be measured with OWL out to a distance modulus (m-M) ~ 43 (i.e. z ~ 0.8). This would allow the measurement of H and its dependence on redshift (not just H_0), unencumbered by local effects (e.g. the exact distance to Virgo). In fact, the distance to Virgo, and the value of H_0, would be determined as the "plot intercept" at t=0! There is an interesting parallel to be drawn here with HST to get a "feeling" for what crowding problems we could have. Crowding would start affecting the photometry of individual Cepheids at about this distance in much the same way it does for HST images of Virgo galaxies. In fact, we would be about 100 times further than Virgo with a resolution about 100 times better than HST (Cepheids are observed with HST mainly in the undersampled Wide Field chips).
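
A quick check of the (m-M) ~ 43 figure against the point-source limit estimated earlier, taking a bright, long-period Cepheid of M_V ~ -6 (an illustrative value):

  # Back-of-the-envelope check of the Cepheid case: apparent magnitude and
  # distance at distance modulus 43. The Cepheid absolute magnitude is an
  # illustrative assumption.
  def apparent_mag(abs_mag, dist_mod):
      return abs_mag + dist_mod

  def distance_pc(dist_mod):
      return 10 ** ((dist_mod + 5) / 5)

  print(apparent_mag(-6.0, 43.0))         # 37: still brighter than the ~38 limit
  print(f"{distance_pc(43.0):.1e} pc")    # ~4e9 pc, i.e. ~4 Gpc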

Supernovae at z ~ 10.

An "isolated", underluminous Type II supernova like SN 1987A would be visible at (m-M) ~ 53. Assuming that crowding and/or increased background would bring the limit to 50 (i.e. z ~ 10, the exact value depending on one’s favorite cosmology), we would still be able to detect any SN ever exploded out to that redshift (!).

Figure 5 shows model calculations of supernova rates assuming a 10^12 solar mass elliptical galaxy beginning star formation at z = 10. The rates are several dozen per year (i.e. ~ 0.3 per day!). Even for much less massive galaxies the rates are a few per year. This means that any deep exposure in a field < 1 arcmin² will contain several new supernovae.

Since these SNe will be at high redshift, the observed light curves will be in the rest-frame UV. This actually makes their identification easier, since Type II light curves typically last 12-24 hours in the UV: time dilation will lengthen the curves by a factor (1+z), making them ideal to discover. (Note that the optical light curves, intrinsically some months long, would last years due to dilation).

The study of SNe out to z ~ 10 (if indeed stars started forming at or before that redshift, which is not certain by any means) would give access to ~ 30% of the co-moving volume (i.e. mass) of the Universe (at present, through SNe we can access less than 3%). Star formation rates at such early ages would be a natural byproduct of these studies. Nearer SNe would be bright enough to provide "light bulbs" to study the intergalactic medium on many more lines of sight than those provided by other bright but less common objects, e.g. QSOs. And of course, although with lower rates and at "nearer" distances (their rate peaks at z_I ~ z_II - 2.5), the brighter Type I SNe will also contribute to the study.

Other high redshift Universe studies.

A telescope with the resolution and sensitivity of OWL would find some of its most important applications in the study of the furthest and faintest objects in the Universe: among many others, studies of the proto-galactic building blocks and of the dynamics of their merging into higher hierarchical structures. The possibility of probing even higher redshifts with Gamma Ray Bursts (if they exist at earlier epochs) is also very exciting, as they are intrinsically orders of magnitude brighter than even SNe.

High frequency phenomena.

Rapid variability is an area where the improvements brought by larger collecting areas can be truly enormous. The power spectrum of such phenomena is in fact proportional to the square of the flux, i.e. P ~ D^4. Dainis Dravins showed at the Bäckaskog Workshop that extremely large telescopes open a window on the study of quantum phenomena in the Universe which until now have only been observed in the laboratory.

Nearby Universe.

In the nearer Universe we have again a myriad of possible contributions. The detection of brown dwarfs in the Magellanic Clouds would make it possible to determine an accurate IMF for those galaxies. It would be possible to observe white dwarfs in the Andromeda galaxy and solar-like stars in galaxies in the Virgo cluster, enabling detailed studies of stellar populations in a large variety of galaxies. The environments of several AGNs would be resolved, and the morphology and dynamics of the inner parts nearest to the central black hole could be tracked and understood. If the rings around SN 1987A are a common phenomenon, they could be detected as far away as the Coma cluster. In our own galaxy, we could study regions like Orion at sub-AU scales, determining the interactions between stars being born and the parent gas. We would detect protoplanetary disks and determine whether planets are forming there, and image the surfaces of hundreds of stars, promoting them from points to objects. Unlike interferometry (which can also image stellar surfaces, but needs many observations along many baselines to reconstruct a "picture"), these observations would be very short, allowing the detection of dynamic phenomena on the surfaces of stars other than the Sun.

Extra-solar planets.


Finally, a critical contribution will be in the subject of extra-solar planets: not so much in discovering them (we expect that interferometry will be quite successful in this), but rather in their spectroscopic study. Determining their chemical composition and looking for possible biospheres will be among the great goals of the next generation of ELTs. Figure 6 shows a simulation of an observation of the Solar System at 10 parsecs (based on the PSF of an earlier optical design, and including the effect of micro-roughness and dust diffusion on the mirror) in which Jupiter and Saturn would be detected readily. Several exposures would be necessary to detect the Earth in the glare of the Sun. Sophisticated coronographic techniques would actually make this observation "easier" (or possible at larger distances).

Operational issues.

The sheer size of a project like OWL, or any other ELT project, makes it unlikely that the operational scenario would be similar to that of the current generation of telescopes. We believe that the current (mild) trend towards Large Programs (where the need for deep – i.e. long – exposures is combined with the statistical requirement of a large number of measurements) will evolve towards some sort of "Large Project" approach, similar to what happens in particle physics. In this sense, maybe even the instrumentation plan could be adapted to such an approach (e.g. a Project would develop the "best" instrument for its observations, and when it is over a new Project with possibly new instruments would take over). What we imagine is "seasons" in which OWL (or whatever) will image the surfaces of all ‘imageable’ stars, or study 10^5 SNe, or follow the dynamics of the disruption of a star by an AGN’s black hole. In other words, a series of self-contained programs which tackle (and hopefully solve!) well-defined problems, one at a time.

Scalability: why not?

The last two decades of the 20th century have seen the design and completion of a new generation of large telescopes with diameters in the 8 to 10-m range. To various degrees, the concepts developed on this occasion have concentrated on the feasibility of the optics, controlled optical performance and cost reduction, and have been quite successful in these endeavors.

The achievements of recent projects could hardly be summarized in a few lines, but we emphasize three major breakthroughs:

  • Optical segmentation (Keck).
  • Cost-effective optical and mechanical solutions (Hobby-Eberly).
  • Active optical control (NTT, VLT, Gemini and Subaru).
The lessons learned from these projects are, to some extent, already being implemented in a series of projects (e.g. GTC, SALT), but future concepts may quite naturally rely on a broad integration of the positive features of each approach. Perhaps the most far-reaching innovations have been brought by the Keck, with virtually unlimited scalability of the telescope primary optics, and by the VLT, with highly reliable and performance-effective functionality (active optics, field stabilization). Scalability was traditionally limited by the difficulty of casting large, homogeneous glass substrates, and progress over the last century has been relatively slow. Indeed, even the relatively modest size increase achieved by the most recent telescopes with monolithic primary mirrors would have been impossible without innovative system approaches (e.g. active optics) which relaxed the constraints on substrate fabrication.

Optical scalability having been solved, other limitations will inevitably apply. Taking only feasibility criteria into account, and modern telescopes being essentially actively controlled opto-mechanical systems, these new limitations may arise in the areas of structural design or control, or a combination of both. Our perception is that the fundamental limitations will be set by structural design, an area where predictability is far higher than in optical fabrication. It should be observed, however, that control technologies, although rapidly evolving towards very complex systems, are also crucial when it comes to ensuring that performance requirements are met efficiently and reliably. Reliability will indeed be a major issue for extremely large telescopes, which will incorporate about one order of magnitude more active degrees of freedom (e.g. position actuators) than existing telescopes. In this respect, however, the Keck and VLT performances are encouraging.

Although a major effort is still required to arrive at a consolidated design, it already appears that OWL is most likely feasible within currently available technologies and industrial capacity. Actually, the successive iterations of the opto-mechanical design indicate that OWL's diameter is quite probably below the current feasibility limit for a steerable optical telescope, which we estimate to be in the 130-150 meter range.

Adaptive optics set aside, OWL's actual limitation seems to be cost, which we constrain to 1,000 million Euros of capital investment, including contingency. Such a budget is comparable in scale to that of space-based projects, and it would be spread over a longer time scale. Additionally, it can reasonably be argued that progress in ground-based telescopes is broadly beneficial in terms of cost and efficiency, as it allows space-based projects to concentrate on, and be optimized for, specific applications which cannot be undertaken from the ground, for physical rather than technological reasons.

It is obviously essential that the concept allows a competitive schedule, which should be the case as the telescope could, according to tentative estimates, deliver unmatched resolution and collecting power well before full completion.

Telescope conceptual design

Top level requirements

The requirements for OWL correspond to diffraction-limited resolution over a field of 30 arc seconds in the visible and 2 arc minutes in the infrared (λ ~ 2 µm), with goals of 1 and 3 arc minutes, respectively. The telescope must be optimized for the visible and near-infrared wave bands, although the high resolution still allows some competitive science applications in the thermal infrared [14]. Collecting power is set to ~6,000 m², with a goal of 7,000 m².

The optical quality requirement is set to a Strehl Ratio > 20% (goal ≥ 40%) at λ = 500 nm and above, over the entire science field of view and after adaptive correction of atmospheric turbulence with a seeing angle of 0.5 arc seconds or better. We tentatively split this requirement into telescope and atmospheric contributions (how the two combine is sketched after the list below):

  • Strehl Ratio associated with all error sources except atmospheric turbulence ≥ 50% (goal ≥ 70%);
  • Strehl Ratio associated with the correction of atmospheric turbulence ≥ 40% (goal ≥ 60%).
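
Assuming statistically independent error sources, the total Strehl Ratio is, to a good approximation (extended Maréchal approximation), the product of the individual contributions, which is how the two numbers above combine into the top-level requirement:

  # Combination of the telescope and atmospheric Strehl contributions,
  # assuming independent error sources (total Strehl ~ product of Strehls).
  def combined_strehl(telescope, atmosphere):
      return telescope * atmosphere

  print(combined_strehl(0.50, 0.40))   # 0.20 -> requirement (SR > 20%)
  print(combined_strehl(0.70, 0.60))   # 0.42 -> goal (SR >= 40%)
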
It is not yet entirely clear what the field limitations of multi-conjugate adaptive optics are; preliminary analyses show that, under representative conditions, a system with three adaptive mirrors would provide an isoplanatic field of ~20 arc seconds in the visible; larger fields may require more complex adaptive systems.

Design considerations

We consider that the essential function of the system is to reliably deliver a minimally disturbed -in terms of amplitude and phase- wavefront to the science detector, over a specified field of view. As disturbances inevitably occur -atmospheric turbulence, telescope optics, tracking, etc.-, those must be either minimized or corrected, or both.

It is quite logical to distinguish between atmospheric and telescope disturbances, because of their very different spatial and dynamic properties, the former being arguably the most difficult to compensate. Therefore, we incorporate into the telescope concept dedicated adaptive modules, to be designed and optimized for correction of atmospheric turbulence in specified wave bands, and we request that the telescope contribution to the wavefront error delivered to the adaptive module(s) be small with respect to the wavefront error associated with atmospheric turbulence. In brief, we request that the telescope itself be seeing-limited. It should be noted that, in a purely seeing-limited mode where the relevant wavefront quality parameter is slope, the aperture size implies that fairly large wavefront amplitudes can be tolerated. For example, a wavefront tilt of 0.1 arc seconds over the total aperture corresponds to a wavefront amplitude of 48 microns peak-to-valley with OWL, whereas it would correspond to 3.9 microns with the 8-m VLT.
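
The two figures quoted above follow directly from geometry: a wavefront tilted by an angle θ over an aperture of diameter D has a peak-to-valley amplitude of θ·D (θ in radians). A one-line check:

  # Peak-to-valley wavefront amplitude produced by a pure tilt: P-V = theta * D.
  import math

  def tilt_pv_microns(tilt_arcsec, diameter_m):
      tilt_rad = tilt_arcsec * math.pi / (180 * 3600)
      return tilt_rad * diameter_m * 1e6

  print(f"{tilt_pv_microns(0.1, 100):.0f} microns")   # ~48 microns for OWL
  print(f"{tilt_pv_microns(0.1, 8):.1f} microns")     # ~3.9 microns for an 8-m VLT unit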

Taking into account the telescope size and some implied technology solutions (e.g. optical segmentation), we come to the unsurprising conclusion that the telescope itself should provide the following functions: phasing, field stabilization, and active optics, including active alignment. The case for field stabilization is very strong, as a "closed" co-rotating enclosure would be very costly and anyway inefficient in protecting the telescope from wind.

As pointed out earlier, we consider modern telescopes to be controlled opto-mechanical assemblies. The sheer size of OWL only emphasizes the need for a coherent system approach, with rational trade-offs and compromises between different areas, e.g. optical and structural designs. It is also essential that from the earliest stages the design incorporates integration, maintenance and operation considerations. Besides cost, the two essential reasons are construction schedule and operational reliability, the latter playing a critical role when it comes to telescope efficiency.

Optics

Several designs have been explored, from classical Ritchey-Chrétien to siderostat solutions. The shape of the primary mirror is the focus of a hot discussion in the community. Proponents of aspheric designs invoke the lower number of surfaces an aspheric primary mirror design would imply, and the progress of optical fabrication, which now allows cost-effective production of off-axis aspheric surfaces.

It does not, however, appear possible to provide the necessary telescope functions with two optical surfaces; field stabilization, in particular, would require a relatively small, low-inertia secondary mirror (in the 2 to 3-m range for effective correction with typical wind frequency spectra) and would therefore imply horrendous sensitivity to decenters. In order to minimize structure height, a small secondary also implies a very fast primary mirror design, thereby exacerbating fabrication and centering issues, and increasing field aberrations. A possible way around these constraints would be to allow a large secondary mirror and to re-image the pupil of the telescope in order to perform field stabilization with a conveniently sized surface. Unless the secondary mirror were concave -which implies a longer telescope structure- such a solution, however, raises considerable concerns as to the feasibility of this mirror. It also implies a larger number of surfaces, thereby eliminating the prime argument in favor of an aspheric primary mirror design.

The cost argument is particularly interesting, as it shows how much progress has been realized in optical fabrication over the last decade. There is rather consistent agreement that current technology -polishing of warped segments on planetary machines combined with ion-beam finishing- could lead to an increase of polishing costs for aspheric segments by about 50% -down from 300 to 500%- with respect to all-identical, spherical segments. This figure is however incomplete, as it does not take into account more stringent requirements on substrate homogeneity and residual stresses, which would lead to a cost overshoot far exceeding that of the pure figuring activities. Additionally, polishing of warped segments is intrinsically less deterministic hence less adapted to mass-production, and this solution leads to undesirable schedule risks.

Any trade-off must also incorporate mechanical constraints, and in particular the inevitable difficulty of providing high structural rigidity at the level of the secondary mirror. As will become evident later, this aspect has played a crucial role in the selection of the OWL baseline design.

The considerations outlined above point towards spherical primary and secondary mirror solutions. It should be noted that the trade-off is dependent on telescope diameter; cost considerations set aside, aspheric solutions are probably still superior as long as field stabilization does not require pupil re-imaging. The limit is probably in the 20 to 30 meter range, possibly more with active mechanics and suitable shielding from wind, but certainly well below 100-m.

We have selected a 6-mirror configuration [11,17] with spherical primary and flat secondary mirrors (figure 7). Spherical and field aberrations are compensated by a 4-mirror corrector, which includes two 8-m class active, aspheric mirrors, a focusing 4.3-m aspheric mirror and a flat tip-tilt mirror conveniently located for field stabilization. The primary-secondary mirror separation is 95 m, down from the 136 m of the first design iteration.

The diffraction-limited (Strehl Ratio ≥ 80%) field of view in the visible is close to 3 arc minutes and the total field is ~11 arc minutes. The latter, called technical field, provides for guide stars for tracking, active optics, and possibly phasing and adaptive correction with natural guide stars. A laser guide star solution would require a smaller technical field of view (~6-7 arc minutes) and lead to some design simplification.


It should be noted that the optical configuration is quite favorable with respect to mechanical design, as the secondary mirror is flat (hence insensitive to lateral decenters) and as the position and design space for the corrector mechanics permit high structural stiffness at this location. A sensitivity analysis has shown [17] that with a fairly simple internal metrology system the telescope could be kept in a configuration where residual alignment errors would be well corrected by active optics.

The primary mirror would be made of ~1600 hexagonal segments, ~2.0-m flat-to-flat, i.e. about the maximum size allowed for cost-effective transport in standard containers. No extensive trade-off has been made so far, but we rule out very large segments, as those would lead to unacceptably high material, figuring, and transport costs and would require substantial investment in production facilities in order to comply with a reasonable schedule. There are, indeed, strong engineering arguments in favor of relatively small segments, such as the 1-m ones proposed by Nelson et al. at the March 2000 SPIE conference in Munich. A certain relaxation is however possible with spherical segments, as the added complexity implied by the aspheric deviation -which increases quadratically with the aspheric segment size- disappears. Handling and maintenance would also benefit from a reduced segment size, although auxiliary equipment for automated procedures will be mandatory anyway.
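
As a rough consistency check of the segment count (hexagon packing only; gaps, edge segments and the central obstruction are ignored), ~2.0-m flat-to-flat segments and the ~6,000 m² collecting area quoted earlier imply somewhat under two thousand segments:

  # Segment size vs. segment count vs. collecting area, in round numbers.
  import math

  def hexagon_area(flat_to_flat_m):
      return math.sqrt(3) / 2 * flat_to_flat_m ** 2

  segment_area = hexagon_area(2.0)    # ~3.46 m^2 per segment
  print(1600 * segment_area)          # ~5500 m^2 from ~1600 segments
  print(6000 / segment_area)          # ~1700 segments for 6000 m^2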

The baseline solution for the mirror substrate is glass-ceramics and, according to suppliers, production within 6-8 years would only require duplication of existing production facilities [12]. A very promising alternative is Silicon Carbide, which would allow a ~75% mass reduction for the primary mirror with a conservatively simple lightweight design, and a mass saving of ~4,000 tons for the telescope structure. This technology is, however, not (yet?) demonstrated for mass-production; further studies will have to take place prior to final selection of the mirror technology.

Figuring would require three to four 8-m class polishing (planetary) machines, complemented with one or two 2-m class ion-beam finishing machines. It should be noted that 1-m class, diffraction-limited laser amplifier windows are currently produced [13,15] at a rate fully comparable to that needed for OWL.

Phasing of the primary and secondary mirrors relies conservatively on the same solution as the Keck one, i.e. position sensing combined with sensor calibration. An extensive summary of the mirror phasing techniques applied to the Keck telescopes is presented by Chanan [19]. Calibration is however more complex with OWL, as the primary and secondary mirrors must be phased separately. In the worst-case scenario, daytime calibration of one of the two mirrors would be required -in practice, interferometric measurements performed on the flat secondary mirror- while the other would be phased on the sky according to the scheme described by Chanan. We are also exploring on-sky closed-loop phasing techniques, which should provide a more efficient control of phasing errors. Quite a number of on-sky phasing methods have been proposed in the recent past; most are based on curvature sensing or on interferometric measurements of one kind or another. These methods are generally sensitive to atmospheric turbulence and require either short exposures or sub-apertures smaller than the atmospheric coherence length, thereby implying the use of relatively bright stars -or closing the adaptive loop before the phasing one. The actual limitations are, however, still to be assessed. A particularly attractive method, which should allow primary and secondary mirror phasing errors to be differentiated, is the one proposed by Cuevas et al. [20].

Adaptive optics

Attaining diffraction-limited resolution over a field of view largely exceeding that allowed by conventional adaptive optics is a top priority requirement for OWL. Conservative estimates [21] indicate that multi-conjugate adaptive optics [22] (MCAO) should allow a corrected field of view of at least 20 arc seconds in the visible, assuming a set of three adaptive mirrors conjugated to optimized altitudes. There is ongoing debate on the respective merits of a tomography-oriented correction strategy, followed by the Gemini team, and a layer-oriented one, proposed by Ragazzoni et al. A European Research and Training Network (RTN) has recently been set up, on ESO's initiative, to address the general issue of adaptive optics for extremely large telescopes.

In the visible, the implied characteristics of the adaptive modules (about 500,000 active elements on a 100-m telescope, a corresponding wavefront sampling and commensurate computing power) leave no doubt as to the technological challenge. Novel ideas about wavefront sensing (e.g. pyramid wavefront sensors) and spectacularly fast progress in cost-effective technologies which could potentially be applied to adaptive mirrors (MEMs or MOEMs), together with the strong pressure to achieve MCAO correction on existing 8-m class telescopes in the very near future, leave room for cautious optimism. Prototypes are under development -the Observatory of Marseille, in particular, is working towards a ~5,000 active element unit, based on a scalable technology, to be tested by 2003-2004.

Extensive discussions of adaptive optics aspects for OWL and extremely large telescopes are presented elsewhere [13,21-24]. Proposals have been made for MCAO demonstrators or even functional instruments to be installed within a fairly short time frame on the VLT and Gemini, respectively. However promising such developments may be, it is impossible, at this stage, to make any substantiated statement as to their outcome. Therefore, the telescope design incorporates the most conservative assumptions regarding the eventual technology solutions, which implies, in particular, a large field of view for reasonable sky coverage with natural guide stars. All attempts are made to avoid constraints on the design and correction range of the adaptive modules, which implies that the telescope must be able to deliver seeing-limited performance comparable to that of existing large telescopes without relying on adaptive correction.

Mechanics

Several mount solutions have been explored, including de-coupled geometries [12] based on fully separate structures for the primary and secondary mirrors. As was -to some extent- expected, the best compromise in terms of cost, performance, and feasibility in a broad sense (i.e. including assembly, integration and maintenance aspects) seems to be an alt-az concept.

As in the case of the main optics, the mechanical design [26] relies heavily on standardized modules and parts, allowing cost reduction factors which are normally not attainable with classical telescope designs. Manufactured or pre-assembled parts are constrained to dimensions compatible with cost-effective transport in standard 40 ft containers. It should be pointed out that, in view of the structure dimensions, this standardization does not necessarily impair performance. Particular attention is given to assembly and integration constraints as well as to suitability for maintenance [26].

The all-steel structure has a moving mass on the order of 13,500 tons (including mirrors) and does not rely on advanced materials. Iso-static and hyper-static configurations are being evaluated, the former yielding lower dynamic performance and the latter slightly higher mass, complexity, and cost. The first locked-rotor frequency is 1.5 Hz for the iso-static and 2.4 Hz for the hyper-static configuration. Static deformations require the decenters of the secondary mirror and of the corrector to be compensated, but the relevant tolerances, which are set to guarantee that the on-sky correction loop by active optics can be closed, are not particularly stringent [17].

There is no provision for a co-rotating enclosure, the advantage of which would anyway be dubious in view of the enormous opening such an enclosure would have. Protection against adverse environmental conditions and excessive day-time heating would be ensured by a sliding hangar, whose dimensions may be unusual in astronomy but are actually comparable to or smaller than those of large movable enclosures built for a variety of applications [25]. Air conditioning would lead to prohibitive costs and is not foreseen; open-air operation and unobstructed air circulation within beams and nodes seem sufficient to guarantee that the structure reaches thermal equilibrium within an acceptably short time. In this respect, it should be noted that the OWL structure is, in proportion to size, more than an order of magnitude less massive than that of the VLT.

Open-air operation is evidently a major issue with respect to tracking and, as mentioned before, full protection from the effect of wind is not a realistic option. Hence the need for field stabilization. The latter is provided by a 2.5-m class flat mirror located in a pupil image, and there is reasonable confidence that a bandwidth of 5-7 Hz could be achieved with available mirror technology. It should also be noted that active and passive damping systems have not yet been incorporated into the design.

The kinematics of the structure is comparable to that of the VLT telescopes: 3 minutes for 90º elevation range, 12 minutes for 360º azimuth range, maximum centrifugal acceleration not exceeding 0.1 g at any location of the structure, and 1 degree zenithal blind angle. The number of motor segments would be on the order of 200 for elevation and 400 for azimuth. These figures are based on VLT technology and appear very conservative.

The telescope can point towards the horizon, which allows the dimensions of the sliding enclosure to be reduced and facilitates maintenance of the secondary mirror unit and extraction of the corrector unit along the axis of the telescope. Mirror covers are foreseen; they would consist of four quadrants sliding into the structure when the telescope is pointing towards zenith. One of these covers would be equipped with segment handling systems and in-situ cleaning facilities allowing periodic cleaning of the primary mirror. Figure 8 shows the telescope pointing towards 60° zenithal distance, mirror covers retracted. The sliding enclosure is not shown.

Conclusions

Progress on the OWL conceptual design has not revealed any obvious show-stopper. Underlying the feasibility of a 100-m class telescope is the fact that the traditional scalability issues, such as the feasibility of the optics, have shifted to entirely new areas, namely mechanics and control. These areas are evidently more predictable, and the limits they set lie well beyond those that have so far applied to conventional telescope designs -a size increase by a factor of 2 per generation.

A preliminary cost model has been assembled and, to some extent, consolidated. The total capital investment remains within the target maximum of 1,000 million Euros, including contingency. It should be pointed out, however, that some of the most significant cost items correspond to subsystems involving mass production (primary optics, structure), an area traditionally terra incognita to telescope designers. The full implications of mass-production of the primary optics, of actuators and sensors, and of the structure may be underestimated. Our cost estimate should therefore be consolidated by industrial studies. Our perception is that current estimates are probably conservative.

There is strong indication that a competitive schedule is possible; the critical path is set by the mechanics, and, in contrast to the situation which prevailed at the time the last generation of 8- to 10-m class telescopes was designed, long-lead items such as the main optics do not require time-consuming technology developments. While achieving technical first light within 8-9 years after project go-ahead would be a challenging objective, flexibility in the subsequent integration phases should allow partial science operation to start at full resolution within 11 and 12 years, in the infrared and in the visible, respectively.

The current schedule calls for completion of phase A, including demonstration of the principle of multi-conjugate adaptive optics on the VLT, by 2003. As ambitious as such an objective may seem, it should be recalled that the design of the OWL observatory relies extensively on proven technologies, bar adaptive optics -an approach which has also been adopted for the CELT project. In this respect, it should be pointed out that technology development for long-lead items (primary mirrors) played a determinant role in the current generation of 8-10-m class telescopes. Since these specific, highly time-consuming technology developments are largely unnecessary for extremely large telescopes such as CELT and OWL, tighter scheduling may become possible.

Further information and publications about the OWL study are available at http://www.eso.org/owl

Acknowledgements

The concept presented in this article is the result of the work of several people. The authors wish to thank, in particular, Bernard Delabre, Enzo Brunetto, Marco Quattri, Franz Koch, Guy Monnet, Norbert Hubin, Miska le Louarn, Elise Viard and Andrei Tokovinin for their valuable input.

References

  1. M. Mountain, What is beyond the current generation of ground-based 8-m to 10-m class telescopes and the VLT-I?, SPIE 2871, pp. 597-606, 1996.
  2. A. B. Meinel, An overview of the Technological Possibilities of Future Telescopes, 1978, ESO Conf. Proc. 23, 13.
  3. L. D. Barr, Factors Influencing Selection of a Next Generation Telescope Concept, 1979, Proc. SPIE Vol. 172, 8.
  4. A. Ardeberg, T. Andersen, B. Lindberg, M. Owner-Petersen, T. Korhonen, P. Søndergård, Breaking the 8m Barrier - One Approach for a 25m Class Optical Telescope, ESO Conf. and Workshop Proc. No 42, pp. 75-78, 1992.
  5. T. Andersen, A. Ardeberg, J. Beckers, R. Flicker, A. Gontcharov, N. C. Jessen, E. Mannery, M. Owner-Petersen, H. Riewaldt, The proposed 50 m Swedish Extremely Large Telescope, 2000, Proceedings Bäckaskog Workshop on Extremely Large Telescopes, p 72.
  6. T. Sebring, F. Bash, F. Ray, L. Ramsey, The Extremely Large Telescope: Further Adventures in Feasibility, SPIE Proc. 3352, p. 792, 1998.
  7. T. Sebring, G. Moretto, F. Bash, F. Ray, L. Ramsey, The Extremely Large Telescope (ELT), A Scientific Opportunity; An Engineering Certainty; 2000, Proceedings Bäckaskog Workshop on Extremely Large Telescopes, p 53
  8. J. E. Nelson, Design concepts for the California extremely large telescope (CELT); 2000, SPIE 4004.
  9. R. Gilmozzi, B. Delabre, P. Dierickx, N. Hubin , F. Koch, G. Monnet, M. Quattri, F. Rigaut, R.N. Wilson, The Future of Filled Aperture Telescopes: is a 100m Feasible?; 1998, Advanced Technology Optical/IR Telescopes VI, SPIE 3352, 778
  10. P. Dierickx, R. Gilmozzi, OWL Concept Overview; 2000, Proceedings Bäckaskog Workshop on Extremely Large Telescopes, p 43
  11. P. Dierickx, J. Beletic, B. Delabre, M. Ferrari, R. Gilmozzi, N. Hubin, The Optics of the OWL 100-M Adaptive Telescope; 2000, Proceedings Bäckaskog Workshop on Extremely Large Telescopes, p97.
  12. E. Brunetto, F. Koch, M. Quattri, OWL: first steps towards designing the mechanical structure; 2000, Proceedings Bäckaskog Workshop on Extremely Large Telescopes, 109.
  13. N. Hubin, M. Le Louarn, New Challenges for Adaptive Optics: The OWL Project; 2000, Proceedings Bäckaskog Workshop on Extremely Large Telescopes, p 202
  14. H. U. Kaufl, G. Monnet, From ISAAC to GOLIATH, or better not!? Infrared instrumentation concepts for 100m class telescopes; 2000, Proceedings Bäckaskog Workshop on Extremely Large Telescopes, p 282
  15. H. F. Morian, Segmented mirrors from SCHOTT GLAS for the ELTs; 2000, Proceedings Bäckaskog Workshop on Extremely Large Telescopes, 249.
  16. R. Geyl, M. Cayrel, Extremely large telescopes - a manufacturer point of view; 2000, Proceedings Bäckaskog Workshop on Extremely Large Telescopes, 237.
  17. P. Dierickx, B. Delabre, L. Noethe, OWL Optical Design, Active Optics and Error Budget, 2000, SPIE 4003.
  18. R. Geyl, M. Cayrel, REOSC approach to ELTs and segmented optics; 2000, SPIE 4003.
  19. G. Chanan, Phasing the primary mirror segments of the Keck telescopes: a comparison of different techniques, 2000, SPIE 4003.
  20. S. Cuevas Cardona, V. G. Orlov, F. Garfias, V. V. Voitsekovich, L. Sanchez, Curvature equation for segmented telescopes, 2000, SPIE 4003.
  21. N. Hubin, M. le Louarn, M. Sarazin, A. Tokovinin, New challenges for adaptive optics: the OWL 100 m telescope, 2000, SPIE 4007.
  22. F. Rigaut, R. Ragazzoni, M. Chun, M. Mountain, Adaptive Optics Challenges for the ELT ; 2000, Proceedings Bäckaskog Workshop on Extremely Large Telescopes, p 168.
  23. R. Ragazzoni, J. Farinato, E. Marchetti, Adaptive optics for 100 m class telescopes: new challenges require new solutions, 2000, SPIE 4007.
  24. R. Ragazzoni, Adaptive optics for giant telescopes: NGS vs. LGS; 2000, Proceedings Bäckaskog Workshop on Extremely Large Telescopes, p 175.
  25. M. Quattri, F. Koch, Analyzing the requirements of the enclosure and infrastructures for OWL and elaborating on possible solutions; 2000, SPIE 4004.
  26. E. Brunetto, F. Koch, M. Quattri, OWL: further steps in designing the telescope and in assessing its performances; 2000, SPIE 4004.
