eclipse is a library written in ANSI C that offers services dedicated to astronomical data processing: reading and writing FITS files, image processing, 3D filtering, photometry, image-quality and statistical computations, image-file manipulation, and more. All of these services are directly accessible to the C programmer; the documentation is contained in the source code itself, in the header files. If you want to use eclipse as a base for your own software development, it is recommended to read the eclipse developer's manual, available from the eclipse home page. Note that outside developments using the eclipse library are not supported by ESO as such; you are on your own.
Most eclipse users, however, are not programmers. They use the services offered by the library through the Unix commands provided in the Web distribution of this software. The number of Unix commands and their roles have changed over time: some commands have merged, some have been split into several, and the software has evolved quite a lot since its beginning in 1995.
The good thing about the eclipse Unix commands is that they are very lightweight. To divide one image by another, there is no need to start up a data reduction engine or convert the images to a local format: a single command line issues the operation and returns the result, also in FITS format, ready to interact with other image files. The learning curve is the same as for any Unix command: see it once and know how to find the man page; that is all you need.
Unix commands are also convenient for scripting in virtually any scripting language. You can build complex data processing programs by scripting eclipse commands from the Unix shell, Python, Tcl/Tk, Perl... virtually any kind of program that can launch Unix commands.
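As a rough illustration of this scripting approach, the sketch below chains command lines from Python using the standard subprocess module. The command names shown in the comment are hypothetical stand-ins, not actual eclipse commands; plain echo commands are used so the sketch is self-contained.

```python
import subprocess

def run_pipeline(command_lines):
    """Run shell command lines in sequence, the way a reduction script
    chains Unix commands; stop at the first failure."""
    outputs = []
    for line in command_lines:
        result = subprocess.run(line, shell=True,
                                capture_output=True, text=True)
        if result.returncode != 0:
            raise RuntimeError("command failed: %s\n%s"
                               % (line, result.stderr))
        outputs.append(result.stdout)
    return outputs

# In a real reduction script, each step would be a command that reads
# and writes FITS files on disk, e.g. something like
#   "dark_sub raw.fits dark.fits darksub.fits"   (hypothetical name)
# Here, echo commands stand in for the real processing steps.
steps = [
    "echo step 1: subtract dark",
    "echo step 2: divide by flat",
]
print("".join(run_pipeline(steps)))
```

Note that every step writes its result to disk before the next step can read it; this is exactly the I/O pattern discussed below.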
It is generally a good idea to do so during the prototyping phases of a data reduction project, for one-shot tasks, and in general whenever you do not want to get into programming but just want the data reduced. A scripting language is quickly learned and developed at light speed. However, it is a very bad idea to rely on scripts for heavy data reduction on a daily basis. The reason is simple: if you have to call at least two Unix commands to perform your processing, you need to write the data back to disk between the two command calls, because the inputs to Unix commands are file names. If you have lots of data, that translates into heavy I/O, which then becomes an obvious bottleneck for your process. For such cases, you would certainly prefer to use the services offered at the C level and program your processes directly in C.
For example, the Adonis data reduction does not handle enough data to flood a disk or a CPU. To reduce one night of data, a good workstation needs no more than a few hours to apply the basic calibrations. Writing scripts for that purpose is ideal: they are developed quickly, and since efficiency is not at stake, they do the job quite nicely.
On the other hand, ISAAC and SOFI deal with batches of data reaching a gigabyte in size, which means that efficient memory handling becomes a necessity and avoiding disk I/O is crucial. Such efficiency can only be reached by handling the data at the C level, hence a set of Unix commands dedicated solely to ISAAC/SOFI data reduction. These commands are called recipes in the DataFlow naming scheme.
Last, eclipse offers some command-line services for data reduction, and some commands perform basic data analysis. But this software is in no way a complete data analysis environment like MIDAS, IRAF, or IDL. If you are looking for an environment in which to interact with your data, you will be disappointed by the little (if any) interaction offered by eclipse commands. This library is aimed at pipeline data processing, i.e. number crunching driven by a list of user-defined parameters, without user interaction. eclipse is a data reduction engine, not a data analysis facility.