Common DFOS tools:
Documentation

dfos = Data Flow Operations System, the common tool set for DFO

MUC blades: architecture, support and backup scheme

Hardware:

The architecture and hardware were selected after extensive and careful tests executed in August 2012.

7 virtual machines (muc01-muc07); 2 Dell M820 blades with 32 cores each (muc08, muc09); 1 Dell M820 with 48 cores (muc10); 1 Dell M830 with 56 cores (muc11); and 1 Dell M830 (muc12). The latter is also set up as a virtual machine, but mapped 1:1 onto its physical blade.

Data disks: external storage

Memory: muc01-muc07: 64 GB; muc08: 128 GB; muc09, muc10, muc12: 512 GB; muc11: 2048 GB
Home: internal storage, 0.9 TB, cross-mounted from OTS home server
Operating system: 64-bit
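
For a quick overview, the figures quoted above (together with the core counts given under 'Technology' below) can be collected into a small Python snippet. This is an illustration only; the numbers are the ones stated in this document:

    # Per-blade hardware overview: (cores, memory in GB), as quoted in this document.
    MUC_HARDWARE = {
        "muc01": (12, 64), "muc02": (12, 64), "muc03": (12, 64), "muc04": (12, 64),
        "muc05": (12, 64), "muc06": (12, 64), "muc07": (12, 64),
        "muc08": (32, 128),
        "muc09": (32, 512),
        "muc10": (48, 512),
        "muc11": (56, 2048),
        "muc12": (56, 512),
    }

    total_cores = sum(cores for cores, _ in MUC_HARDWARE.values())
    total_mem = sum(mem for _, mem in MUC_HARDWARE.values())
    print(f"{len(MUC_HARDWARE)} blades, {total_cores} cores, {total_mem} GB memory in total")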

Configuration:

All servers are called 'muc<nn>' where nn starts with 01
'muc' stands for 'multi-core processing system for QC'.

muc01…muc04: three accounts per UT (low-data volume instruments)
muc05: three VLTI accounts, plus pre-imaging (low-data volume instruments)
muc06, muc07: survey cameras (muc06: ocam, muc07: vircam)
muc08: science processing (phoenix); accounts sciproc (for UVES), xshooter_ph and giraffe_ph as of 2015-06
muc09: muse and muse_ph
muc10: muse_ph2
muc11: muse_ph3; espresso
muc12: matisse
per account: 1 internal disk, with the dfos software, pipelines etc.,
and 1 external disk, with data and long-term memory
storage: the external disks reside on the 'fujisan3' and 'fujisan4' servers

[The super-computer next door is called Super-MUC.]

Note that the assignment of instrument accounts to muc blades refers to the instrument-telescope association in winter 2013. No attempt is made to re-arrange accounts if an instrument is physically moved on Paranal.

Sketch of architecture

The muc01...muc05 servers have three home accounts and three internal data disks mounted.

The muc servers muc06...muc12 have one internal data disk each.
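
Whether the expected data disks are actually mounted on a given blade can be checked with a few lines of Python. This is only a sketch; the example paths are the /dataqc/<account> mount points listed in the backup scheme below (here for muc02):

    import os

    # Data disks expected on this blade (example: muc02); adjust per host.
    DATA_DISKS = ["/dataqc/uves", "/dataqc/giraffe", "/dataqc/xshooter"]

    for path in DATA_DISKS:
        status = "mounted" if os.path.ismount(path) else "NOT mounted (or not a separate mount point)"
        print(f"{path}: {status}")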

Technology

The muc systems muc01...muc07 have 12 Intel cores, arranged in 2 CPUs with 6 cores each. The cores are 'hyperthreaded', which means they have two 'virtual cores' each. This is why e.g. ganglia reports 24 cores for each muc. The condor setup is very conservative and assigns 8 cores for condor processing, leaving the others for interactive and crontab jobs.
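
The arithmetic behind these numbers, as a small illustrative Python check (12 physical cores and 8 condor slots as stated above; os.cpu_count() returns the number of logical, i.e. hyperthreaded, cores):

    import os

    PHYSICAL_CORES = 12        # muc01...muc07: 2 CPUs x 6 cores each
    THREADS_PER_CORE = 2       # hyperthreading
    CONDOR_SLOTS = 8           # conservative condor setup

    logical_cores = PHYSICAL_CORES * THREADS_PER_CORE     # 24, as reported by ganglia
    print("logical cores (expected):", logical_cores)
    print("logical cores (this host):", os.cpu_count())
    print("cores left for interactive and crontab jobs:", logical_cores - CONDOR_SLOTS)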

muc08 and muc09 have 32 cores each. muc10 has 48 cores, muc11 and muc12 have 56 cores.

For an overview of the basic parameters of the systems, go to http://qc-ganglia.hq.eso.org/ganglia/?c=qcXX%2BdfoXX&h=muc02.hq.eso.org (replace muc02 by any other muc blade).
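
Since only the host name changes in that URL, the links for all 12 blades can be generated with a few lines of Python (illustration only):

    # Print the ganglia overview URL for each muc blade (muc01...muc12).
    BASE = "http://qc-ganglia.hq.eso.org/ganglia/?c=qcXX%2BdfoXX&h={host}.hq.eso.org"

    for n in range(1, 13):
        host = f"muc{n:02d}"
        print(host, BASE.format(host=host))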

Click here for a picture of the original 8 muc blades in the data centre, starting at the left with muc01 (meanwhile there are 12 mucs). The bigger blade at the very right is muc08.

Accounts.

accounts per muc blade (operational flag in parentheses; * = phoenix account):

muc01: crires (no), fors2 (yes), kmos (yes), fors1 (no)
muc02: giraffe (yes), uves (yes), xshooter (yes), qc_shift (never)
muc03: isaac (no), sphere (yes), vimos (no), visir (yes)
muc04: hawki (yes), naco2 (no), sinfoni (no)
muc05: amber (no), midi2 (no), gravity (yes), pionier (yes), preimg (F Vr)
muc06: ocam (yes)
muc07: vircam (yes)
muc08: sciproc (never), xshooter_ph* (never), giraffe_ph* (never), more...* (never)
muc09: muse (yes), muse_ph* (never)
muc10: muse_ph2* (never)
muc11: muse_ph3* (never), espresso (yes)
muc12: matisse (yes)
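
For scripting purposes, the same assignment can be written as a small Python mapping. This is a sketch derived from the table above; the helper function host_of is illustrative, not a dfos tool:

    # Instrument/processing accounts per muc blade, as in the table above.
    ACCOUNTS = {
        "muc01": ["crires", "fors2", "kmos", "fors1"],
        "muc02": ["giraffe", "uves", "xshooter", "qc_shift"],
        "muc03": ["isaac", "sphere", "vimos", "visir"],
        "muc04": ["hawki", "naco2", "sinfoni"],
        "muc05": ["amber", "midi2", "gravity", "pionier", "preimg"],
        "muc06": ["ocam"],
        "muc07": ["vircam"],
        "muc08": ["sciproc", "xshooter_ph", "giraffe_ph"],  # plus further phoenix accounts ("more...")
        "muc09": ["muse", "muse_ph"],
        "muc10": ["muse_ph2"],
        "muc11": ["muse_ph3", "espresso"],
        "muc12": ["matisse"],
    }

    def host_of(account: str) -> str:
        """Return the muc blade hosting a given account."""
        for host, accounts in ACCOUNTS.items():
            if account in accounts:
                return host
        raise KeyError(account)

    print(host_of("uves"))    # -> muc02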

Support.

The muc blades fall under Level A support (operations-critical). Alerts, emails or tickets are investigated by IT (if marked URGENT: within 2 hours; otherwise within 4 working days).

Backup scheme.

The muc blades are all backed up. The home directories, which are not on the mucXX machines themselves, are backed up as part of the backups of the otshsr-vip host. The data disk backup covers the following (email from IT, 2017-03-27):

muc01:
/dataqc/kmos
/dataqc/fors1
/dataqc/fors2
/dataqc/crires

muc02:
/dataqc/uves
/dataqc/giraffe
/dataqc/xshooter

muc03:
/dataqc/sphere
/dataqc/vimos
/dataqc/isaac
/dataqc/visir

muc04:
/dataqc/naco2
/dataqc/sinfoni
/dataqc/hawki

muc05:
/dataqc/pionier
/dataqc/midi2_and_gravity
/dataqc/amber
/dataqc/preimg

muc06:
/dataqc/ocam

muc07:
/dataqc/vircam

muc08:
/dataqc/sciproc

muc09:
/diska
/diskb

muc10:
/diska
/diskb

muc11:
/diska

Backup strategy for all MUC hosts:
full backup on the 2nd Wednesday of each month, weekly differential backups on the other Wednesdays, daily incremental backups
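
As an illustration of this schedule, here is a minimal Python sketch (assuming, as stated above, that the full backup takes the place of the differential on the 2nd Wednesday):

    import datetime

    def backup_type(day: datetime.date) -> str:
        """Classify a day according to the muc backup schedule: full on the
        2nd Wednesday of the month, differential on the other Wednesdays,
        incremental on all remaining days."""
        if day.weekday() == 2:                        # Wednesday
            # the 2nd Wednesday always falls on day 8..14 of the month
            return "full" if 8 <= day.day <= 14 else "differential"
        return "incremental"

    print(backup_type(datetime.date(2021, 4, 14)))    # 2nd Wednesday -> full
    print(backup_type(datetime.date(2021, 4, 21)))    # other Wednesday -> differential
    print(backup_type(datetime.date(2021, 4, 26)))    # Monday -> incremental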
      

Last update: April 26, 2021 by rhanusch