The Central Authorization Service (CAS) is used by ALMA and some of its partners (ESO, NRAO) to secure Web applications and provide Single Sign-On. CAS has been in common use throughout academia for quite some time and is well suited for securing so-called "server side" tools – that is, applications taking care of the business logic as well as generating the HTML code for the User Interface (UI). Many Web applications are designed instead with a strong separation between a “single page” UI running in a browser and one or more back-end servers implementing the business logic; the back-ends may serve non-interactive clients, and may send requests to each other as well. Such a fragmented structure does not match CAS’ model very well and challenges system designers to come up with alternatives. This paper describes the CAS protocol and usage, comparing it to alternative authentication and authorization models based on OAuth 2.0 that can overcome the issues CAS raises. It also tries to plot a path forward based on industry standards like OpenID Connect.
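As an illustration of the OAuth 2.0 model discussed above, the sketch below shows how a back-end service could validate a bearer token by calling an OAuth 2.0 token introspection endpoint (RFC 7662) instead of validating a CAS ticket. The endpoint URL and client credentials are hypothetical placeholders, not ALMA's actual configuration.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

/**
 * Minimal sketch of how a back-end service could validate an OAuth 2.0
 * bearer token via RFC 7662 token introspection, rather than relying on
 * CAS ticket validation. Endpoint URL and client credentials are
 * hypothetical placeholders.
 */
public class TokenIntrospection {

    private static final String INTROSPECTION_URL =
            "https://sso.example.org/oauth2/introspect";    // hypothetical
    private static final String CLIENT_ID = "obs-backend";   // hypothetical
    private static final String CLIENT_SECRET = "change-me"; // hypothetical

    public static boolean isTokenActive(String bearerToken) throws Exception {
        String basicAuth = Base64.getEncoder().encodeToString(
                (CLIENT_ID + ":" + CLIENT_SECRET).getBytes(StandardCharsets.UTF_8));

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(INTROSPECTION_URL))
                .header("Authorization", "Basic " + basicAuth)
                .header("Content-Type", "application/x-www-form-urlencoded")
                .POST(HttpRequest.BodyPublishers.ofString("token=" + bearerToken))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());

        // The introspection response is a JSON object; a production service
        // would parse it properly. Here we only do a crude check for "active": true.
        return response.statusCode() == 200
                && response.body().replace(" ", "").contains("\"active\":true");
    }
}
```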
The ALMA software is a large collection of modules implementing all the functionality needed for the observatory's day-to-day operations, from proposal preparation to scientific data delivery. ALMA software subsystems include, among many others, array/antenna control, the correlator, telescope calibration, submission and processing of science proposals, and data archiving.
The implementation of new features and improvements for each software subsystem must be closely coordinated with observatory milestones, the need to respond rapidly to operational issues, regular maintenance activities, and the testing resources available to verify and validate new and improved software capabilities. This paper describes the main issues encountered in managing all these factors together and the different approaches used by the observatory in the search for an optimal solution.
In this paper, we describe the software delivery process adopted by ALMA during the construction phase and its further evolution in early operations. We also present the acceptance process implemented by the observatory to validate the software before it can be used for science observations. We provide details of the main roles and responsibilities during software verification and validation, as well as their participation in the process for reviewing and approving changes to the accepted software versions.
Finally, we present ideas on how these processes should evolve in the near future, considering the operational reality of the ALMA observatory as it moves into full operations, and summarize the progress made in implementing some of these ideas and the lessons learnt.
In order to provide ALMA users with a comprehensive view of their observing projects, we developed the ALMA Snooping Project Interface (SnooPI) application. Its simple and intuitive interface allows scientists to follow the status of their projects, broken down into observing unit sets and scheduling blocks. The application consists of two separate parts: a Java back-end server and a JavaScript front-end client. It interacts with the REST interfaces of other ALMA software components to obtain the necessary project reports, details describing the observations, and statistics of the user's ALMA Helpdesk tickets. All this information makes it possible to trace all stages of observation, processing and delivery of ALMA science projects.
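As a rough illustration of the hierarchy SnooPI exposes (not its actual data model), the following Java sketch models a project broken down into observing unit sets and scheduling blocks, each carrying a status; identifiers and status names are hypothetical.

```java
import java.util.List;

/**
 * Illustrative sketch (not SnooPI's actual data model) of the hierarchy the
 * interface exposes: a project is broken down into observing unit sets,
 * which in turn contain scheduling blocks, each carrying its own status.
 */
public class ProjectView {

    enum Status { NOT_STARTED, IN_PROGRESS, OBSERVED, PROCESSED, DELIVERED }

    record SchedBlock(String uid, Status status) { }

    record ObsUnitSet(String name, List<SchedBlock> schedBlocks) {
        // An observing unit set counts as complete only when all of its
        // scheduling blocks have been delivered.
        boolean isComplete() {
            return schedBlocks.stream().allMatch(sb -> sb.status() == Status.DELIVERED);
        }
    }

    record Project(String code, String piName, List<ObsUnitSet> obsUnitSets) { }

    public static void main(String[] args) {
        // Hypothetical project and scheduling-block identifiers.
        Project p = new Project("2019.1.00001.S", "A. Observer",
                List.of(new ObsUnitSet("Group OUS",
                        List.of(new SchedBlock("uid://A001/X1/X2", Status.OBSERVED)))));
        System.out.println(p.code() + " complete: "
                + p.obsUnitSets().stream().allMatch(ObsUnitSet::isComplete));
    }
}
```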
KEYWORDS: Visualization, Observatories, Databases, Space operations, Venus, Statistical analysis, Signal detection, Data acquisition, Data mining, Diagnostics
Data produced by ALMA for the community undergoes a rigorous quality assurance (QA) process, from the initial observation ("QA0") to the final science-ready data products ("QA2"), to the QA feedback given by the Principal Investigators (PIs) when they receive the data products (“QA3”). Calibration data is analyzed to measure the performance of the observatory and predict the trend of its evolution ("QA1").
The procedure unfolds over several steps and involves several actors across all ALMA locations; it is made possible by dedicated software tools and a complex database of science data, metadata and operational parameters. The life-cycle of each entity involved is well defined and ensures, for instance, that "bad" data (that is, data not meeting the minimum quality standards) is never processed by the ALMA pipeline. This paper describes ALMA's quality assurance concepts and procedures, including the main enabling software components.
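As a minimal illustration of such a life-cycle gate (a hypothetical sketch, not the actual ALMA implementation), the following Java snippet shows how a per-execution QA0 status can prevent data that failed quality assurance from ever being handed to the pipeline.

```java
/**
 * Hypothetical sketch of a QA0 gate in front of the processing pipeline:
 * only data explicitly marked as meeting the minimum quality standards is
 * eligible for processing. Status names are illustrative placeholders.
 */
public class QaGate {

    enum Qa0Status { UNSET, PASS, SEMIPASS, FAIL }

    record ExecutionBlock(String uid, Qa0Status qa0) { }

    /** Returns true only for data that met the minimum quality standards. */
    static boolean eligibleForPipeline(ExecutionBlock eb) {
        return eb.qa0() == Qa0Status.PASS || eb.qa0() == Qa0Status.SEMIPASS;
    }

    public static void main(String[] args) {
        ExecutionBlock good = new ExecutionBlock("uid://A002/X1/X1", Qa0Status.PASS);
        ExecutionBlock bad  = new ExecutionBlock("uid://A002/X1/X2", Qa0Status.FAIL);
        System.out.println(eligibleForPipeline(good)); // true
        System.out.println(eligibleForPipeline(bad));  // false
    }
}
```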
The software for the Atacama Large Millimeter/submillimeter Array (ALMA) that has been developed in a collaboration of ESO, NRAO, NAOJ and the Joint ALMA Observatory for well over a decade is an integrated end-to-end software system of about six million lines of source code. As we enter the third cycle of science observations, we reflect on some of the decisions taken and call out ten topics where we could have taken a different approach at the time, or would take a different approach in today’s environment. We believe that these lessons learned should be helpful as the next generation of large telescope projects move into their construction phases.
At the end of 2012, ALMA software development will be completed. While new releases are still being prepared following an incremental development process, the ALMA software has been in daily use since 2008. Last year it was successfully used for the first science observations proposed by and released to the ALMA scientific community. This included the whole project life cycle from proposal preparation to data delivery, taking advantage of the software being designed as an end-to-end system. This presentation will report on software management aspects that became relevant in the last couple of years. These include a new feature-driven development cycle, an improved software verification process, and a more realistic test environment at the observatory. It will also present a forward look at the planned transition to full operations, given that upgrades, optimizations and maintenance will continue for a long time.
ESO introduced a User Portal for its scientific services in November 2007. Registered users have a central entry point for the Observatory's offerings, the extent of which depends on the users' roles - see [1]. The project faced and overcame a number of challenging hurdles between inception and deployment, and ESO learned a number of useful lessons along the way. The most significant challenges were not only technical in nature; organization and coordination issues took a significant toll as well. We also indicate the project's roadmap for the future.
The European Organisation for Astronomical Research in the Southern Hemisphere (ESO), headquartered in Garching, Germany, operates several state-of-the-art observing sites in Chile. To manage observatory operations and observation transfer, ESO developed an end-to-end Data Flow System, from Phase I proposal preparation to the final archiving of quality-controlled science, calibration and engineering data. All information pertinent to the data flow is stored in the central databases at ESO headquarters and replicated to and from the observatory database servers. In ESO's data flow model one can distinguish two groups of databases: the front-end databases, which are replicated from ESO headquarters to the observing sites, and the back-end databases, whose replication is directed from the observing sites back to headquarters. Part of the front-end database contains the Observation Blocks (OBs), which are sequences of operations necessary to perform an observation, such as instrument setting, target, filter and/or grism ID, exposure time, etc. Observatory operations rely on fast access to the OB database and on quick recovery strategies in case of a database outage. After several years of operations, those databases have grown considerably, and it became necessary to review the database architecture to find a solution that supports the scalability of the operational databases. We present the newly developed concept of distributing the OBs between two databases containing operational and historical information, respectively. In the proposed architectural design, OBs in the operational databases will be archived periodically at ESO headquarters. This will remedy the scalability problems and keep the size of the operational databases small. The historical databases will exist only at headquarters, for archiving purposes.
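As an illustration of this archiving concept, the following Java/JDBC sketch moves OBs that have reached a terminal state from the operational database to the historical database and then deletes them from the operational one. The JDBC URLs, table and column names are hypothetical placeholders, not ESO's actual schema.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

/**
 * Illustrative sketch of the periodic archiving step described above.
 * JDBC URLs, credentials, table and column names are hypothetical.
 */
public class ObArchiver {

    public static void archiveFinishedObs(String operationalUrl, String historicalUrl,
                                          String user, String password) throws Exception {
        try (Connection op = DriverManager.getConnection(operationalUrl, user, password);
             Connection hist = DriverManager.getConnection(historicalUrl, user, password)) {

            op.setAutoCommit(false);
            hist.setAutoCommit(false);

            // Select OBs that will never be scheduled again (terminal states).
            try (PreparedStatement select = op.prepareStatement(
                     "SELECT ob_id, ob_status, ob_content FROM obs_blocks "
                   + "WHERE ob_status IN ('Completed', 'Terminated')");
                 PreparedStatement insert = hist.prepareStatement(
                     "INSERT INTO obs_blocks_hist (ob_id, ob_status, ob_content) VALUES (?, ?, ?)");
                 PreparedStatement delete = op.prepareStatement(
                     "DELETE FROM obs_blocks WHERE ob_id = ?");
                 ResultSet rs = select.executeQuery()) {

                while (rs.next()) {
                    long id = rs.getLong("ob_id");
                    // Copy the OB to the historical database...
                    insert.setLong(1, id);
                    insert.setString(2, rs.getString("ob_status"));
                    insert.setString(3, rs.getString("ob_content"));
                    insert.executeUpdate();
                    // ...then remove it from the operational database.
                    delete.setLong(1, id);
                    delete.executeUpdate();
                }
            }

            // Commit the historical copy first, then the deletions.
            hist.commit();
            op.commit();
        }
    }
}
```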
The European Southern Observatory (ESO) is in the process of creating a central access point for all services offered to its user community via the Web. That gateway, called the User Portal, will provide registered users with a personalized set of service access points, the actual set depending on each user's privileges.
Correspondence between users and ESO will take place by way of "profiles", that is, sets of contact information. Each user may have several active profiles, so that an investigator may choose, for instance, whether their data should be delivered to their own address or to a collaborator's.
To application developers, the portal will offer authentication and authorization services, either via database queries or an LDAP server.
The User Portal is being developed as a Web application using Java-based technology, including servlets and JSPs.
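As an illustration of the LDAP option mentioned above (a hypothetical sketch, not the portal's actual implementation), the following Java snippet authenticates a user by attempting a simple bind against a directory server; the server URL and DN layout are placeholders.

```java
import java.util.Hashtable;
import javax.naming.Context;
import javax.naming.NamingException;
import javax.naming.directory.InitialDirContext;

/**
 * Hypothetical sketch of an LDAP-backed authentication service of the kind
 * the portal could offer to application developers: the user's credentials
 * are checked by attempting a simple bind against the directory.
 * Server URL and DN pattern are placeholders.
 */
public class LdapAuthenticator {

    private static final String LDAP_URL = "ldap://ldap.example.org:389";            // hypothetical
    private static final String USER_DN_PATTERN = "uid=%s,ou=people,dc=example,dc=org"; // hypothetical

    public static boolean authenticate(String username, String password) {
        Hashtable<String, String> env = new Hashtable<>();
        env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory");
        env.put(Context.PROVIDER_URL, LDAP_URL);
        env.put(Context.SECURITY_AUTHENTICATION, "simple");
        env.put(Context.SECURITY_PRINCIPAL, String.format(USER_DN_PATTERN, username));
        env.put(Context.SECURITY_CREDENTIALS, password);

        try {
            // If the bind succeeds, the credentials are valid.
            new InitialDirContext(env).close();
            return true;
        } catch (NamingException e) {
            return false;
        }
    }
}
```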
A number of tools exist to aid in the preparation of proposals and observations for large ground- and space-based observatories (the VLT, Gemini and HST being examples). These tools have transformed the way in which astronomers use large telescopes. The ALMA telescope has a strong need for such a tool, but its scientific and technical requirements, and the nature of the telescope, pose some novel challenges. In addition to the common Phase I (Proposal) and Phase II (Observing) preparation, the tool must support the needs of the novice alongside those of experts in millimetre/sub-millimetre aperture synthesis astronomy. We must also provide support for the reviewing process, and must interface with and use the technical architecture underpinning the design of the ALMA Software System. In this paper we describe our approach to meeting these challenges.
All ESO Science Operations teams operate on Observing Runs, loosely defined as blocks of observing time on a specific instrument. Observing Runs are submitted as part of an Observing Proposal and executed in Service or Visitor Mode. As an Observing Run progresses through its life-cycle, more and more information becomes associated with it: referee reports, feasibility and technical evaluations, constraints, pre-observation data, science and calibration frames, etc. The Manager of Observing Runs (Moor) project will develop a system to collect operational information in a database, offer integrated access to information stored in several independent databases, and allow HTML-based navigation over the whole information set. Some Moor services are also offered as extensions to, or complemented by, existing desktop applications.
KEYWORDS: Interferometry, Telescopes, Interferometers, Calibration, Visibility, Data archive systems, Data processing, Observatories, Signal processing, Space telescopes
In this article we present the Data Flow System (DFS) for the Very Large Telescope Interferometer (VLTI). The Data Flow System is the VLT end-to-end software system for handling astronomical observations from the initial observation proposal phase through the acquisition, processing and control of the astronomical data. The Data Flow system is now in the process of installation and adaptation for the VLT Interferometer. The DFS was first installed for VLTI first fringes utilising the siderostats together with the VINCI instrument and is constantly being upgraded in phase with the VLTI commissioning. When completed the VLT Interferometer will make it possible to coherently combine up to three beams coming from the four VLT 8.2m telescopes as well as from a set of initially three 1.8m Auxiliary Telescopes, using a Delay Line tunnel and four interferometry instruments. Observations of objects with some scientific interest are already being carried out in the framework of the VLTI commissioning using siderostats and the VLT Unit Telescopes, making it possible to test tools under realistic conditions. These tools comprise observation preparation, pipeline processing and further analysis systems. Work is in progress for the commissioning of other VLTI science instruments such as MIDI and AMBER. These are planned for the second half of 2002 and first half of 2003 respectively. The DFS will be especially useful for service observing. This is expected to be an important mode of observation for the VLTI, which is required to cope with numerous observation constraints and the need for observations spread over extended periods of time.
The VLT Data Flow System (DFS) has been developed to maximize the scientific output from the operation of the ESO observatory facilities. From its original conception in the mid-1990s to the system now in production at Paranal, at La Silla, at the ESO headquarters and externally at the home institutes of astronomers, extensive effort, iteration and retrofitting have been invested in the DFS to maintain a good level of performance and to keep it up to date. The result is a robust, efficient and reliable 'science support engine', without which it would be difficult, if not impossible, to operate the VLT as efficiently and with such great success as is the case today. Of course, it is the symbiosis between the VLT Control System (VCS) and the DFS, plus the hard work of dedicated development and operational staff, that made the success of the VLT possible. Although the basic framework of the DFS can be considered 'completed' and the DFS has been in operation for approximately three years now, the implementation of improvements and enhancements is an ongoing process, mostly due to the appearance of new requirements. This article describes the origin of such new requirements towards the DFS and discusses the challenges faced in adapting the DFS to an ever-changing operational environment. Examples of recent, new concepts designed and implemented to make the base part of the DFS more generic and flexible are given. The general adaptation of the DFS at the system level to reduce maintenance costs, increase robustness and reliability and, to some extent, keep it in line with industry standards is also mentioned. Finally, the general infrastructure needed to cope with a changing system is discussed in depth.
The Data Flow System is the VLT end-to-end system for handling astronomical observations from the initial observation proposal phase through the acquisition, processing and control of the astronomical data. The VLT Data Flow System has been in place since the opening of the first VLT Unit Telescope in 1998. When completed, the VLT Interferometer will make it possible to coherently combine up to three beams coming from the four VLT 8.2m telescopes as well as from a set of initially three 1.8m Auxiliary Telescopes, using a Delay Line tunnel and four interferometry instruments. The Data Flow System is now in the process of installation and adaptation for the VLT Interferometer. Observation preparation for a multi-telescope system and the handling of large data volumes of several tens of gigabytes per night are among the new challenges posed by this system. This introductory paper presents the VLTI Data Flow System installed during the initial phase of VLTI commissioning. Observation preparation, data archiving, and data pipeline processing are addressed.
The Observation Handling Subsystem (OHS) of the ESO VLT Data Flow System was designed to collect, verify, store and distribute observation preparation information. This rather generic definition includes high-level Observing Proposals submitted once per semester to apply for telescope time (typically referred to as 'Phase I' proposals) as well as detailed descriptions of the observations to be performed (often called 'Phase II' data); in the Data Flow System, such descriptions are defined as Observation Blocks (OBs). Observation queues and long- and short-term schedules are also produced, ranging in scope from an observing semester to a few hours. The OHS is a distributed system composed of a collection of loosely coupled software tools. The tools communicate mostly through a set of relational databases, which are distributed between Garching and the Chilean observatories. A number of communication protocols are also used, from the e-mail-based Receiver process of the Proposal Handling and Reporting System to the proprietary protocol used to serve the telescope and instrument control systems. Data and commands flow through the OHS, supporting the operational procedures of ESO's Observing Programmes Committee and of the different operation teams in Garching and in Chile. This paper presents the overall architecture of the OHS, each module's technical features and the underlying operational concepts. It also discusses the current implementation choices and development plans.
KEYWORDS: Space telescopes, Telescopes, Prototyping, Observatories, Software development, Human-machine interfaces, Data archive systems, Virtual colonoscopy, Large telescopes, Data storage
One of the most important design goals of the ESO Very Large Telescope is efficiency of operations, to maximize the scientific productivity of the observatory. 'Service mode' observations will take up a significant fraction of the VLT's time, with the goal of matching the best observing conditions to the most demanding scientific programs. Such an operational scheme requires extensive computer support in the area of observation preparation and execution. In this paper we present some of the software tools developed at ESO to support VLT observers, both staff and external. Our Phase II proposal preparation system and the operational toolkit are prototype implementations of the final VLT systems and have been in use for over a year, while the scheduling tools to support 'service mode' operations are still under development.
KEYWORDS: Data archive systems, Calibration, Telescopes, Virtual colonoscopy, Control systems, Digital micromirror devices, Data storage, Prototyping, Astronomy, Space telescopes
In order to realize the optimal scientific return from the VLT, ESO has undertaken to develop an end-to-end data flow system from proposal entry to science archive. The VLT Data Flow System (DFS) is being designed and implemented by the ESO Data Management and Operations Division in collaboration with the VLT and Instrumentation Divisions. Tests of the DFS started in October 1996 on ESO's New Technology Telescope. Since then, prototypes of the Phase 2 Proposal Entry System, VLT Control System Interface, Data Pipelines, On-line Data Archive, Data Quality Control and Science Archive System have been tested. Several major DFS components have been run under operational conditions since February 1997. This paper describes the current status of the VLT DFS, the technological and operational challenges of such a system and the planning for VLT operations beginning in early 1999.
The data flow system (DFS) for the ESO VLT provides a global system approach to the flow of science related data in the VLT environment. It includes components for preparation and scheduling of observations, archiving of data, pipeline data reduction and quality control. Standardized data structures serve as carriers for the exchange of information units between the DFS subsystems and VLT users and operators. Prototypes of the system were installed and tested at the New Technology Telescope. They helped us to clarify the astronomical requirements and check the new concepts introduced to meet the ambitious goals of the VLT. The experience gained from these tests is discussed.