KEYWORDS: Antennas, Software development, Observatories, Optical correlators, Astronomy, Software engineering, Prototyping, Information technology, Solar thermal energy, Control systems
Starting in 2009, the ALMA project entered one of the most exciting phases of its construction: the first antenna
from one of the vendors was delivered to the Assembly, Integration and Verification team. With this milestone and
the closure of the ALMA Test Facility in New Mexico, the JAO Computing Group in Chile found itself in the front
line of the project's software deployment and integration effort. Among the group's main responsibilities are the
deployment, configuration and support of the observation systems, in addition to infrastructure administration,
all of which needs to be done in close coordination with the development groups in Europe, North America
and Japan. Software support has been the primary point of interaction with the current users (mainly scientists,
operators and hardware engineers), as the software is normally the most visible part of the system.
During this first year of work with the production hardware, three consecutive software releases have been
deployed and commissioned. Also, the first three antennas have been moved to the Array Operations Site, at
5,000 meters elevation, and the complete end-to-end system has been successfully tested. This paper shares the
experience of this 15-person group as part of the construction team at the ALMA site, working together
with the Computing IPT, covering the achievements and the problems overcome during this period. It explores the excellent
results of teamwork, as well as some of the troubles that such a complex and geographically distributed project
can run into. Finally, it addresses the challenges still to come with the transition to the ALMA operations
plan.
Code generation helps in smoothing the learning curve of a complex application framework and in reducing the
number of Lines Of Code (LOC) that a developer needs to craft. The ALMA Common Software (ACS) has
adopted code generation in specific areas, but we are now exploiting the more comprehensive approach of Model
Driven code generation to transform a UML model directly into a full implementation in the ACS framework.
This approach makes it easier for newcomers to grasp the principles of the framework. Moreover, a lower
handcrafted LOC count reduces the error rate. Additional benefits achieved by model-driven code generation are:
software reuse, implicit application of design patterns and automatic test generation. A model-driven approach
to design also makes it possible to use the same model with different frameworks, by generating for different
targets.
The generation framework presented in this paper uses openArchitectureWare as the model-to-text translator.
OpenArchitectureWare provides a powerful functional language that makes it easier to implement the correct
mapping of data types, the main difficulty encountered in the translation process. The output is an ACS
application readily usable by the developer, including the necessary deployment configuration, thus minimizing
any configuration burden during testing. The specific application code is implemented by extending generated
classes. Therefore, generated and manually crafted code are kept apart, simplifying the code generation process
and aiding the developers by keeping a clean logical separation between the two.
Our first results show that code generation dramatically improves coding productivity.
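To illustrate the separation between generated and handcrafted code described above, the following Python sketch uses a generation-gap style layout: a base class standing in for generator output and a handcrafted subclass holding the application-specific logic. The class and method names are hypothetical and do not reproduce the actual ACS generator output.

# Hypothetical sketch of the generation-gap pattern: the generator emits a base
# class that is never edited by hand; the developer supplies logic in a subclass.

class TemperatureMonitorBase:
    """Stands in for generated code: lifecycle and property plumbing."""

    def initialize(self):
        # Generated framework hook; would wire properties, channels, etc.
        self._properties = {"temperature": 0.0}

    def get_temperature(self):
        # Generated accessor for a model attribute.
        return self._properties["temperature"]

    def update_temperature(self, value):
        # Generated mutator delegating validation to the handcrafted subclass.
        self._properties["temperature"] = self.validate(value)

    def validate(self, value):
        # Extension point left for the developer.
        raise NotImplementedError("implement in the handcrafted subclass")


class TemperatureMonitor(TemperatureMonitorBase):
    """Handcrafted: only the application-specific behaviour lives here."""

    def validate(self, value):
        # Example of manually written logic kept apart from generated code.
        if not -50.0 <= value <= 150.0:
            raise ValueError(f"temperature {value} out of range")
        return float(value)


if __name__ == "__main__":
    monitor = TemperatureMonitor()
    monitor.initialize()
    monitor.update_temperature(21.5)
    print(monitor.get_temperature())

With such a layout, regenerating from the model would only overwrite the base class, so the handcrafted code survives regeneration and the logical separation between the two stays clean.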
Trending near real-time data is a complex task, especially in distributed environments. This problem was typically
tackled in financial and transaction systems, but it now arises just as strongly in other contexts, such as hardware
monitoring in large-scale projects. Data handling requires subscriptions to specific data feeds, which must be
implemented without replication, and the rate of transmission has to be assured. On the graphical client side,
rendering needs to be fast enough to be perceived as real-time processing and display.
ALMA Common Software (ACS) provides a software infrastructure for distributed projects that may require
trending large volumes of data. For these requirements ACS offers a Sampling System, which allows sampling
selected data feeds at different frequencies. Along with this, it provides a graphical tool to plot the collected
information, which needs to perform as well as possible.
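As a rough illustration of sampling a selected data feed at a configurable frequency, the Python sketch below polls a monitor point on a background thread and keeps a bounded buffer for a plotting client. It is not the ACS Sampling System API; all names and parameters are illustrative.

import threading
import time
from collections import deque

class Sampler:
    """Illustrative sampling task: poll one data feed at a fixed frequency
    and keep a bounded buffer of (timestamp, value) pairs for a plot client."""

    def __init__(self, read_value, frequency_hz, buffer_size=10000):
        self._read_value = read_value              # callable returning the monitored value
        self._period = 1.0 / frequency_hz          # sampling period in seconds
        self.samples = deque(maxlen=buffer_size)   # bounded buffer avoids unbounded growth
        self._stop = threading.Event()
        self._thread = threading.Thread(target=self._run, daemon=True)

    def _run(self):
        while not self._stop.is_set():
            self.samples.append((time.time(), self._read_value()))
            self._stop.wait(self._period)          # sleep until next sample or stop

    def start(self):
        self._thread.start()

    def stop(self):
        self._stop.set()
        self._thread.join()

if __name__ == "__main__":
    import random
    # Sample a simulated monitor point at 10 Hz for one second.
    sampler = Sampler(read_value=lambda: random.gauss(0.0, 1.0), frequency_hz=10)
    sampler.start()
    time.sleep(1.0)
    sampler.stop()
    print(f"collected {len(sampler.samples)} samples")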
Currently there are many graphical libraries available for data trending. This poses a problem when trying
to choose one: it is necessary to know which has the best performance, and which combination of programming
language and library is the best choice. This document analyzes the performance of different graphical libraries
and languages in order to identify the optimal environment for writing or re-factoring an application that uses
trending technologies in distributed systems. To properly address the complexity of the problem, a specific set of
alternatives was pre-selected, including libraries in Java and Python, languages which are part of ACS. A stress
benchmark will be developed in a simulated distributed environment using ACS in order to test the trending
libraries.
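A stress benchmark of a trending library can be as simple as timing repeated redraws of a growing data set. The sketch below uses matplotlib purely as an example target and "redraws per second" as an assumed figure of merit; the actual study compares several Java and Python libraries inside a simulated ACS environment.

import time
import matplotlib
matplotlib.use("Agg")  # headless backend so the benchmark runs without a display
import matplotlib.pyplot as plt

def benchmark_redraw(points_per_batch=500, batches=50):
    """Measure how fast the library redraws a growing trend line, as a proxy
    for being 'fast enough to be perceived as real time'."""
    fig, ax = plt.subplots()
    xs, ys = [], []
    line, = ax.plot(xs, ys)

    start = time.perf_counter()
    for batch in range(batches):
        # Append a new batch of points, as a sampling client would.
        xs.extend(range(batch * points_per_batch, (batch + 1) * points_per_batch))
        ys.extend(x % 100 for x in xs[-points_per_batch:])
        line.set_data(xs, ys)
        ax.relim()
        ax.autoscale_view()
        fig.canvas.draw()          # force the full redraw whose cost is measured
    elapsed = time.perf_counter() - start
    return batches / elapsed       # redraws per second

if __name__ == "__main__":
    print(f"{benchmark_redraw():.1f} redraws/s")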
KEYWORDS: Observatories, Astronomy, Lanthanum, Internships, Software development, Telescopes, Computing systems, Control systems, Process modeling, Lead
Observatories are not all about exciting new technologies and scientific progress. Some time has to be dedicated
to the future generations of engineers who will be on the front line a few years from now. Over
the past six years, ALMA Computing has been helping to build up, and collaborating with, a well-organized
engineering student group at Universidad Técnica Federico Santa María in Chile. The Computer Systems
Research Group (CSRG) currently has wide-ranging collaborations with national and international organizations, mainly
in the field of astronomical observation. The overall coordination and technical work is done primarily by students,
working side by side with professional engineers. This implies not only high engineering standards, but
also advanced organizational techniques.
This paper aims to present the way this collaboration has built up an identity of its own, independent of individuals,
starting from its origins: summer internships at international observatories, the open-source community, and
the short and busy life of a student. The organizational model and collaboration approaches are presented, which
have evolved over the years along with the growth of the group. This model is being adopted by other
university groups, and is also catching the attention of other areas inside the ALMA project, as it has produced
an interesting training process for astronomical facilities. Many lessons have been learned by all participants
in this initiative. The results achieved so far include a large number of projects, funding
sources, publications, collaboration agreements, and a growing history of new engineers educated under this
model.
The Atacama Large Millimeter Array (ALMA) is a joint project between astronomical organizations in Europe, North
America, and Japan. ALMA will consist of at least 50 twelve-meter antennas operating in the millimeter and submillimeter
wavelength range. It will be located at an altitude above 5000 m in the Chilean Atacama Desert. The ALMA
Test Facility (ATF), located in New Mexico, USA, is a proving ground for the development and testing of hardware,
software, commissioning and operational procedures.
At the ATF, emphasis has shifted from hardware testing to software and operational functionality. Supporting the
varied goals of the ATF requires stable control software and, at the same time, flexibility for integrating newly developed
features. For this purpose, regression testing has been introduced in the form of a semi-automated procedure. This
supplements the established offline testing and focuses on operational functionality as well as on verifying that previously
fixed faults did not re-emerge.
The regression tests are carried out on a weekly basis as a compromise between the developers' response time and the
available technical time. The frequent feedback allows the validation of submitted fixes and the prompt detection of side effects
and reappearing issues. Results from nine months are presented that show the evolution of test outcomes, supporting
the conclusion that the regression testing helped to improve the speed of convergence towards stable releases at the ATF.
The tests also provided an opportunity to validate newly developed or re-factored software at an early stage at the test
facility, supporting its eventual integration. It is hoped that this regression test procedure will be adapted to commissioning
operations in Chile.
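As a flavor of what a single case in such a semi-automated regression suite might look like, the following Python unittest sketch pins down behaviour associated with a previously fixed fault, so that a reappearing issue is flagged in the next weekly run. The component, value and fault are hypothetical and only illustrate the guarding idea.

import unittest

class AntennaPointingRegressionTest(unittest.TestCase):
    """Illustrative regression case: each test encodes behaviour that once
    failed, so a regression shows up as a failing case in the weekly run."""

    def setUp(self):
        # In the real procedure this would query the online control software;
        # a stubbed value keeps the sketch self-contained and runnable.
        self.reported_elevation_limit = 88.0   # degrees, stubbed

    def test_elevation_limit_not_regressed(self):
        # Guards against a hypothetical, previously fixed fault where the
        # software elevation limit drifted above the mechanical limit.
        self.assertLessEqual(self.reported_elevation_limit, 88.0)

if __name__ == "__main__":
    unittest.main()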