KEYWORDS: Clouds, Data archive systems, Data storage, Calibration, Data modeling, Computing systems, Linear regression, Data processing, Interferometry
The amount of astronomical data that needs to be archived, calibrated, and processed continues to increase as telescopes and observing instruments advance. Securing the resources to store and process this ever-increasing data volume is an operational challenge. To address these issues, we conducted a demonstration experiment using ALMA archived data to efficiently utilize a commercial cloud for archive storage and data analysis pipeline processing. For archiving, a hybrid configuration combining on-premise storage with cloud-based short-term and long-term storage is cost-effective, given how the number of downloads of a data set declines over time after it is obtained. For data analysis processing, we analyzed the processing time and resource usage, such as memory and CPU cores, measured during the pipeline processing of approximately 400 observation data sets, and created a model that estimates the processing time and the required resources from the observation parameters. Based on this model, the required resources are predicted from the observation parameters, and an instance with the necessary and sufficient resources for pipeline processing is launched on demand in the cloud. These pipeline processes completed in processing times comparable to those of on-premise systems. Since prices, services, and computing resources on commercial clouds are updated frequently, we plan to continue making periodic estimates.
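As an illustrative sketch of the kind of model described above (not the actual ALMA pipeline model; the parameter names and all numerical values are hypothetical), processing time can be regressed on observation parameters by ordinary least squares:

```python
import numpy as np

# Hypothetical observation parameters per data set:
# [num_antennas, num_channels, on_source_hours]; values are illustrative only.
X = np.array([
    [43, 3840, 1.5],
    [45, 7680, 2.0],
    [47, 3840, 3.1],
    [41, 1920, 0.8],
    [46, 7680, 4.2],
], dtype=float)
# Measured pipeline wall-clock times in hours (illustrative).
y = np.array([4.9, 7.8, 8.1, 2.6, 12.2])

# Fit y ~ X @ w + b by ordinary least squares, with an intercept column.
A = np.hstack([X, np.ones((X.shape[0], 1))])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

def predict_hours(num_antennas, num_channels, on_source_hours):
    """Estimate pipeline processing time for a new observation."""
    return float(np.array([num_antennas, num_channels, on_source_hours, 1.0]) @ coef)

est = predict_hours(44, 3840, 2.0)
```

The predicted hours (and an analogous regression for peak memory) would then map onto the smallest cloud instance type that satisfies the estimate.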
We present an archive system named "Adria", which has been developed and maintained by the ALMA project team of the National Astronomical Observatory of Japan (NAOJ). Adria aims to store various science data and open them to the public. Adria is composed of an object storage for the observation data, access control by "tickets", JSON-format metadata, JavaScript APIs, and HTML documents. This combination has the advantages of flexibility and solidity, which are important for storing data from various telescopes on the same platform and for long-term maintenance at small cost. We first applied Adria to the observation data (since July 2013) of the Nobeyama Radio Observatory (NRO) 45m telescope, and then added the data (since June 2019) of the Atacama Submillimeter Telescope Experiment (ASTE) to the same platform.
Elasticsearch is one solution for monitoring and analyzing logs. At ALMA, observation logs are stored so that anyone can look into them according to their purpose. For example, Hastings, a tool that tracks down the root cause of a defect, is used for the ACA Correlator subsystem. It queries an ALMA Elasticsearch cluster storing operational logs, analyzes specific messages that indicate trouble, and then outputs a result. Before the ALMA Elasticsearch was deployed, logs had to be collected manually in advance. Now that the ALMA Elasticsearch is available, we have learned that: 1) Elasticsearch features can be configured and accessed directly through its REST API; 2) logs taken even years ago can be retrieved easily; 3) a major Elasticsearch update did not cost much time in adapting Hastings; and 4) Python offers several clients for managing Elasticsearch, so we can choose the one we prefer. We therefore decided to apply Elasticsearch to the Subaru telescope. The Subaru logs are quite large, but they are not yet stored in any database, only archived. We created an Elasticsearch cluster for evaluation purposes and found ways to ingest the data in a short time. We estimate the total ingestion time for 20 years of telescope status data to be at most 5 months. Our goal is to find a feasible cause of any defect in near real time, to predict errors that may occur in the near future, and to analyze communication between the telescope and observational equipment to optimize observations.
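The kind of REST query a tool like Hastings issues can be sketched in the Elasticsearch query DSL. The field names below (`severity`, `subsystem`, `@timestamp`) are assumptions for illustration, not the actual ALMA index mapping:

```python
# Build an Elasticsearch query body that retrieves ERROR-level log messages
# for one subsystem over a time window. Field names are hypothetical.
def build_error_query(start, end, subsystem):
    return {
        "query": {
            "bool": {
                "must": [
                    {"match": {"severity": "ERROR"}},
                    {"match": {"subsystem": subsystem}},
                ],
                "filter": [
                    {"range": {"@timestamp": {"gte": start, "lt": end}}},
                ],
            }
        },
        "sort": [{"@timestamp": "asc"}],
    }

body = build_error_query("2021-01-01T00:00:00", "2021-01-02T00:00:00", "CORR")
# In production this body would be POSTed to the cluster's _search REST
# endpoint, or passed to the search() method of a Python client.
```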
We present recent progress on the development of the Python module for Radio Interferometry Imaging with Sparse Modeling (PRIISM) and its application. PRIISM is a new imaging tool for radio interferometry based on the sparse modeling approach. PRIISM aims at imaging free of subjectivity and manual intervention, as well as providing a platform to explore super-resolution imaging. PRIISM integrates a solver routine with the data manipulation tools provided by Common Astronomy Software Applications (CASA). As a consequence of this integration, we successfully reconstructed images from the ALMA Science Verification Data. We present a new imaging mode based on the Non-Uniform Fast Fourier Transform (NUFFT) together with the existing imaging mode using the Fast Fourier Transform (FFT). We also discuss optimizing the procedure depending on the properties of the data.
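The sparse modeling approach can be illustrated on a toy problem: recover a sparse signal from incomplete linear measurements by minimizing a least-squares term plus an l1 penalty. This sketch uses plain ISTA on random Gaussian measurements, not PRIISM's own solver or real visibility data:

```python
import numpy as np

# Toy l1-regularized reconstruction: recover sparse x from y = A @ x
# by iterative shrinkage-thresholding (ISTA). Dimensions are illustrative.
rng = np.random.default_rng(0)
n, m = 64, 32
A = rng.standard_normal((m, n)) / np.sqrt(m)   # stand-in for a measurement matrix
x_true = np.zeros(n)
x_true[[5, 20, 41]] = [1.0, -0.7, 0.5]         # sparse ground truth
y = A @ x_true

lam = 0.01                                     # l1 regularization weight
L = np.linalg.norm(A, 2) ** 2                  # Lipschitz constant of the gradient
x = np.zeros(n)
for _ in range(1000):
    grad = A.T @ (A @ x - y)                   # gradient of the data term
    z = x - grad / L
    x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold

rel_err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
```

Even though the system is underdetermined (32 measurements, 64 unknowns), the sparsity prior recovers the signal; this is the principle that enables super-resolution imaging from limited visibility coverage.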
The ALMA telescope has been producing ground-breaking science since 2011, but it is largely based on technology from the 2000s. To keep ALMA competitive, timely updates are necessary to further improve the science output of the telescope in the coming decades. In this contribution, we present the status of the projects and studies that constitute the East Asian contribution to the ALMA Development Program, such as the production of Band 1 receivers, the development of Band 2 receiver optics, and the ACA spectrometer. We also report on the hardware and software studies towards the implementation of the ALMA Development Roadmap and additional opportunities.
ALMA has already produced many impressive and scientifically compelling results. However, continuous technical upgrades and development are key for ALMA to continue to lead astronomical research through the 2020-2030 decade and beyond. The East Asia ALMA development program consists of the execution of short term projects, and the planning and initial studies for longer term developments that are essential for future upgrades. We present an overview of all these ongoing East Asia ALMA development projects and upgrade studies, which aim to maintain and even increase the outstanding scientific impact of ALMA in the near future and over the coming decades.
ALMA has been demonstrating its exceptional capabilities with unprecedented scientific results achieved over the past six years of operation. To keep ALMA a leading-edge telescope, it is essential to continue technical upgrades and the development of new capabilities. While our future development programs have already achieved remarkable technological breakthroughs at the level of the front-end receivers, we are now discussing upgrades of the analog and digital back ends and the correlator. We report the concept design of the interferometric system, focused on these sub-systems, that is required to realize new science use cases.
ALMA is still a young and evolving observatory with a very active software development group that produces new and updated software components regularly. Yet we are coming to realize that - after well over a decade of development - not only our own software, but also technologies and tools we depend upon, as well as the hardware we interface with, are coming of age. Software obsolescence management is needed, but surprisingly is not something we can just borrow from other observatories, or any other comparable organization. Here we present the challenges, our approaches and some early experiences.
The software for the Atacama Large Millimeter/submillimeter Array (ALMA) that has been developed in a collaboration of ESO, NRAO, NAOJ and the Joint ALMA Observatory for well over a decade is an integrated end-to-end software system of about six million lines of source code. As we enter the third cycle of science observations, we reflect on some of the decisions taken and call out ten topics where we could have taken a different approach at the time, or would take a different approach in today’s environment. We believe that these lessons learned should be helpful as the next generation of large telescope projects move into their construction phases.
At the end of 2012, ALMA software development will be completed. While new releases are still being prepared
following an incremental development process, the ALMA software has been in daily use since 2008. Last year it was
successfully used for the first science observations proposed by and released to the ALMA scientific community. This
included the whole project life cycle from proposal preparation to data delivery, taking advantage of the software being
designed as an end-to-end system. This presentation will report on software management aspects that became relevant in
the last couple of years. These include a new feature driven development cycle, an improved software verification
process, and a more realistic test environment at the observatory. It will also present a forward look at the planned
transition to full operations, given that upgrades, optimizations and maintenance will continue for a long time.
KEYWORDS: Observatories, Software development, Data archive systems, Computing systems, Antennas, Information technology, Control systems, Interfaces, Interferometers, Data processing
The ALMA Software (~ 80% completed) is in daily use at the ALMA Observatory and has been developed as an end-to-end
system including: proposal preparation, dynamic scheduling, instrument control, data handling and formatting, data
archiving and retrieval, automatic and manual data processing, and support for observatory operations. This presentation
will expand on some software management aspects, procedures for releases, integrated system testing and deployment in
Chile. The need for a realistic validation environment, now achieved with a two antenna interferometer at the
observatory, and the balance between incremental development and stability of the software (a challenge at the moment)
will be explained.
In order to investigate the physical conditions of ionized gas in galaxies, as well as its kinematics, we have developed the Kyoto tridimensional spectrograph II (3DII). It is a multi-mode instrument designed for the Cassegrain focus, including integral field spectrograph (IFS) and Fabry-Perot imager modes. We have designed it to be compact so
that we can mount it on 2-m class telescopes as well as on the 8-m Subaru telescope. We have succeeded in test observations of the 3DII. In the IFS mode, spatial resolutions of ~ 0".5 and 0".4 were obtained in 30-minute exposures at the University of Hawaii 88-inch (UH88) and Subaru telescopes, respectively, in relatively good weather conditions. Each of the 37 × 37 microlenses subtends ~ 0".1 in Subaru's
case. This samples the image size well. A wider field of view is emphasized in the case of UH88. Because our micropupil spectroscopy is free from slit effects, we have reached an accuracy of about one tenth of a pixel in deriving velocity fields in terms of velocity centers, while the full width at half maximum of the instrumental profile corresponds to two pixels. At Subaru we have used a container designed in collaboration with the National Astronomical Observatory of Japan: it fits the robotic instrument exchanger. The container includes two heat exchangers to keep its surface cool and avoid degrading the image quality. We have established effective observational sequences by realizing a software interface with the Subaru operating system. Some results from target observations are shown.
The Subaru Telescope has been operated stably with high image quality since common use began in December 2000. We have updated the following items to further improve observation efficiency, image quality, and tracking. 1. High reflectivity of mirrors. The reflectivity of the primary mirror has been maintained, yielding 84% at 670 nm, by regular CO2 cleaning (every two to three weeks). We successfully carried out the silver coating of the infrared secondary mirror in April 2003 without over-coating. Its reflectivity has been maintained at greater than 98% at 1,300 nm. 2. Image quality. The Subaru telescope delivers exceptional image quality (a median image size of 0.6 arcsec FWHM in the R-band as measured by the Auto-Guider cameras) at all four foci: prime, Cassegrain, and the two Nasmyth foci. We optimized parameters of the elevation servo control system, reducing the amplitude of the 3-8 Hz vibration modes of the telescope and improving image quality when using the Adaptive Optics (AO) system. 3. Acquisition and guiding. Dithering time was shortened by updating the control software. The slit-viewer camera for HDS and the fiber bundle for FMOS are available for acquisition guiding in addition to the Auto-Guider cameras. 4. New instruments. We are developing a new prime focus unit for FMOS and will start functional tests in 2005. Moreover, we have started to prepare new interfaces and facilities for FMOS and the new 188-element AO natural/laser guide star system. The time to switch the focus between the Cassegrain and Nasmyth foci will be shortened to 10 minutes by updating the hardware of the IR and Cassegrain optical secondary mirrors from September 2004.
KEYWORDS: Stars, Telescopes, Space telescopes, Calibration, Control systems, Observatories, Astronomy, Reliability, Hubble Space Telescope, Detection and tracking algorithms
Optimization of observation sequences is a function necessary to achieve high efficiency and reliability of observations. We have implemented scheduling software in the Subaru Telescope observatory software system. The scheduling engine Spike, developed at STScI, is used with some modifications for the Subaru Telescope. Since the last report at SPIE (Munich, 2000), new functions have been implemented in Spike for 1) optimized arrangement of an observation dataset, which consists of a target object and related calibrations, and 2) flexible scheduling of standard stars selected from a standard star list, which is fed to Spike as part of the observation datasets. Observation datasets, prepared by an observer with the necessary information, are input to the scheduling tools and converted into Spike Lisp input forms. A schedule created by Spike is inversely converted into Subaru observation commands to be executed by the observation control system. These applications are operable through a Web-based display. We present the overall structure of the scheduling tools with some samples of Subaru observation commands for target datasets and a resulting schedule.
More than three years have passed since the Subaru Telescope started its open-use operation. Currently, more than 60% of the total telescope time is spent on scientific observation. We first define an index to measure how effectively the telescope is used. Using this index, we review the use of the telescope since 2000. Remote observation and queue observation are long-term goals of Subaru operation because they are believed to be more efficient ways to use the telescope and the available resources. The control and observation software has been designed and developed to support remote and queue observation. Currently, about 30% of the telescope time is used for remote observation. We discuss how much remote observation has contributed to the effective use of the telescope.
KEYWORDS: Telescopes, Telecommunications, Astronomical telescopes, Computer networks, Human-machine interfaces, Data analysis, Data acquisition, Computing systems, Interfaces, Control systems
We have implemented a remote observing function in the Subaru telescope Observation Software System (SOSS). The Subaru telescope has three observing sites: a local site at the telescope and two remote observing sites, the Hilo base facility in Hawaii and the NAOJ headquarters in Mitaka, Japan. Our remote observing system is designed to allow operations not only from any one of the three observing sites, but also from two or more sites concurrently. Considering the allowance for delay in observing operations and the bandwidth of the network between the telescope site and the remote observing sites, three types of interfaces (protocols) have been implemented. In the remote observing mode, we use a socket interface for command and status communication, VNC for ready-made applications and pop-up windows, and FTP for the actual data transfer. All images taken at the telescope site are transferred to both remote observing sites immediately after acquisition so that the observers can evaluate the data. We present the current status of remote observations with the Subaru telescope.
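The command/status exchange over the socket interface can be sketched with a local socket pair standing in for the telescope-site server. The line-oriented command strings here are an assumption for illustration, not the real SOSS protocol:

```python
import socket

# Minimal sketch of a command/status exchange over a socket interface.
# A socketpair stands in for the telescope-site/remote-site connection;
# the command format below is hypothetical.
site, remote = socket.socketpair()

remote.sendall(b"EXEC FILTER-CHANGE R-BAND\n")     # remote site sends a command
command = site.recv(1024).decode().strip()         # telescope site receives it

site.sendall(b"STATUS OK\n")                       # telescope site reports status
status = remote.recv(1024).decode().strip()        # remote site reads the status

site.close()
remote.close()
```

In the real system such status traffic runs alongside VNC sessions for GUI applications and FTP transfers for the image data.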
The Faint Object Camera and Spectrograph, FOCAS, is a versatile Cassegrain optical instrument of the Subaru telescope. Among the various observing modes of FOCAS, multi-object spectroscopy (MOS) requires a dedicated software suite that enables accurate positioning of masks, which have over fifty slitlets, on faint targets over a 6-arcminute-diameter field of view (FOV). We have been developing three kinds of software: image processing software that combines the mosaic CCD images and corrects optical distortion, a mask design program (MDP) for slit arrangement, and a pointing offset calculator (POC) for target acquisition on the slits. MDP and POC provide observers with a graphical user interface (GUI) for efficient and quick mask design and target acquisition. Our tests have shown that the slit positioning accuracy on targets is about 0.2 arcsec RMS over the entire FOV, accurate enough for typical observations with slits of 0.4 arcsec or wider. We briefly describe our software as well as the pointing accuracy and the time required for MOS target acquisition with FOCAS.
The Faint Object Camera and Spectrograph, FOCAS, is a Cassegrain
optical instrument of Subaru telescope. For its versatility, FOCAS
has many optical components such as grisms, filters, and polarizers.
They are inserted in the collimated beam section of 451 mm length.
For the large pupil (90 mm in diameter) and the wide field of view
(6 arcmin in diameter) of FOCAS, rigorous efforts were made in
developing, manufacturing and assembling these components.
The resultant performance of the instrument is quite stable
and is almost as high as that expected from the design values.
In the text, the overall characteristics of each optical element
are described.
KEYWORDS: Mirrors, Telescopes, Control systems, Observatories, Weather forecasting, Space telescopes, Cooling systems, Fourier transforms, Temperature metrology, Computing systems
Based on the successful numerical weather forecasting performed in collaboration between the MKWC and the Subaru Telescope, we have developed a temperature control system for the primary mirror of the Subaru Telescope. The temperature forecast is accurate to within 2 degrees 80% of the time. Since the start of operation, the temperature of the primary mirror has been kept within 1 degree centigrade of the ambient night air temperature with over 70% probability.
The effect of the temperature control on the improvement of the seeing of the Subaru telescope appears to be moderate. The median seeing size of the Subaru Telescope from May 2000 to July 2002 is 0.69 arcsec FWHM. Further investigation is needed to determine whether the improvement is the result of our temperature control of the primary mirror or an effect of the annual variation of the seeing itself. Thus, we need long-term data to verify the effect of the temperature control.
The Subaru Telescope has currently achieved the following performance. 1. Image quality. (1) The Subaru Telescope delivers a median image size, evaluated by the equipped Auto-Guider (AG) cameras, of 0.6-0.7 arcsec FWHM in the R and I bands at all four foci: prime (P), Cassegrain (Cs), and the two Nasmyth (Ns) foci. (2) The best image sizes obtained so far are 0.2 arcsec FWHM without AO in the near-infrared (IR), less than 0.1 arcsec FWHM with AO, and 0.3 arcsec FWHM at optical and mid-IR wavelengths. (3) Stable Shack-Hartmann measurement keeps the errors of the Zernike coefficients below 0.2 μm, which corresponds to ~0.1 arcsec image size. 2. Tracking and pointing. (1) Blind pointing accuracy is better than 1 arcsec RMS over most of the sky. (2) Tracking accuracy is better than 0.2 arcsec RMS over 10 minutes. (3) Guiding accuracy is between 0.8 and 0.18 arcsec RMS with 12-18th magnitude guide stars. 3. IR secondary mirror (M2). (1) Chopping performance: typical figures are 3 Hz and an 80% duty cycle with a 30-60 arcsec chopping throw. (2) Tip-tilt performance: position stability is about 0.030 arcsec RMS for an effective closed-loop bandwidth below 5 Hz. 4. Others. (1) The reflectivity of the primary mirror has been maintained at higher than 85% and 95% at 670 and 1300 nm, respectively, by regular cleaning with CO2 ice every two to three weeks. (2) The reflectivity of the blue-side image rotator (ImR) at the Nasmyth-optical focus was improved by re-coating its mirrors.
We report an infrared all-sky cloud monitor operating at the Subaru telescope on Mauna Kea, Hawaii. It consists of panoramic optics and a 10 μm infrared imager. Aspheric metal mirrors coated with gold (sapphire over-coated) are used in the panoramic optics, which is similar to the MAGNUM observatory's cloud monitor at Haleakala, Maui. The imager is a commercially available uncooled bolometer array. The system is waterproof and (almost) maintenance-free. The video signals from the imager are captured, averaged over 50 frames, clear-sky-frame subtracted, and flat-fielded at two-minute intervals. The processed cloud images are transferred to the Subaru observational software system (SOSS), displayed together with telescope/target information, and stored in the Subaru Telescope data archive system (STARS). The processed images will also be made available on an Internet web site.
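The frame-processing chain (average 50 frames, subtract the clear-sky reference, flat-field) can be sketched with numpy; the array shapes and signal levels below are illustrative, not the monitor's actual format:

```python
import numpy as np

# Sketch of the cloud-monitor reduction chain. Shapes and levels are
# hypothetical stand-ins for the bolometer video frames.
rng = np.random.default_rng(1)
frames = rng.normal(100.0, 5.0, size=(50, 240, 320))  # 50 raw 10-um frames
clear_sky = np.full((240, 320), 95.0)                 # stored clear-sky reference
flat = np.full((240, 320), 1.0)                       # flat-field response map

averaged = frames.mean(axis=0)            # average 50 frames to suppress noise
cloud_image = (averaged - clear_sky) / flat  # clear-sky subtraction + flat-field
```

Averaging 50 frames reduces the per-pixel noise by a factor of about sqrt(50) ~ 7, which is what makes faint cloud emission stand out against the clear-sky baseline.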
We have developed and are operating an object-oriented data reduction and analysis system, DASH (Distributed Analysis Software Hierarchy), for efficient data processing for the Subaru telescope. In DASH, all the information needed to reduce a set of data is packed into an abstracted object named a "Hierarchy". It contains the rules for searching calibration data, the reduction procedure leading to the final result, and the reduction log. With the Hierarchy, DASH works as an automated reduction pipeline platform in cooperation with STARS (Subaru Telescope ARchive System).
DASH is implemented on CORBA and Java technology. The portability of these technologies enabled us to make a subset of the system for a small stand-alone system, SASH. SASH is compatible with DASH, and one can continue reducing and analyzing data across DASH and SASH.
To achieve effective operation and research based on data taken by the Subaru Telescope, we installed a satellite storage and analysis system at the Mitaka headquarters of NAOJ in March 2002. Data taken by the instruments of the Subaru Telescope at the summit of Mauna Kea are transferred to the STN-II system at the Hilo base, and to the satellite system at Mitaka through the dedicated OC3 network link between Hilo and Mitaka. In Japan, the academic research backbone SuperSINET spans various universities and institutes with a bandwidth of up to 10 Gbps, making it easy for astronomers in Japan to access Subaru data through a high-speed backbone network. The databases at the Hilo and Mitaka sites are maintained independently; however, all records and update histories are exchanged frequently enough to allow recovery in case of any discrepancy between the databases. Since the round-trip time of a light signal between Hawaii and Japan cannot be reduced below 45 msec, special tuning is needed not only for the data transfer between the two nodes, but also for the remote control sequence.
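The reason a 45 msec round-trip time forces special tuning is the bandwidth-delay product: a TCP sender can only keep one window of data in flight per round trip. A back-of-envelope check, taking the OC3 link's nominal 155 Mbps rate:

```python
# Bandwidth-delay product for the Hilo-Mitaka link.
rtt_s = 0.045                      # ~45 ms round-trip time, per the text
link_bps = 155e6                   # nominal OC3 line rate

# Bytes that must be in flight to keep the link full.
bdp_bytes = link_bps * rtt_s / 8   # ~872 KB

# Throughput ceiling with a classic 64 KiB TCP window.
default_window = 64 * 1024
max_throughput_bps = default_window * 8 / rtt_s   # ~11.7 Mbps
```

With the default 64 KiB window, a single TCP stream tops out around 12 Mbps regardless of the link rate, so window scaling (or parallel streams) is needed to fill the pipe; the same latency floor constrains any interactive remote-control sequence.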
KEYWORDS: Telescopes, Computing systems, Stars, Software development, Data analysis, Data storage, Data archive systems, Control systems, Astronomy, Data centers
The first generation of STN, with a 150 TB tape library, has been used by the Subaru telescope for the past several years of operation. In March 2002 we upgraded the storage system to 600 TB of capacity based on Sony's Digital Tape Format 2; the upgraded system is called STN-II. The compute engine was changed from the Fujitsu VPP700, in which twenty-two vector processors are connected by a crossbar network, to a cluster of Fujitsu PrimePower 2000 machines, each consisting of 128 processors, with 384 GB of quasi-shared memory in total. Data management servers and graphical workstations are connected by Storage Area Network technology. There are two dedicated workstation clusters for daily software development: one for the archive system, STARS, and one for the platform of the data analysis pipeline, DASH. These two software components, together with the observation control system, SOSS, form the Subaru Software Trinity. The STN-II system is the platform that supports the observation data flow of the Subaru Telescope with the Subaru Software Trinity. By adopting powerful computation and a fast network, a system for real-time quality measurement of observations is planned, enabling quick feedback to the observation parameters.
To operate a large telescope, it is crucial to have a good weather forecast, especially of the temperature at the time the telescope begins preparation, i.e., opens the dome to introduce fresh air inside. For this purpose, the Mauna Kea Weather Center (MKWC) was established in July 1998 at the initiative of the Institute for Astronomy, University of Hawaii. Weather forecasting is not a simple matter and is difficult in general, especially in an environment as unique as the summit of Mauna Kea. MKWC introduced a numerical forecasting system based on the fifth-generation mesoscale model, MM5, which ran on the Subaru Telescope's VPP700 vector-parallel supercomputer for the past three years. With the introduction of the new supercomputer system at the Subaru Telescope, we have prepared new programs for it. A long-term but coarse-grid forecast is available every day from the National Centers for Environmental Prediction (NCEP); the MKWC system retrieves the results of the coarse-grid simulations over the Pacific Ocean from NCEP and refines them to a fine grid, down to 1 km spatial separation at the summit of Mauna Kea, i.e., the telescope sites of the Mauna Kea Observatories. Computation begins around 20:00 HST, and the 48-hour forecast finishes around 01:00 am the next morning. Conversion to web graphics finishes around 05:00 am; the MKWC specialist then takes the result of the numerical forecast into account to issue a precise forecast for all the observatories at the summit of Mauna Kea at 10:00 am HST. This is a collaboration among the observatories to find a better observing environment.
The Subaru Quality Control Trinity consists of SOSS (Subaru Observation Software System), STARS (Subaru Telescope ARchive System), and DASH (Distributed Analysis System Hierarchy), each of which can be operated independently as well as cooperatively through the Observation Dataset. To evaluate the trinity, test observations were made in June 2001 with the instrument SuprimeCam attached to the prime focus of the Subaru Telescope. We confirmed that the trinity works successfully and that the concept of our Observation Dataset is applicable to quality control.
The Faint Object Camera and Spectrograph, FOCAS, is a Cassegrain optical instrument of the Subaru telescope. It has the capability of direct imaging over a 6 arcmin FOV, low-resolution spectroscopy, multi-slit spectroscopy, and polarimetry. Only the imaging mode has been available so far. The overall design, the observing functions, and the preliminary performance verification of FOCAS are presented.
KEYWORDS: Stars, Telescopes, Data analysis, Data archive systems, Space telescopes, Observatories, Telecommunications, Astronomy, Data modeling, Spectroscopy
After first light, the Subaru telescope produced about 86,000 frames, or 400 gigabytes of data, during its test observations by the end of February 2000. STARS (Subaru Telescope ARchive System) contains all the data and serves them to the observers. STARS also provides several convenient tools and pieces of information, such as the QLI (Quick Look Image) generated with the aid of the QP (QLI Producer) and QLIS (QLI Server), the HDI (HeaDer Information file), and a machine-readable (on-line) memorandum for observed data, so that users can judge the rough quality of the data at a glance. A QLI file is a FITS file with a FITS BINTABLE extension. By combining QP and QLIS (code-named 'GYOJI'), users obtain data of various sizes (20 to 200 times smaller than the original) according to their needs, as well as extracted information such as mean, maximum, and minimum count values, and profiles of extracted spectra in multi-slit or echelle spectroscopy data, all in the original data browser (QLISFITS), written as a Java 2 applet. These functions will also be used for the public data archive system in the future. For convenience of data analysis, STARS also handles and manages the 'dataset', which is essential for preparing the necessary data, including object and calibration frames, used in data analysis by DASH (Distributed Analysis System Hierarchy: the platform for analyzing Subaru Telescope data). This 'dataset' is made at the summit system (SOSS: Subaru Observing Software System), which knows everything about the procedure of the observation performed, and is interpreted by the DASH system. In this paper, we describe the functions of STARS and how STARS, DASH, and SOSS are linked to each other to deliver effective scientific and engineering returns.
We have established a quality control sequence to realize efficient observation and the production of data of homogeneous quality. The flow is observation preparation, execution of the observation procedure, data acquisition, data archiving, data analysis, and feedback to the future observation sequence. These stages are closely connected with each other by the idea of the observation data set. A science object frame becomes valid only after various calibrations have been applied to the data. An observation data set rule describes the 'relations' between these data: science frame to science frame, science frame to calibration frame, and calibration frame to calibration frame. 'Relation' mainly means the acquisition order and timing. The observation data set is an assembly of the data related by the observation data set rule. In the data analysis stage, the observation data set rule is used to collect the data. Different numbers of data frames can be collected by modifying the observation data set rule. After evaluating the analyzed data, we can find the proper observation data set rule. The new rule is then fed back to the observation preparation system as a template.
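An observation data set rule of this kind can be sketched as a simple grouping function: collect the calibration frames related to a science frame by acquisition timing. The field names and the one-hour window below are illustrative assumptions, not the actual rule format:

```python
# Hypothetical frame records: id, frame type, and acquisition time in hours.
frames = [
    {"id": "CAL001", "type": "bias",    "t": 0.0},
    {"id": "CAL002", "type": "flat",    "t": 0.2},
    {"id": "SCI001", "type": "science", "t": 0.5},
    {"id": "CAL003", "type": "flat",    "t": 3.0},  # outside the timing window
]

def collect_dataset(science_id, frames, window_hours=1.0):
    """Apply a timing-based data set rule: gather calibration frames
    acquired within window_hours of the science frame."""
    sci = next(f for f in frames if f["id"] == science_id)
    cals = [f for f in frames
            if f["type"] != "science"
            and abs(f["t"] - sci["t"]) <= window_hours]
    return {"science": sci["id"], "calibrations": [c["id"] for c in cals]}

ds = collect_dataset("SCI001", frames)
```

Widening or narrowing the window changes how many frames are collected, which mirrors how modifying the rule tunes the data set during analysis and feeds back into observation preparation.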
The Subaru Telescope of the National Astronomical Observatory of Japan is now finishing the commissioning of the telescope and instruments at the summit of Mauna Kea, Hawaii. An announcement of open use will be made in the near future. The proposal management system of the Subaru Telescope (PMSS), which accepts and retrieves proposals for open use of the Subaru Telescope, is being constructed on the Subaru Telescope Network, the supercomputer system of the Subaru Telescope. The PMSS is developed on an object-oriented data model, a Use Case model, and a prototype has been completed.
Optimization of observation sequences is a function necessary to achieve high efficiency and reliability of observations. We are now implementing scheduling software in the observation software system for the Subaru telescope. The scheduling engine SPIKE, developed at STScI, is used with some modifications for the Subaru telescope. An observation target list prepared by observers is converted to SPIKE Lisp code. The SPIKE output is inversely converted to Subaru commands to be executed by the observation software system. Real-time scheduling is planned to re-schedule observations by judging the weather and the degree to which observing conditions are satisfied, with the help of the observation history. The scheduling software can also be used as a support tool to indicate to observers an object suitable for the next observation.
The Faint Object Camera And Spectrograph (FOCAS) has been completed and is now awaiting a commissioning run on the Subaru Telescope atop Mauna Kea. We have developed a software system that includes the control of FOCAS instruments, Multiple Object Slits (MOS) design, and an analysis package especially for evaluating the performance of FOCAS. The control software system consists of several processes: a network interface process, a user interface process, a central control engine process, a command dispatcher process, local control units, and a data acquisition system. These processes control each other by passing command messages and their status. The control system is also connected to the Subaru Observation Software System to achieve high efficiency and reliability of observations. We have two off-line systems: a MOS design program, MDP, and an analysis package. The MDP is a utility that lets users easily select spectroscopy targets in the field of view of FOCAS through its GUI and design MOS plates efficiently. The designed MOS parameters are sent to a laser cutter to make the desired MOS plate. A special package enables prompt performance checks and evaluation of FOCAS itself during a commissioning period. We describe the overall structure of the FOCAS software with some GUI samples.
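The message-passing control among processes can be illustrated with a minimal dispatcher; the process names, commands, and status strings below are hypothetical stand-ins, not the actual FOCAS protocol.

```python
class CommandDispatcher:
    """Sketch of message-based control: processes register handlers, and
    commands are dispatched as messages whose status is returned to the
    caller (a simplification of the mutual command/status exchange)."""

    def __init__(self):
        self._handlers = {}

    def register(self, process_name, handler):
        # handler: callable taking a command string and returning a status string
        self._handlers[process_name] = handler

    def dispatch(self, process_name, command):
        # Deliver the command message and report the resulting status.
        if process_name not in self._handlers:
            return {"process": process_name, "command": command,
                    "status": "UNKNOWN_PROCESS"}
        status = self._handlers[process_name](command)
        return {"process": process_name, "command": command, "status": status}
```

A local control unit, for example, could be registered as a handler and driven by the central control engine through `dispatch`.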
KEYWORDS: Computing systems, Data processing, Telescopes, Data analysis, Stars, Distributed computing, Data modeling, Data archive systems, Calibration, Image processing
A new framework for a data analysis system (DASH) has been developed for the SUBARU Telescope. It is designed using object-oriented methodology and adopts a restaurant model. DASH shares the CPU and I/O load among distributed heterogeneous computers. The distributed object environment of the system is implemented with Java and CORBA. DASH has been evaluated through several prototypes. DASH2000 is the latest version, which will be released as the beta version of the data analysis system for the SUBARU Telescope.
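Sharing CPU and I/O load among distributed computers can be sketched with a greedy least-loaded assignment; this illustrates the general idea only and is not the actual scheduling of DASH's restaurant model.

```python
import heapq


def assign_tasks(tasks, workers):
    """Assign (name, cost) tasks to workers, always giving the next-largest
    task to the currently least-loaded worker (greedy load sharing sketch)."""
    heap = [(0.0, w) for w in workers]   # (accumulated load, worker)
    heapq.heapify(heap)
    plan = {w: [] for w in workers}
    for name, cost in sorted(tasks, key=lambda t: -t[1]):  # largest first
        load, w = heapq.heappop(heap)
        plan[w].append(name)
        heapq.heappush(heap, (load + cost, w))
    return plan
```

In a heterogeneous cluster the initial loads (or per-worker cost scaling) could reflect each machine's capacity, but that refinement is omitted here.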
The Subaru telescope observation control system is composed of several systems, such as a telescope control system, an observation supervisor system, a data acquisition system, and a data archival system. Each system consists of several processes that carry out observation operations in cooperation with other processes by passing control messages and by exchanging their status data. All acquired data are registered in a database together with related data such as status and log data of the telescope and instruments. Observers and their observation proposals are registered in the control system as a NIS+ user and NIS+ group. User access to the control system is managed according to the registered operation level. The user interface of the control system is described with some samples of screen displays.
The Subaru telescope of the National Astronomical Observatory of Japan is now in the commissioning phase, and seven powerful instruments will be installed that produce several tens of megabytes of data per second of observation. The total amount of storage necessary to keep those data is about 20 TB per year. Here we introduce a concept for the hierarchical data storage system on the supercomputer system of the Hilo Base Facility of the Subaru Telescope. A detailed description of the computer system and its performance features is also presented. The computer system is useful for operation support based on an advanced information management database, called the Subaru Data Base.
KEYWORDS: Data processing, Prototyping, Data archive systems, Computing systems, Telescopes, Calibration, Data acquisition, Java, Data modeling, Human-machine interfaces
We are developing a data reduction and analysis system, DASH, for efficient data processing at the SUBARU telescope. We adopted CORBA as the distributed object environment and Java for the user interface in the prototype of DASH. Moreover, we introduced a data reduction procedure cube as a kind of visual procedure script.
KEYWORDS: Telescopes, Data archive systems, Cameras, Video, Databases, Local area networks, Device simulation, Data communications, Multimedia, Data analysis
The Subaru Observation Control System has selected Ethernet and Fibre Channel as its standard interfaces to instruments. Every instrument must connect to at least one of these LANs. Regarding the data transfer to the Hilo base, the first concern is that no data must be lost during the transfer process, whatever trouble may occur in hardware or network. In hardware, we provide a RAID and a tape library at the summit and another RAID at the base facility. As a further measure in software, we have data file management by the Subaru Observation Software System, which enables users to track the location of a file. The hardware configuration of the summit simulation system, used for instrument tests and the like, is presented. The telescope at the summit of Mauna Kea has been connected to the supercomputer at the base facility via OC-12. This high-speed network is used not only for data transfer and IP communication but also for multimedia communication such as video and telephone. The multimedia project is introduced.
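Tracking where each data file currently resides, so that nothing is lost between the summit and the base facility, can be sketched as a small registry; the site names and the two-copy safety criterion are assumptions for illustration, not the Subaru Observation Software System's interface.

```python
class DataFileRegistry:
    """Sketch of file-location tracking across storage sites."""

    def __init__(self):
        self._locations = {}   # file name -> set of sites holding a copy

    def record(self, name, site):
        # Register that a copy of `name` now exists at `site`.
        self._locations.setdefault(name, set()).add(site)

    def where(self, name):
        # Report every site currently holding the file.
        return sorted(self._locations.get(name, set()))

    def at_risk(self):
        # Files with only one copy are at risk until replicated
        # (e.g. summit RAID awaiting transfer to the base RAID).
        return sorted(n for n, s in self._locations.items() if len(s) < 2)
```
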
KEYWORDS: Calibration, Data analysis, Stars, Telescopes, Data acquisition, Control systems, Feedback control, Control systems design, Temperature metrology, Actinium
An observation data set (OD) plays an important role in the Subaru Observation Software System in connecting the observation control system with the data analysis system. The OD includes abstract commands for acquiring both a science object data frame and the calibration data indispensable to its calibration. Acquisition conditions for each calibration data set are also defined in the OD. The observation schedule may be optimized and re-arranged using the OD during the observation in scheduling mode. In manual operation mode, an indication of the next observation command may be given through the OD. The OD is used for automated data analysis, such as pipeline processing, in the data analysis system at the base facility in Hilo, Hawaii. Feedback of control parameters and real-time quality assessment of the acquired data to observation scheduling will be achieved using the supercomputer system at Hilo in a few years.
KEYWORDS: Telescopes, Control systems, Receivers, Process control, Data acquisition, Data analysis, Observatories, Optical instrument design, Consciousness, Telecommunications
An observation with the Subaru Telescope is designed to be executed by the central scheduler process. Control commands are abstracted in a way common to all observation instruments so that observers are, as much as possible, freed from awareness of the differences between instruments. An abstract command described in an observation procedure is expanded into a device-dependent command script, and the script is dispatched to the telescope and instruments by referring to the instrument table. Device-dependent commands are processed synchronously or asynchronously after checking their status against interlocks. The structure of the scheduler and the instrument table, the flow of commands (abstract command, device-dependent command script, and device-dependent command), and their examples and syntax are described.
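The expansion of an abstract command into a device-dependent script via an instrument table can be sketched as a template lookup; the table entries and command strings below are invented for illustration and are not actual Subaru command syntax.

```python
# Hypothetical instrument table mapping an abstract command to a
# device-dependent command-script template for each instrument.
INSTRUMENT_TABLE = {
    "FOCAS": {
        "EXPOSE": ["focas_shutter OPEN",
                   "focas_ccd READOUT t={exptime}",
                   "focas_shutter CLOSE"],
    },
}


def expand(instrument, abstract_command, **params):
    """Expand an abstract command into a device-dependent command script
    by looking up the instrument table and filling in parameters."""
    template = INSTRUMENT_TABLE[instrument][abstract_command]
    return [line.format(**params) for line in template]
```

The same abstract `EXPOSE` issued against a different instrument would simply resolve through that instrument's own table entry, which is what frees observers from instrument-specific details.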
Control software for FOCAS (faint object camera and spectrograph) is being developed as a prototype of control software for SUBARU observing instruments. The software system consists of several processes: a network interface process, a user interface process, a central control engine process, a command dispatcher process, local control units, and a data acquisition system. Each process communicates with the other processes and is controlled by passing command messages and status. A control flow and generalized command messages are defined following the software guidelines for the SUBARU instruments. The functionality of each process is presented. Related off-line software for making multi-slit plates and for data analysis is briefly described.
The Subaru observation software system (SOSS) provides observers with a so-called high-level user interface, a scheduler, and a data archival system. This paper presents both the software and hardware environments for testing, debugging, and developing the instrument controller (OBCP) and SOSS. The environment is composed of an instrumentation software toolkit, an instrumentation software simulator, a telescope simulator, and a summit simulation computer system.
KEYWORDS: Data acquisition, Control systems, Telescopes, Computing systems, Data archive systems, Control systems design, Local area networks, Data analysis, Databases, Data storage
The control system for the Subaru telescope is designed as distributed workstations, local processors, and data acquisition computers, interconnected by control LANs and data LANs. The control software achieves its functionality through message-based communication. Two key processes, a scheduler and a status logger that cooperate with all other processes, are designed to ensure efficiency and security in observation. The control flow of observation scheduling and the functionality of the sub-processes that constitute the scheduler and the status logger are described. For data acquisition from instruments, the Subaru control system provides a variety of data highways that enable instruments to transfer data at up to 20 Mbytes/second. The functionality and characteristics of the other subsystems that compose the Subaru control system are described.