KEYWORDS: Planets, Stars, Signal to noise ratio, Planetary systems, Signal detection, Exoplanets, Space operations, Systems modeling, Charge-coupled devices, Observatories
The Kepler mission was a National Aeronautics and Space Administration (NASA) Discovery-class mission designed to continuously monitor the brightness of at least 100,000 stars to determine the frequency of Earth-size and larger planets orbiting other stars. Once the Kepler proposal was chosen for a flight opportunity, it was necessary to optimize the design to accomplish the ambitious goals specified in the proposal while staying within the available resources. To maximize the science return from the mission, a merit function (MF) was constructed that relates the science value (as determined by the PI and the Science Team) to the chosen mission characteristics and to models of the planetary and stellar systems. This MF served several purposes: predicting possible science results of the proposed mission, evaluating the effects of varying the values of the mission parameters to increase the science return or to reduce the mission costs, and supporting quantitative risk assessments. The MF was also valuable for advocating the mission by illustrating its expected capability. During later stages of implementation, it was used to keep management informed of the changing mission capability and to support rapid design tradeoffs when mission down-sizing was necessary. The MF consisted of models of the stellar environment, assumed exoplanet characteristics and distributions, detection sensitivity to key design parameters, and equations that related the science value to the predicted number and distributions of detected exoplanets. A description of the MF model and representative results are presented. Examples of sensitivity analyses that supported design decisions and risk assessments are provided to illustrate the potential broader utility of this approach to other complex science-driven space missions.
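The merit-function approach described above can be sketched in a few lines. The value weights, transit probabilities, and detection probabilities below are purely illustrative assumptions, not the actual Kepler MF:

```python
# Hypothetical sketch of a mission merit function (MF). All weights and
# probabilities are illustrative assumptions, not the actual Kepler MF.
def expected_detections(n_stars, p_transit, p_detect):
    """Expected planet detections for one planet class."""
    return n_stars * p_transit * p_detect

def merit(n_stars, classes):
    """Total science value, summed over planet classes.

    classes: list of (value_weight, p_transit, p_detect) tuples.
    """
    return sum(w * expected_detections(n_stars, pt, pd)
               for w, pt, pd in classes)

# Example: weight an Earth-size detection 10x more than a close-in giant.
classes = [(10.0, 0.005, 0.5),   # Earth-size in the habitable zone
           (1.0, 0.10, 0.99)]    # close-in giants
print(merit(100_000, classes))
```

In practice the detection probability would itself be a function of design parameters such as aperture, noise, and mission duration, which is what makes an MF useful for the design tradeoffs and sensitivity analyses the abstract describes.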
KEYWORDS: Data archive systems, Stars, James Webb Space Telescope, Observatories, Databases, Exoplanets, Data modeling, Cameras, Space telescopes, Planets
The Transiting Exoplanet Survey Satellite (TESS) is an all-sky survey mission designed to discover exoplanets around the nearest and brightest stars. The Mikulski Archive for Space Telescopes (MAST) at the Space Telescope Science Institute will serve as the archive for TESS science data. The services provided by MAST for the TESS mission are to store science data and provide an Archive User Interface for data documentation, search, and retrieval. The TESS mission takes advantage of MAST multi-mission architecture to provide a cost-effective archive that allows integration of TESS data with data from other missions.
G. Ricker, R. Vanderspek, J. Winn, S. Seager, Z. Berta-Thompson, A. Levine, J. Villasenor, D. Latham, D. Charbonneau, M. Holman, J. Johnson, D. Sasselov, A. Szentgyorgyi, G. Torres, G. Bakos, T. Brown, J. Christensen-Dalsgaard, H. Kjeldsen, M. Clampin, S. Rinehart, D. Deming, J. Doty, E. Dunham, S. Ida, N. Kawai, B. Sato, J. Jenkins, J. Lissauer, G. Jernigan, L. Kaltenegger, G. Laughlin, D. Lin, P. McCullough, N. Narita, J. Pepper, K. Stassun, S. Udry
KEYWORDS: Exoplanets, Stars, Satellites, Planets, Cameras, Space telescopes, James Webb Space Telescope, Space operations, Charge-coupled devices, Observatories
The Transiting Exoplanet Survey Satellite (TESS) will discover thousands of exoplanets in orbit around the brightest stars in the sky. This first-ever spaceborne all-sky transit survey will identify planets ranging from Earth-sized to gas giants. TESS stars will be far brighter than those surveyed by previous missions; thus, TESS planets will be easier to characterize in follow-up observations. For the first time it will be possible to study the masses, sizes, densities, orbits, and atmospheres of a large cohort of small planets, including a sample of rocky worlds in the habitable zones of their host stars.
Jon Jenkins, Joseph Twicken, Sean McCauliff, Jennifer Campbell, Dwight Sanderfer, David Lung, Masoud Mansouri-Samani, Forrest Girouard, Peter Tenenbaum, Todd Klaus, Jeffrey Smith, Douglas Caldwell, A. Chacon, Christopher Henze, Cory Heiges, David Latham, Edward Morgan, Daryl Swade, Stephen Rinehart, Roland Vanderspek
The Transiting Exoplanet Survey Satellite (TESS) will conduct a search for Earth's closest cousins starting in early 2018 and is expected to discover ∼1,000 small planets with Rp < 4 R⊕ and measure the masses of at least 50 of these small worlds. The Science Processing Operations Center (SPOC) is being developed at NASA Ames Research Center based on the Kepler science pipeline and will generate calibrated pixels and light curves on the NASA Advanced Supercomputing Division's Pleiades supercomputer. The SPOC will also search for periodic transit events and generate validation products for the transit-like features in the light curves. All TESS SPOC data products will be archived to the Mikulski Archive for Space Telescopes (MAST).
KEYWORDS: Stars, Planets, Exoplanets, Space operations, Satellites, Cameras, Charge-coupled devices, Space telescopes, James Webb Space Telescope, Observatories
The Transiting Exoplanet Survey Satellite (TESS) will search for planets transiting bright and nearby stars. TESS has been selected by NASA for launch in 2017 as an Astrophysics Explorer mission. The spacecraft will be placed into a highly elliptical 13.7-day orbit around the Earth. During its 2-year mission, TESS will employ four wide-field optical charge-coupled device cameras to monitor at least 200,000 main-sequence dwarf stars with IC≈4−13 for temporary drops in brightness caused by planetary transits. Each star will be observed for an interval ranging from 1 month to 1 year, depending mainly on the star’s ecliptic latitude. The longest observing intervals will be for stars near the ecliptic poles, which are the optimal locations for follow-up observations with the James Webb Space Telescope. Brightness measurements of preselected target stars will be recorded every 2 min, and full frame images will be recorded every 30 min. TESS stars will be 10 to 100 times brighter than those surveyed by the pioneering Kepler mission. This will make TESS planets easier to characterize with follow-up observations. TESS is expected to find more than a thousand planets smaller than Neptune, including dozens that are comparable in size to the Earth. Public data releases will occur every 4 months, inviting immediate community-wide efforts to study the new planets. The TESS legacy will be a catalog of the nearest and brightest stars hosting transiting planets, which will endure as highly favorable targets for detailed investigations.
KEYWORDS: Stars, Planets, Space operations, Cameras, Charge-coupled devices, Space telescopes, Exoplanets, Observatories, James Webb Space Telescope, Sensors
The Transiting Exoplanet Survey Satellite (TESS) will search for planets transiting bright and nearby stars. TESS has been selected by NASA for launch in 2017 as an Astrophysics Explorer mission. The spacecraft will be placed into a highly elliptical 13.7-day orbit around the Earth. During its two-year mission, TESS will employ four wide-field optical CCD cameras to monitor at least 200,000 main-sequence dwarf stars with IC ≲ 13 for temporary drops in brightness caused by planetary transits. Each star will be observed for an interval ranging from one month to one year, depending mainly on the star's ecliptic latitude. The longest observing intervals will be for stars near the ecliptic poles, which are the optimal locations for follow-up observations with the James Webb Space Telescope. Brightness measurements of preselected target stars will be recorded every 2 min, and full frame images will be recorded every 30 min. TESS stars will be 10-100 times brighter than those surveyed by the pioneering Kepler mission. This will make TESS planets easier to characterize with follow-up observations. TESS is expected to find more than a thousand planets smaller than Neptune, including dozens that are comparable in size to the Earth. Public data releases will occur every four months, inviting immediate community-wide efforts to study the new planets. The TESS legacy will be a catalog of the nearest and brightest stars hosting transiting planets, which will endure as highly favorable targets for detailed investigations.
Kepler vaulted into the heavens on March 7, 2009, initiating NASA’s search for Earth-size planets orbiting Sun-like stars
in the habitable zone, where liquid water could exist on the planetary surface and support alien biology. Never before has
there been a photometer capable of reaching a precision near 20 ppm in 6.5 hours while conducting nearly continuous
and uninterrupted observations for several years. The flood of exquisite photometric data over the last 4 years on
190,000+ stars has produced a watershed of results. More than 2,700 candidate planets have been identified, of which an
astounding 1,171 orbit just 467 stars. More than 120 planets have been confirmed or validated, and the data have also led to a
resounding revolution in asteroseismology. Recent discoveries include Kepler-62, a system of five planets, two of which lie in
the habitable zone and have radii 1.4 and 1.7 times that of Earth. Designing and building the Kepler photometer and
the software systems that process and analyze the resulting data presented a daunting set of challenges, including how to
manage the large data volume, how to detect minuscule transit signatures against stellar variability and instrumental
effects, and how to review hundreds of diagnostics produced for each of ~20,000 candidate transit signatures. The
challenges continue into flight operations, as the photometer and spacecraft have experienced aging and changes in
hardware performance over the course of time. The success of Kepler sets the stage for TESS, NASA’s next mission to
detect Earth’s closest cousins.
The Kepler spacecraft launched on March 7, 2009, initiating NASA's first search for Earth-size planets orbiting Sun-like
stars. Since launch, Kepler has announced the discovery of 17 exoplanets, including a system of six transiting a Sun-like
star, Kepler-11, and the first confirmed rocky planet, Kepler-10b, with a radius 1.4 times that of Earth. Kepler is proving to
be a cornucopia of discoveries: it has identified over 1200 candidate planets based on the first 120 days of observations,
including 54 that are in or near the habitable zone of their stars, and 68 that are 1.2 Earth radii or smaller. An astounding 408
of these planetary candidates are found in 170 multiple systems, demonstrating the compactness and flatness of planetary
systems composed of small planets. Never before has there been a photometer capable of reaching a precision near 20
ppm in 6.5 hours and capable of conducting nearly continuous and uninterrupted observations for months to years. In
addition to exoplanets, Kepler is providing a wealth of astrophysics, and is revolutionizing the field of asteroseismology.
Designing and building the Kepler photometer and the software systems that process and analyze the resulting data to make
the discoveries presented a daunting set of challenges, including how to manage the large data volume. The challenges
continue into flight operations, as the photometer is sensitive to its thermal environment, complicating the task of detecting
84 ppm drops in brightness corresponding to Earth-size planets transiting Sun-like stars.
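The ~84 ppm figure quoted above is just the geometric transit depth, (Rp/R★)², evaluated for Earth and Sun radii; a minimal check:

```python
# Transit depth = (Rp/Rs)**2; radii are standard published values.
R_EARTH_KM = 6371.0
R_SUN_KM = 695_700.0

def transit_depth_ppm(r_planet_km, r_star_km):
    """Fractional flux drop during a central transit, in parts per million."""
    return (r_planet_km / r_star_km) ** 2 * 1e6

depth = transit_depth_ppm(R_EARTH_KM, R_SUN_KM)
print(round(depth))  # ~84 ppm
```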
The Kepler Mission is designed to detect the 80 parts per million (ppm) signal from an Earth-Sun equivalent
transit. Such precision requires superb instrument stability on time scales up to 2 days and systematic error
removal to better than 20 ppm. The sole scientific instrument is the Photometer, a 0.95 m aperture Schmidt
telescope that feeds the 94.6 million pixel CCD detector array, which contains both Science and Fine Guidance
Sensor (FGS) CCDs. Since Kepler's launch in March 2009, we have been using the commissioning and science
operations data to characterize the instrument and monitor its performance. We find that the in-flight detector
properties of the focal plane, including bias levels, read noise, gain, linearity, saturation, FGS to Science crosstalk,
and video crosstalk between Science CCDs, are essentially unchanged from their pre-launch values. Kepler's
unprecedented sensitivity and stability in space have allowed us to measure both short- and long-term effects from
cosmic rays, see interactions of previously known image artifacts with starlight, and uncover several unexpected
systematics that affect photometric precision. Based on these results, we expect to attain Kepler's planned
photometric precision over 90% of the field of view.
The Kepler mission is designed to detect the transit of Earth-like planets around Sun-like stars by observing
100,000 stellar targets. Developing and testing the Kepler ground-segment processing system, in particular the
data analysis pipeline, requires high-fidelity simulated data. This simulated data is provided by the Kepler
End-to-End Model (ETEM). ETEM simulates the astrophysics of planetary transits and other phenomena, properties
of the Kepler spacecraft and the format of the downlinked data. Major challenges addressed by ETEM include
the rapid production of large amounts of simulated data, extensibility and maintainability.
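A toy version of what an end-to-end simulator like ETEM produces — periodic box-shaped transits plus white noise — can be sketched as follows (the box-transit and white-noise simplifications are assumptions; ETEM itself models far more):

```python
import numpy as np

# Toy light-curve simulator in the spirit of ETEM (highly simplified).
def simulate_light_curve(n_cadences, period, duration, depth, noise_ppm,
                         seed=None):
    """Return a normalized flux series with periodic box transits."""
    rng = np.random.default_rng(seed)
    t = np.arange(n_cadences)
    in_transit = (t % period) < duration       # box-shaped transit model
    flux = np.ones(n_cadences) - depth * in_transit
    flux += rng.normal(0.0, noise_ppm * 1e-6, n_cadences)
    return flux

# An Earth-like 84 ppm transit sampled at 20 ppm white noise.
lc = simulate_light_curve(2000, period=480, duration=13, depth=84e-6,
                          noise_ppm=20, seed=42)
print(lc.shape)
```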
In order for Kepler to achieve its required <20 ppm photometric precision for magnitude 12 and brighter stars,
instrument-induced variations in the CCD readout bias pattern (our "2D black image"), which are either fixed or slowly
varying in time, must be identified and the corresponding pixels either corrected or removed from further data
processing. The two principal sources of these readout bias variations are crosstalk between the 84 science CCDs and the
4 fine guidance sensor (FGS) CCDs and a high frequency amplifier oscillation on <40% of the CCD readout channels.
The crosstalk produces a synchronous pattern in the 2D black image with time-variation observed in <10% of individual
pixel bias histories. We will describe a method of removing the crosstalk signal using continuously-collected data from
masked and over-clocked image regions (our "collateral data"), and occasionally-collected full-frame images and
reverse-clocked readout signals. We use this same set to detect regions affected by the oscillating amplifiers. The
oscillations manifest as time-varying moiré patterns and rolling bands in the affected channels. Because this effect
reduces the performance in only a small fraction of the array at any given time, we have developed an approach for
flagging suspect data. The flags will provide the necessary means to resolve any potential ambiguity between
instrument-induced variations and real photometric variations in a target time series. We will also evaluate the
effectiveness of these techniques using flight data from background and selected target pixels.
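The use of over-clocked collateral regions to estimate and remove readout bias can be illustrated with a simple per-row subtraction (the array shapes and the median estimator here are illustrative assumptions; the flight pipeline's 2D black and crosstalk models are far more detailed):

```python
import numpy as np

# Illustrative sketch: estimate per-row bias from trailing over-clocked
# columns ("collateral data") and subtract it from the frame.
def remove_readout_bias(image, n_overclock_cols=12):
    """Subtract a per-row bias estimated from trailing overclock columns."""
    overclock = image[:, -n_overclock_cols:]
    row_bias = np.median(overclock, axis=1, keepdims=True)
    return image - row_bias

rng = np.random.default_rng(0)
# 1100 science columns plus 12 overclock columns, all riding on a
# 100-count bias level (made-up numbers).
frame = 100.0 + rng.normal(0, 1, (1044, 1112))
corrected = remove_readout_bias(frame)
print(corrected.shape)
```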
Hema Chandrasekaran, Jon Jenkins, Jie Li, Forrest Girouard, Joseph Twicken, Douglas Caldwell, Christopher Allen, Stephen Bryson, Todd Klaus, Miles Cote, Brett Stroozas, Jennifer Hall, Khadeejah Ibrahim
KEYWORDS: Stars, Space operations, Charge-coupled devices, Photometry, System on a chip, Calibration, Staring arrays, Electronics, Distance measurement, X band
The Kepler spacecraft is in a heliocentric Earth-trailing orbit, continuously observing ~160,000 select stars over ~115
square degrees of sky using its photometer containing 42 highly sensitive CCDs. The science data from these stars,
consisting of ~6 million pixels at 29.4-minute intervals, is downlinked only every ~30 days. Additional low-rate X-band
communications contacts are conducted with the spacecraft twice a week to downlink a small subset of the science
data. This paper describes how we assess and monitor the performance of the photometer and the pointing stability of the
spacecraft using such a sparse data set.
Jie Li, Christopher Allen, Stephen Bryson, Douglas Caldwell, Hema Chandrasekaran, Bruce Clarke, Jay Gunter, Jon Jenkins, Todd Klaus, Elisa Quintana, Peter Tenenbaum, Joseph Twicken, Bill Wohler, Hayley Wu
KEYWORDS: Stars, Photometry, Space operations, Data processing, Charge-coupled devices, Image compression, Motion measurement, System on a chip, Ka band, Detection and tracking algorithms
This paper describes the algorithms of the Photometer Performance Assessment (PPA) software component in the
Kepler Science Operations Center (SOC) Science Processing Pipeline. The PPA performs two tasks: One is to analyze
the health and performance of the Kepler photometer based on the long cadence science data down-linked via Ka band
approximately every 30 days. The second is to determine the attitude of the Kepler spacecraft with high precision at each
long cadence. The PPA component has demonstrated the capability to work effectively with the Kepler flight data.
The Kepler Mission is designed to continuously monitor up to 170,000 stars at a 30-minute cadence for 3.5 years
searching for Earth-size planets. The data are processed at the Science Operations Center at NASA Ames Research
Center. Because of the large volume of data and the memory needed, as well as the CPU-intensive nature of the
analyses, significant computing hardware is required. We have developed generic pipeline framework software that is
used to distribute and synchronize processing across a cluster of CPUs and provide data accountability for the resulting
products. The framework is written in Java and is, therefore, platform-independent. The framework scales from a single,
standalone workstation (for development and research on small data sets) to a full cluster of homogeneous or
heterogeneous hardware with minimal configuration changes. A plug-in architecture provides customized, dynamic
control of the unit of work without the need to modify the framework. Distributed transaction services provide for
atomic storage of pipeline products for a unit of work across a relational database and the custom Kepler DB. Generic
parameter management and data accountability services record parameter values, software versions, and other metadata
used for each pipeline execution. A graphical user interface allows for configuration, execution, and monitoring of
pipelines. The framework was developed for the Kepler Mission based on Kepler requirements, but the framework itself
is generic and could be used for a variety of applications where these features are needed.
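The plug-in unit-of-work idea described above can be sketched in Python, although the actual framework is written in Java; the class and method names here are illustrative assumptions:

```python
# Minimal sketch of a plug-in "unit of work" generator: the generator
# decides how the data set is partitioned, and workers process units
# independently. Names are illustrative, not the Kepler framework API.
class UnitOfWorkGenerator:
    """Plug-in that decides how a cadence range is partitioned."""
    def units(self, cadence_range, chunk):
        start, end = cadence_range
        return [(i, min(i + chunk, end)) for i in range(start, end, chunk)]

def run_pipeline(generator, process, cadence_range, chunk=1000):
    # In the real framework each unit would be dispatched to a cluster
    # node inside a distributed transaction; here we map serially.
    return [process(u) for u in generator.units(cadence_range, chunk)]

results = run_pipeline(UnitOfWorkGenerator(),
                       lambda u: f"calibrated cadences {u[0]}-{u[1]}",
                       (0, 4500))
print(len(results))  # 5 units
```

Swapping in a different generator changes how work is partitioned without touching the framework, which is the point of the plug-in architecture.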
KEYWORDS: Data modeling, Databases, Java, MATLAB, Photometry, System on a chip, Associative arrays, Mathematical modeling, Calibration, Instrument modeling
The Kepler Mission photometer is an unusually complex array of CCDs. A large number of time-varying
instrumental and systematic effects must be modeled and removed from the Kepler pixel data to produce light
curves of sufficiently high quality for the mission to be successful in its planet-finding objective. After the launch
of the spacecraft, many of these effects are difficult to remeasure frequently, and various interpolations over a
small number of sample measurements must be used to determine the correct value of a given effect at different
points in time. A library of software modules, called Focal Plane Characterization (FC) Models, is the element
of the Kepler Science Data Pipeline (hereafter "pipeline") that handles this. FC, or products generated by
FC, are used by nearly every element of the SOC processing chain. FC includes Java components: database
persistence classes, operations classes, model classes, and data importers; and MATLAB code: model classes,
interpolation methods, and wrapper functions. These classes, their interactions, and the database tables they
represent, are discussed. This paper describes how these data and the FC software work together to provide the
pipeline with the correct values to remove non-photometric effects caused by the photometer and its electronics
from the Kepler light curves. The interpolation mathematics is reviewed, as well as the special case of the
sky-to-pixel/pixel-to-sky coordinate transformation code, which incorporates a compound model that is unique
in the SOC software.
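The interpolation role FC plays — evaluating a sparsely sampled instrument model at an arbitrary time — reduces to something like the following (the quantity, sample epochs, and values are made-up assumptions):

```python
import numpy as np

# A focal-plane quantity measured at a few epochs, linearly interpolated
# to any cadence timestamp. Epochs and values are illustrative only.
sample_times = np.array([0.0, 30.0, 60.0, 90.0])        # days since launch
gain_samples = np.array([110.0, 109.8, 109.9, 109.7])   # e-/ADU (made up)

def gain_at(t):
    """Linearly interpolate the detector gain model at time t (days)."""
    return float(np.interp(t, sample_times, gain_samples))

print(gain_at(45.0))  # halfway between the 30- and 60-day samples
```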
Todd Klaus, Miles Cote, Sean McCauliff, Forrest Girouard, Bill Wohler, Christopher Allen, Hema Chandrasekaran, Stephen Bryson, Christopher Middour, Douglas Caldwell, Jon Jenkins
The Kepler Science Operations Center (SOC) is responsible for several aspects of the Kepler Mission, including
managing targets, generating onboard data compression tables, monitoring photometer health and status, processing
science data, and exporting Kepler Science Processing Pipeline products to the Multi-mission Archive at Space
Telescope [Science Institute] (MAST). We describe how the pipeline framework software developed for the Kepler
Mission is used to achieve these goals, including development of pipeline configurations for processing science data and
performing other support roles, and development of custom unit-of-work generators for controlling how Kepler data are
partitioned and distributed across the computing cluster. We describe the interface between the Java software that
manages data retrieval and storage for a given unit of work and the MATLAB algorithms that process the data. The data
for each unit of work are packaged into a single file that contains everything needed by the science algorithms, allowing
the files to be used to debug and evolve the algorithms offline.
The Kepler Mission focal plane contains 42 charge-coupled device (CCD) photodetectors. Each CCD is composed
of 2.2 million square pixels, 27 micrometers on a side, arranged in a grid of 2,200 columns by 1,044 rows. The
science goals of the Kepler Mission require that the position of each CCD be determined with an accuracy
of 0.1 pixels, corresponding to 2.7 micrometers or 0.4 seconds of arc, a level which is not achievable through
pre-flight metrology. We describe a technique for determining the CCD positioning using images of the Kepler
field of view (FOV) obtained in flight. The technique uses the fitted centroid row and column positions of 400
pre-selected stars on each CCD to obtain empirical polynomials which relate sky coordinates (right ascension and
declination) to chip coordinates (row and column). The polynomials are in turn evaluated to produce constraints
for a nonlinear model fit which directly determines the model parameters describing the location and orientation
of each CCD. The focal plane geometry characterization algorithm is itself embedded in an iterative process
which determines the focal plane geometry and the Pixel Response Function for each CCD in a self-consistent
manner. In addition to the fully automated calculation, a person-in-the-loop implementation was developed
to allow an initial determination of the geometry in the event of large misalignments, achieving a much looser
capture tolerance for more modest accuracy and reduced automation.
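The first step of the technique above — fitting empirical polynomials that relate sky coordinates to chip coordinates for a few hundred stars — can be sketched as a linear least-squares problem (this first-order affine version is an illustrative assumption; the real fit uses higher-order polynomials and a nonlinear CCD position/orientation model):

```python
import numpy as np

# Fit row = a0 + a1*ra + a2*dec (and likewise for column) from star
# centroids; a first-order stand-in for the empirical polynomials.
def fit_affine(ra, dec, row, col):
    A = np.column_stack([np.ones_like(ra), ra, dec])
    coeff_row, *_ = np.linalg.lstsq(A, row, rcond=None)
    coeff_col, *_ = np.linalg.lstsq(A, col, rcond=None)
    return coeff_row, coeff_col

# Synthetic "stars": a known affine mapping plus 0.01-pixel centroid noise.
rng = np.random.default_rng(1)
ra = rng.uniform(290, 295, 400)
dec = rng.uniform(40, 45, 400)
row = 10 + 100 * (ra - 290) + rng.normal(0, 0.01, 400)
col = 20 + 100 * (dec - 40) + rng.normal(0, 0.01, 400)
cr, cc = fit_affine(ra, dec, row, col)
print(cr[1], cc[2])  # both ≈ 100 pixels/degree, recovering the mapping
```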
Bruce Clarke, Christopher Allen, Stephen Bryson, Douglas Caldwell, Hema Chandrasekaran, Miles Cote, Forrest Girouard, Jon Jenkins, Todd Klaus, Jie Li, Chris Middour, Sean McCauliff, Elisa Quintana, Peter Tenenbaum, Joseph Twicken, Bill Wohler, Hayley Wu
The Kepler space telescope is designed to detect Earth-like planets around Sun-like stars using transit photometry by
simultaneously observing more than 100,000 stellar targets nearly continuously over a three-and-a-half year period. The
94.6-megapixel focal plane consists of 42 Charge-Coupled Devices (CCD), each containing two 1024 x 1100 pixel
arrays. Since cross-correlations between calibrated pixels are introduced by common calibrations performed on each
CCD, downstream data processing requires access to the calibrated pixel covariance matrix to properly estimate
uncertainties. However, the prohibitively large covariance matrices corresponding to the ~75,000 calibrated pixels per
CCD preclude calculating and storing the covariance in standard lock-step fashion. We present a novel framework used
to implement standard Propagation of Uncertainties (POU) in the Kepler Science Operations Center (SOC) data
processing pipeline. The POU framework captures the variance of the raw pixel data and the kernel of each subsequent
calibration transformation, allowing the full covariance matrix of any subset of calibrated pixels to be recalled on the fly
at any step in the calibration process. Singular Value Decomposition (SVD) is used to compress and filter the raw
uncertainty data as well as any data-dependent kernels. This combination of POU framework and SVD compression
allows the downstream consumer access to the full covariance matrix of any subset of the calibrated pixels which is
traceable to the pixel-level measurement uncertainties, all without having to store, retrieve, and operate on prohibitively
large covariance matrices. We describe the POU framework and SVD compression scheme and its implementation in the
Kepler SOC pipeline.
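The core of the POU bookkeeping — reconstructing the covariance of an arbitrary pixel subset from stored raw variances and calibration kernels, rather than storing the full matrix — can be sketched as follows (the matrix sizes are toy-scale assumptions, and the SVD compression of the kernels is omitted for brevity):

```python
import numpy as np

# Store only the raw variances and each calibration kernel; recall the
# covariance of any pixel subset on the fly as C = A @ C0 @ A.T.
def covariance_of_subset(raw_var, kernels, subset):
    """Covariance of the calibrated pixels indexed by `subset`."""
    A = np.eye(len(raw_var))
    for K in kernels:          # compose the calibration transformations
        A = K @ A
    A = A[subset, :]           # keep only the rows we were asked about
    return A @ np.diag(raw_var) @ A.T

raw_var = np.full(50, 4.0)     # raw pixel variances (toy values)
# One calibration step: a 3-point smoothing kernel correlating neighbors.
smooth = (np.eye(50) * 0.8 + np.eye(50, k=1) * 0.1 + np.eye(50, k=-1) * 0.1)
cov = covariance_of_subset(raw_var, [smooth], [10, 11])
print(cov.shape)  # (2, 2) — no 50x50 matrix was ever stored
```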
Hayley Wu, Joseph Twicken, Peter Tenenbaum, Bruce Clarke, Jie Li, Elisa Quintana, Christopher Allen, Hema Chandrasekaran, Jon Jenkins, Douglas Caldwell, Bill Wohler, Forrest Girouard, Sean McCauliff, Miles Cote, Todd Klaus
KEYWORDS: Planets, Stars, Technetium, Binary data, Motion models, System on a chip, Statistical modeling, Charge-coupled devices, Data centers, Statistical analysis
We present an overview of the Data Validation (DV) software component and its context within the Kepler Science
Operations Center (SOC) pipeline and overall Kepler Science mission. The SOC pipeline performs a transiting planet
search on the corrected light curves for over 150,000 targets across the focal plane array. We discuss the DV strategy for
automated validation of Threshold Crossing Events (TCEs) generated in the transiting planet search. For each TCE, a
transiting planet model is fitted to the target light curve. A multiple planet search is conducted by repeating the transiting
planet search on the residual light curve after the model flux has been removed; if an additional detection occurs, a
planet model is fitted to the new TCE. A suite of automated tests are performed after all planet candidates have been
identified. We describe a centroid motion test to determine the significance of the motion of the target photocenter
during transit and to estimate the coordinates of the transit source within the photometric aperture; a series of eclipsing
binary discrimination tests on the parameters of the planet model fits to all transits and the sequences of odd and even
transits; and a statistical bootstrap to assess the likelihood that the TCE would have been generated purely by chance
given the target light curve with all transits removed.
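One of the eclipsing-binary discrimination tests mentioned above, the odd/even transit-depth comparison, is easy to sketch: a binary masquerading as a planet at half its true period shows alternating deep and shallow events (the 3-sigma threshold and depth values below are illustrative assumptions):

```python
import numpy as np

# Compare the mean depths of odd- and even-numbered transits.
def odd_even_depth_test(depths, sigma):
    """Significance (in sigma) of the odd-vs-even mean depth difference."""
    odd, even = depths[::2], depths[1::2]
    diff = odd.mean() - even.mean()
    err = sigma * np.sqrt(1.0 / len(odd) + 1.0 / len(even))
    return abs(diff) / err

planet = np.array([84.0, 85.0, 83.0, 84.5])      # consistent depths (ppm)
binary = np.array([200.0, 80.0, 201.0, 79.0])    # alternating depths (ppm)
print(odd_even_depth_test(planet, sigma=5.0) < 3)   # consistent: passes
print(odd_even_depth_test(binary, sigma=5.0) > 3)   # alternating: flagged
```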
KEYWORDS: Planets, Space operations, Photometry, Detection and tracking algorithms, Commercial off the shelf technology, System on a chip, Charge-coupled devices, Sensors, Error analysis, Data centers
We describe the Presearch Data Conditioning (PDC) software component and its context in the Kepler Science
Operations Center (SOC) Science Processing Pipeline. The primary tasks of this component are to correct systematic and
other errors, remove excess flux due to aperture crowding, and condition the raw flux light curves for over 160,000 long
cadence (~thirty minute) and 512 short cadence (~one minute) stellar targets. Long cadence corrected flux light curves
are subjected to a transiting planet search in a subsequent pipeline module. We discuss science algorithms for long and
short cadence PDC: identification and correction of unexplained (i.e., unrelated to known anomalies) discontinuities;
systematic error correction; and removal of excess flux due to aperture crowding. We discuss the propagation of
uncertainties from raw to corrected flux. Finally, we present examples from Kepler flight data to illustrate PDC
performance. Corrected flux light curves produced by PDC are exported to the Multi-mission Archive at Space
Telescope [Science Institute] (MAST) and are made available to the general public in accordance with the NASA/Kepler
data release policy.
Elisa Quintana, Jon Jenkins, Bruce Clarke, Hema Chandrasekaran, Joseph Twicken, Sean McCauliff, Miles Cote, Todd Klaus, Christopher Allen, Douglas Caldwell, Stephen Bryson
KEYWORDS: Calibration, Charge-coupled devices, Data modeling, Space operations, Electronics, Data processing, Planets, Space telescopes, Stars, System on a chip
We present an overview of the pixel-level calibration of flight data from the Kepler Mission performed within the Kepler
Science Operations Center Science Processing Pipeline. This article describes the calibration (CAL) module, which
operates on original spacecraft data to remove instrument effects and other artifacts that pollute the data. Traditional
CCD data reduction is performed (removal of instrument/detector effects such as bias and dark current), in addition to
pixel-level calibration (correcting for cosmic rays and variations in pixel sensitivity), Kepler-specific corrections
(removing smear signals which result from the lack of a shutter on the photometer and correcting for distortions induced
by the readout electronics), and additional operations that are needed due to the complexity and large volume of flight
data. CAL operates on long (~30 min) and short (~1 min) sampled data, as well as full-frame images, and produces
calibrated pixel flux time series, uncertainties, and other metrics that are used in subsequent Pipeline modules. The raw
and calibrated data are also archived in the Multi-mission Archive at Space Telescope at the Space Telescope Science
Institute for use by the astronomical community.
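The Kepler-specific smear correction mentioned above exploits the fact that, with no shutter, the smear signal is common to every pixel in a column and is directly measurable in the masked smear rows; a minimal sketch (shapes and the per-column mean estimator are assumptions):

```python
import numpy as np

# Estimate the per-column smear from masked smear rows and subtract it.
def correct_smear(science, masked_smear_rows):
    """Subtract a per-column smear estimate from the science array."""
    smear = masked_smear_rows.mean(axis=0)    # one estimate per column
    return science - smear[np.newaxis, :]

science = np.full((1024, 1100), 500.0)
science[:, 10] += 30.0                        # a column smeared by a bright star
smear_rows = np.zeros((20, 1100))
smear_rows[:, 10] = 30.0                      # same smear seen in masked rows
clean = correct_smear(science, smear_rows)
print(clean[0, 10])  # back to 500.0
```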
We describe an algorithm which fits model planetary system parameters to light curves from Kepler Mission
target stars. The algorithm begins by producing an initial model of the system which is used to seed the fit,
with particular emphasis on obtaining good transit timing parameters. An attempt is then made to determine
whether the observed transits are more likely due to a planet or an eclipsing binary. In the event that the transits
are consistent with a transiting planet, an iterative fitting process is initiated: a wavelet-based whitening filter is
used to eliminate stellar variations on timescales long compared to a transit; a robust nonlinear fitter operating
on the whitened light curve produces a new model of the system; and the procedure iterates until convergence
upon a self-consistent whitening filter and planet model. The fitted transits are removed from the light curve and
a search for additional planet candidates is performed upon the residual light curve. The fitted models are used
in additional tests which identify false-positive planet detections: multiple planet candidates with near-identical
fitted periods, for example, are far more likely to be an eclipsing binary, while a model light curve that is
correlated with the star's centroid position may indicate a background eclipsing binary. Subtracting all model
planet candidates yields a light curve of pure noise and stellar variability, which can be used to assess the
probability that the planet candidates result from statistical fluctuations in the data.
Christopher Middour, Todd Klaus, Jon Jenkins, David Pletcher, Miles Cote, Hema Chandrasekaran, Bill Wohler, Forrest Girouard, Jay Gunter, Kamal Uddin, Christopher Allen, Jennifer Hall, Khadeejah Ibrahim, Bruce Clarke, Jie Li, Sean McCauliff, Elisa Quintana, Jeneen Sommers, Brett Stroozas, Peter Tenenbaum, Joseph Twicken, Hayley Wu, Doug Caldwell, Stephen Bryson, Paresh Bhavsar, Michael Wu, Brian Stamper, Terry Trombly, Christopher Page, Elaine Santiago
KEYWORDS: Space operations, Photometry, System on a chip, Data processing, Stars, Calibration, Charge-coupled devices, Data modeling, Software development, Planets
We give an overview of the operational concepts and architecture of the Kepler Science Processing Pipeline. Designed,
developed, operated, and maintained by the Kepler Science Operations Center (SOC) at NASA Ames Research Center,
the Science Processing Pipeline is a central element of the Kepler Ground Data System. The SOC consists of an office at
Ames Research Center, software development and operations departments, and a data center which hosts the computers
required to perform data analysis. The SOC's charter is to analyze stellar photometric data from the Kepler spacecraft
and report results to the Kepler Science Office for further analysis. We describe how this is accomplished via the Kepler
Science Processing Pipeline, including the hardware infrastructure, scientific algorithms, and operational procedures. We
present the high-performance, parallel computing software modules of the pipeline that perform transit photometry,
pixel-level calibration, systematic error correction, attitude determination, stellar target management, and instrument
characterization. We show how data processing environments are divided to support operational processing and test
needs. We explain the operational timelines for data processing and the data constructs that flow into the Kepler Science
Processing Pipeline.
Stephen Bryson, Jon Jenkins, Todd Klaus, Miles Cote, Elisa Quintana, Jennifer Hall, Khadeejah Ibrahim, Hema Chandrasekaran, Douglas Caldwell, Jeffrey Van Cleve, Michael Haas
KEYWORDS: Stars, Signal to noise ratio, Charge-coupled devices, Space operations, Calibration, Diagnostics, Photometry, Detection and tracking algorithms, Target detection, Data centers
The Kepler mission monitors ~165,000 stellar targets using 42 CCDs of 2200 × 1024 pixels each. Onboard storage
and bandwidth constraints prevent the storage and downlink of all 96 million pixels per 30-minute cadence, so
the Kepler spacecraft downlinks a specified collection of pixels for each target. These pixels are selected by
considering the object brightness, the background, and the signal-to-noise in each pixel, and by maximizing the
signal-to-noise ratio of the target. This paper describes pixel selection, creation of spacecraft apertures that efficiently
capture selected pixels, and aperture assignment to a target. Engineering apertures, short-cadence targets and
custom-specified shapes are discussed.
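The selection rule described above, adding pixels while the aggregate signal-to-noise ratio of the aperture keeps improving, can be sketched as a simple greedy routine. This is a hypothetical illustration: the flight pixel tables were built from detailed models of target brightness, background, and crowding.

```python
import math

def select_pixels(signal, noise):
    """Greedy sketch of SNR-driven pixel selection: rank pixels by their
    individual SNR, then accumulate pixels while the combined aperture SNR
    (summed signal over root-sum-square noise) still improves."""
    order = sorted(range(len(signal)),
                   key=lambda i: signal[i] / noise[i], reverse=True)
    chosen, s, n2, best = [], 0.0, 0.0, 0.0
    for i in order:
        snr = (s + signal[i]) / math.sqrt(n2 + noise[i] ** 2)
        if snr <= best:
            break  # adding this pixel would dilute the aperture SNR
        chosen.append(i)
        s, n2, best = s + signal[i], n2 + noise[i] ** 2, snr
    return sorted(chosen)
```

With per-pixel signals of 100, 50, and 1 electrons against a uniform noise of 10 electrons, the routine keeps the two bright pixels and rejects the faint one, since the faint pixel adds more noise than signal.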
KEYWORDS: Stars, Planets, Wavelets, Signal processing, Linear filtering, Electronic filtering, Data modeling, Digital filtering, Gaussian filters, Bandpass filters
The Kepler Mission simultaneously measures the brightness of more than 160,000 stars every 29.4 minutes over a 3.5-year
mission to search for transiting planets. Detecting transits is a signal-detection problem where the signal of interest is a
periodic pulse train and the predominant noise source is a non-white, non-stationary (1/f)-type process of stellar variability.
Many stars also exhibit coherent or quasi-coherent oscillations. The detection algorithm first identifies and removes strong
oscillations, then applies an adaptive, wavelet-based matched filter. We discuss how we obtain super-resolution detection
statistics and the effectiveness of the algorithm for Kepler flight data.
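The core of a matched-filter search is easy to state for the idealized case: once the light curve has been whitened to unit-variance white noise, the detection statistic is the correlation of the data with the transit template, normalized by the template norm, which reads directly in units of sigma. The sketch below shows only that single-event statistic; it is not the Kepler transiting planet search implementation, which performs the whitening adaptively in the wavelet domain and folds statistics over trial periods.

```python
import math

def detection_statistic(x, template):
    """Single-event matched-filter statistic for a whitened light curve x
    with unit-variance white noise: corr(x, s) / ||s||, in sigma."""
    num = sum(xi * si for xi, si in zip(x, template))
    return num / math.sqrt(sum(si * si for si in template))
```

For a four-sample box transit template of depth 1 and data containing the same box at depth 2, the statistic is 8 / sqrt(4) = 4 sigma.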
KEYWORDS: Calibration, Photometry, Detection and tracking algorithms, Charge-coupled devices, Planets, Data storage, Space operations, System on a chip, Target detection, Digital filtering
We describe the Photometric Analysis (PA) software component and its context in the Kepler Science Operations Center
(SOC) Science Processing Pipeline. The primary tasks of this module are to compute the photometric flux and
photocenters (centroids) for over 160,000 long cadence (~thirty minute) and 512 short cadence (~one minute) stellar
targets from the calibrated pixels in their respective apertures. We discuss science algorithms for long and short cadence
PA: cosmic ray cleaning; background estimation and removal; aperture photometry; and flux-weighted centroiding. We
discuss the end-to-end propagation of uncertainties for the science algorithms. Finally, we present examples of
photometric apertures, raw flux light curves, and centroid time series from Kepler flight data. PA light curves, centroid
time series, and barycentric timestamp corrections are exported to the Multi-mission Archive at Space Telescope
Science Institute (MAST) and are made available to the general public in accordance with the NASA/Kepler data
release policy.
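The flux-weighted centroiding step described above is, at its core, the first moment of the background-removed pixel values in the aperture. The sketch below shows that moment calculation only, with hypothetical input structure; the PA module additionally propagates uncertainties through this computation.

```python
def flux_weighted_centroid(pixels):
    """Flux-weighted centroid over an aperture.

    pixels: list of (row, col, flux) tuples with background already removed.
    Returns the (row, col) first moment of the flux distribution."""
    total = sum(f for _, _, f in pixels)
    row = sum(r * f for r, _, f in pixels) / total
    col = sum(c * f for _, c, f in pixels) / total
    return row, col
```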
KEYWORDS: Stars, Planets, Space operations, Photometry, Charge-coupled devices, Data centers, System on a chip, Aerospace engineering, Space telescopes, Signal to noise ratio
The Kepler Mission is a search for terrestrial planets specifically designed to detect Earth-size planets in the habitable zones of solar-like stars. In addition, the mission has a broad detection capability for a wide range of planetary sizes, planetary orbits and spectral types of stars. The mission is in the midst of the developmental phase with good progress leading to the preliminary design review later this year. Long lead procurements are well under way. An overview in all areas is presented including both the flight system (photometer and spacecraft) and the ground system. Launch is on target for 2007 on a Delta II.
The Kepler Mission is designed to characterize the frequency of Earth-sized planets in the habitable zones of solar-like stars in the solar galactic neighborhood by observing >100,000 main-sequence stars in a >100 square degree field of view (FOV) and seeking evidence of transiting planets. As part of the system engineering effort, we have developed an End-To-End Model (ETEM) of the photometer to better characterize the expected performance of the instrument and to guide us in making design trades. This model incorporates engineering information such as the point spread function, time histories of pointing offsets, operating temperature, quantization noise, the effects of shutterless readout, and read noise. Astrophysical parameters, such as a realistic distribution of stars vs. magnitude for the chosen FOV, zodiacal light, and cosmic ray events are also included. For a given set of design and operating parameters, ETEM generates pixel time series for all pixels of interest for a single CCD channel of the photometer. These time series are then processed to form light curves for the target stars and the impact of various noise sources on the combined differential photometric precision can be determined. This model is of particular value when investigating the effects of noise sources that cannot be easily subjected to direct analysis, such as residual pointing offsets, thermal drift or cosmic ray effects. This version of ETEM features extremely efficient computation times relative to the previous version while maintaining a high degree of fidelity with respect to the realism of the relevant phenomena.
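At its simplest, the kind of pixel time series ETEM produces can be caricatured as photoelectron counts with shot noise plus Gaussian read noise per cadence. The toy below is a hypothetical minimal stand-in to convey the idea; ETEM itself also models the point spread function, pointing history, thermal drift, smear from shutterless readout, cosmic rays, and the other effects listed above.

```python
import random

def simulate_pixel_series(mean_e, read_noise_e, n_cadences, seed=0):
    """Toy pixel time series: per-cadence photoelectrons drawn from a
    Gaussian approximation to Poisson shot noise, plus Gaussian read
    noise. Units are electrons. Deterministic for a fixed seed."""
    rng = random.Random(seed)
    series = []
    for _ in range(n_cadences):
        shot = rng.gauss(mean_e, mean_e ** 0.5)  # sqrt(N) shot noise
        series.append(shot + rng.gauss(0.0, read_noise_e))
    return series
```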
KEYWORDS: Systems engineering, Stars, Performance modeling, Space operations, Photometry, Signal to noise ratio, Data modeling, Data acquisition, Planets, Error analysis
The Kepler mission will launch in 2007 and determine the distribution of earth-size planets (0.5 to 10 earth masses) in the habitable zones (HZs) of solar-like stars. The mission will monitor > 100,000 dwarf stars simultaneously for at least 4 years. Precision differential photometry will be used to detect the periodic signals of transiting planets. Kepler will also support asteroseismology by measuring the pressure-mode (p-mode) oscillations of selected stars. Key mission elements include a spacecraft bus and a 0.95-meter, wide-field, CCD-based photometer injected into an earth-trailing heliocentric orbit by a 3-stage Delta II launch vehicle, as well as a distributed Ground Segment and Follow-up Observing Program. The project is currently preparing for Preliminary Design Review (October 2004) and is proceeding with detailed design and procurement of long-lead components. In order to meet the unprecedented photometric precision requirement and to ensure a statistically significant result, the Kepler mission involves technical challenges in the areas of photometric noise and systematic error reduction, stability, and false-positive rejection. Programmatic and logistical challenges include the collaborative design, modeling, integration, test, and operation of a geographically and functionally distributed project. A very rigorous systems engineering program has evolved to address these challenges. This paper provides an overview of the Kepler systems engineering program, including some examples of our processes and techniques in areas such as requirements synthesis, validation & verification, system robustness design, and end-to-end performance modeling.
KEYWORDS: Planets, Stars, Photometry, Signal to noise ratio, Point spread functions, Planetary systems, Space operations, Data archive systems, Space telescopes, Sun
NASA's Kepler Mission is designed to determine the frequency of Earth-size and larger planets in the habitable zone of solar-like stars. It uses transit photometry from space to determine planet size relative to its star and orbital period. From these measurements, and those of complementary ground-based observations of planet-hosting stars, and from Kepler's third law, the actual size of the planet, its position relative to the habitable zone, and the presence of other planets can be deduced. The Kepler photometer is designed around a 0.95 m aperture wide field-of-view (FOV) Schmidt type telescope with a large array of CCD detectors to continuously monitor 100,000 stars in a single FOV for four years. To detect terrestrial planets, the photometer uses differential relative photometry to obtain a precision of 20 ppm for 12th magnitude stars. The combination of the number of stars that must be monitored to get a statistically significant estimate of the frequency of Earth-size planets, the size of Earth with respect to the Sun, the minimum number of photoelectrons required to recognize the transit signal while maintaining a low false-alarm rate, and the areal density of target stars of differing brightness are all critical to the photometer design.
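The two relations the paragraph above relies on are simple: a central transit dims the star by the squared radius ratio, and Kepler's third law converts the measured orbital period into a semi-major axis once the stellar mass is known from ground-based follow-up. A minimal sketch in solar units (helper names are illustrative):

```python
def transit_depth(r_planet_km, r_star_km):
    """Fractional flux drop of a central transit: (Rp / Rs)^2."""
    return (r_planet_km / r_star_km) ** 2

def semimajor_axis_au(period_years, star_mass_solar=1.0):
    """Kepler's third law in solar units: a[AU] = (M * P^2)^(1/3)."""
    return (star_mass_solar * period_years ** 2) ** (1.0 / 3.0)
```

For the Earth (radius 6371 km) crossing the Sun (radius 695,700 km) the depth is about 8.4 × 10⁻⁵, or 84 ppm, which is why the photometer must reach 20 ppm precision on 12th magnitude stars.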
KEYWORDS: Stars, Planets, Point spread functions, Signal to noise ratio, Photometry, Charge-coupled devices, Computer simulations, Space operations, Data modeling, Quantum efficiency
The objective of the NASA Ames Kepler mission is the detection of extrasolar terrestrial-size planets through transit photometry. In an effort to optimize the Kepler system design, Ball Aerospace has developed a numerical photometer model to simulate the sensor as well as stars and hypothetical planetary transits. The model emulates the temporal behavior of the incident light from 100 stars (with various visual magnitudes) on one CCD of the Kepler focal plane array. Simulated transits are inserted into the light curves of the stars for transit detection signal-to-noise ratio analyses. The Kepler photometer model simulates all significant CCD characteristics such as dark current, shot noise, readout noise, residual non-uniformity, intrapixel gain variation, charge spillover, well capacity, spectral response, charge transfer efficiency, readout smearing, and others. The noise effects resulting from background stars are also considered. The optical system is also simulated to accurately estimate system optical point spread functions and optical attenuation. In addition, spacecraft pointing and jitter are incorporated. The model includes on-board processing effects such as analog-to-digital conversion, photometric aperture extraction, and 15-minute frame co-addition. Results from the model exhibit good agreement with NASA Ames lab data and are used in subsequent signal-to-noise ratio analyses to assess the transit detection capability. The reported simulations are run using system requirements rather than predicted performance to guarantee that mission science objectives can be attained. The Kepler Photometer Model has given substantial insight into the Kepler system design by offering a straightforward means of assessing system design impacts on the ability to detect planetary transits. It is used as one of the various tools for the establishment of system requirements to ensure mission success.
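Several of the CCD noise terms listed above combine in the standard per-pixel root-sum-square budget: Poisson shot noise on the star, sky, and dark current, plus Gaussian read noise. The textbook sketch below shows that combination only; it is not the Ball photometer model, which simulates each effect explicitly.

```python
import math

def pixel_noise_e(signal_e, dark_e, sky_e, read_e):
    """Per-pixel noise in electrons: shot noise on star + sky + dark
    (variance equals the electron count for Poisson processes) combined
    in quadrature with Gaussian read noise."""
    return math.sqrt(signal_e + dark_e + sky_e + read_e ** 2)
```

For 10,000 star electrons and negligible other terms the noise is 100 electrons, i.e. the expected sqrt(N) shot-noise floor.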
KEYWORDS: Stars, Photometry, Signal to noise ratio, Space operations, Charge-coupled devices, Point spread functions, Numerical simulations, Planets, Image processing, Control systems
We have performed end-to-end laboratory and numerical simulations to demonstrate the capability of differential photometry under realistic operating conditions to detect transits of Earth-sized planets orbiting solar-like stars. Data acquisition and processing were conducted using the same methods planned for the proposed Kepler Mission. These included performing aperture photometry on large-format CCD images of an artificial star field obtained without a shutter at a readout rate of 1 megapixel/sec, detecting and removing cosmic rays from individual exposures, and making the necessary corrections for nonlinearity and shutterless operation in the absence of darks. We will discuss the image processing tasks performed 'on-board' the simulated spacecraft, which yielded raw photometry and ancillary data used to monitor and correct for systematic effects, and the data processing and analysis tasks conducted to obtain lightcurves from the raw data and characterize the detectability of transits. The laboratory results are discussed along with the results of a numerical simulation carried out in parallel with the laboratory simulation. These two simulations demonstrate that a system-level differential photometric precision of 10⁻⁵ on five-hour intervals can be achieved under realistic conditions.
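Quoting precision on five-hour intervals, roughly a transit duration, makes sense because white noise averages down as 1 / sqrt(N) when individual measurements are binned. A one-line sketch of that scaling (the 10⁻⁵ result above also required controlling systematics that do not bin down this way):

```python
import math

def binned_precision(sigma_per_sample, samples_per_bin):
    """White-noise scaling of photometric precision with binning:
    sigma_bin = sigma_per_sample / sqrt(N)."""
    return sigma_per_sample / math.sqrt(samples_per_bin)
```

For example, 100 samples at 10⁻⁴ per-sample precision bin down to 10⁻⁵.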
KEYWORDS: Stars, Charge-coupled devices, Planets, Photometry, Point spread functions, Space operations, Signal to noise ratio, Interference (communication), Cameras, Received signal strength
The thirty or so extrasolar planets that have been discovered to date are all about as large as Jupiter or larger. Finding Earth-size planets is a substantially more difficult task. We propose the use of space-based differential photometry to detect the periodic changes in brightness of several hours' duration caused by planets transiting their parent stars. The change in brightness for a Sun-Earth analog transit is 8 × 10⁻⁵. We describe the instrument and mission concepts that will monitor 100,000 main-sequence stars and detect on the order of 500 Earth-size planets, if terrestrial planets are common in the extended solar neighborhood.
KEYWORDS: Planets, Stars, Photometry, Space operations, Charge-coupled devices, Exoplanets, Analog electronics, Sun, Signal to noise ratio, Planetary systems
With the detection of giant extrasolar planets and the quest for life on Mars, there is heightened interest in finding earth-class planets, those that are less than ten earth masses and might be life-supporting. A space-based photometer has the ability to detect the periodic transits of earth-class planets for a wide variety of spectral types of stars. From the data and known type of host star, the orbital semi-major axis, size, and characteristic temperature of each planet can be calculated. The frequency of planet formation with respect to spectral type and occurrence for both single and multiple-stellar systems can be determined. A description is presented of a one-meter aperture photometer with a twelve-degree field of view and a focal plane of 21 CCDs. The photometer would continuously and simultaneously monitor 160,000 stars of visual magnitude ≤ 14. Its one-sigma system sensitivity for a transit of a 12th magnitude solar-like star by a planet of one earth radius would be one part in 50,000. It is anticipated that about 480 earth-class planets would be detected, along with 140 giant planets in transit and 1400 giant planets by reflected light. Densities could be derived for about seven cases where the planet is seen in transit and radial velocities are measurable.