We present an algorithm, effective over a broad range of planet populations and instruments, for optimizing integration times of an exoplanet direct imaging observation schedule to maximize the number of unique exoplanet detections under realistic mission constraints. Our planning process uses “completeness” as a reward metric and the nonlinear combination of optimal integration time per target and constant overhead time per target as a cost metric constrained by a total mission time. We validate our planned target list and integration times for a specific telescope by running a Monte Carlo ensemble of full mission simulations using EXOSIMS, a code base for simulating telescope survey missions. These simulations encapsulate dynamic details such as time-varying local zodiacal light for each star, planet keep-out regions, exoplanet positions, and strict enforcement of observatory use over time. We test our methods on the Wide-Field Infrared Survey Telescope (WFIRST) coronagraphic instrument (CGI). We find that planet, Sun, and solar panel keep-out regions limit some targets' per-annum visibility to below 28% and that the mean local zodiacal light flux for optimally scheduled observations is 22.79 mag arcsec⁻². Both of these values are more pessimistic than previous approximations and impact the simulated mission yield. We find that the WFIRST CGI detects 5.48 ± 0.17 and 16.26 ± 0.51 exoplanets, on average, when observing two different planet populations based on Kepler Q1–Q6 data and the full Kepler data release, respectively. Optimizing our planned observations using completeness derived from the more pessimistic planet population (in terms of overall planet occurrence rates) results in a more robust yield than optimization based on the more optimistic planet population. We also find that optimization based on the more pessimistic population results in more small-planet detections than optimization with the more optimistic population.
1. Introduction

The 2010 astronomy and astrophysics decadal survey highly prioritized exoplanet bulk population statistics and inventorying planets around nearby stars (within 30 pc).1 The Wide-Field Infrared Survey Telescope (WFIRST),2 prioritized by the 2010 decadal survey, will include a coronagraphic instrument (CGI) capable of directly imaging and detecting new exoplanets unobservable by modern radial velocity or transit techniques. The expected performance of CGI in blind search surveys can be estimated through probabilistic methods using completeness.3,4 An alternative method is to execute a Monte Carlo of full survey simulations on simulated universes. This process creates an ensemble of design reference missions (DRMs) containing a list of target stars observed, the integration time used for each star, when the simulated observations occurred, and the simulated outcome of each observation. Such a collection of DRMs, produced by our method, effectually certifies the ability of the instrument to make the expected number of detection observations claimed in a probabilistic evaluation such as completeness. It is important to note that both methods are still equally limited in their overall prediction accuracy by the assumptions made about the true population of exoplanets to be discovered. Detailed DRMs enable requirement definition and design iteration optimization for future telescopes, including the large-scale mission concepts under development by science and technology definition teams for NASA's 2020 decadal survey. Both the Habitable Exoplanet Observatory (HabEx)5 and the Large UV-Optical-Infrared Surveyor (LUVOIR)6 mission concepts contain a significant exoplanet direct imaging component, with HabEx reserving 1.95 years for coronagraph science operations and LUVOIR reserving 50% of the total mission time for exoplanet science.
While target revisits and spectral characterizations could represent a substantial portion of the executed mission, we purposefully omit optimization with revisits or characterizations due to the complexity of that problem. We do include the rare spectral characterization in our WFIRST mission simulations. In our optimization process, we focus solely on delivering an estimate of the maximum possible number of uniquely detected exoplanets under realistic mission constraints by evaluating the number of detections made through single-visit observations of stars, henceforth referred to as yield. We leave the inclusion of revisits, orbit characterizations, and spectral characterizations in full survey optimization for future work. Brown, in Ref. 3, used single-visit completeness to estimate the number of extrasolar planets potentially discoverable with the Terrestrial Planet Finder, an earlier direct imaging mission concept. Single-visit completeness, hereafter referred to as completeness, is the probability of detecting a planet, drawn from a particular population, using a particular instrument, should one exist about a given target star.3,4,7 When completeness is evaluated for each star in a full target list, summed, and scaled by the expected number of planets from that population per star, we arrive at the expected number of planets to be detected from that population by that mission.8 While this technique can be used to quickly evaluate a mission's performance, it neglects temporal constraints and uncertainties, such as target visibility, variable overhead times, changing local zodiacal light intensity, and unscheduled characterizations of newly detected planets. Solely using completeness to evaluate a mission can therefore only provide an upper bound for expected performance. Completeness has previously been used as a reward metric for multiple observation integration time optimizations.
Brown demonstrated a method for finding the trade-off point between the group limiting Δmag (difference in brightness between the planet and star in magnitudes) and the number of targets by optimizing a target list subset against the number of targets in that subset, assuming different fixed mission times.3 While this method approximates the reward gradient for achieving a specific group limiting Δmag, it overlooks the gain made by customizing the limiting Δmag for each individual star. Hunyadi et al., in Ref. 9, advanced Brown's work by maximizing summed completeness over all targets assuming a fixed mission time and using integration times as the decision variables. In this approach, star integration times are optimized to equivalent slopes beyond the completeness-versus-log-integration-time inflection point, and the targets with the highest completeness per integration time are then selected. To practically achieve this, the authors of that study discretized integration times into 1-h increments and calculated completeness values for each integration time. Their final target list contained the set of highest completeness-per-integration-time targets. Hunyadi et al. specifically investigated Earth-analog planets in the habitable zone (as also done in Ref. 3) but also explored Jupiter and Saturn analogs. Alternatively, a limiting search observation, as defined by Brown,10 would observe each target for the fixed exposure time sufficient to achieve the system's limiting planet–star magnitude difference. The creation of the final observation schedule in Ref. 10 involved selecting the subset of targets with precalculated integration times and overhead time per observation that fit within the total observing time. While Brown implemented target revisits, which can improve yield, the use of precalculated integration times makes the resulting target list suboptimal.
While optimization of integration times and inclusion of revisits mark improvements in yield and target list planning realism, the omission of overhead times and the discretization of integration times limit the ability to practically implement the desired observations within a finite span of time and under dynamic mission constraints. A refinement of Brown and Hunyadi's work—altruistic yield optimization (AYO)4—uses completeness versus integration time as a figure of merit, as in Ref. 10, to incrementally sacrifice stars from a target list and reallocate each sacrificed star's integration time to the star with the greatest marginal completeness gain, in small fixed increments. At its core, this represents a form of “greedy optimization” which incrementally converges each observation to a constant slope similar to that described in Ref. 9. This method does not allow for the reintroduction of target stars to the target list, a necessity of a dynamically evolving mission schedule. Finally, the original AYO method does not fully account for overhead times in the calculation of integration times but rather states that the addition of overhead time can occur after the fact; the use of a finite increment parameter also results in a loss of potential completeness. None of these yield optimization processes use optimization with continuous integration times or test the ability to schedule observations via full mission simulations accounting for mission elapsed time (MET). Brown stressed that Monte Carlo simulations of the mission as a whole should be used to produce confidence in the proposed mission's integrity.3 Brown's work seeded the founding pillars of our well book-kept full mission survey simulator, whose tracked parameters are enumerated below.
The EXOSIMS11 code base was specifically developed to book-keep these parameters across MET. In EXOSIMS and this paper, we account for the locations of individual exoplanets around target stars versus MET, the tracking of our observatory on a nominal L2 halo orbit12 versus MET, solar system body locations versus MET from ephemerides, keep-out occlusion of target stars versus MET, and changes in local zodiacal light intensity versus MET; EXOSIMS also possesses the capability to account for cordoned-off observing blocks reserved for other instruments at varying MET and for the portion of mission life reserved for observatory science. This paper describes our process for producing a set of planned observations maximizing unique exoplanet detection yield and subsequently validating this prediction under realistic mission conditions. The observation planning process incorporates a combination of filters applied to a planet population and star catalog, described in Secs. 2.1 and 2.2, completeness calculations detailed in Sec. 2.3, and our optimization algorithms with binary-integer programming (BIP)13 and sequential least squares quadratic programming (SLSQP),14 discussed in Sec. 2.4. Our validation process is outlined in Sec. 2.5, where we discuss the framework of EXOSIMS11,15–17 survey simulation as well as the incorporation of time-varying keep-out regions (Sec. 2.5.1), time-varying local zodiacal light noise (Sec. 2.5.2), and the convergence of our Monte Carlo simulation (Sec. 2.5.4). We then show practical application on WFIRST in Sec. 3, where we discuss attributes of the observing plan and a comparison of the input planet populations versus instrument capabilities versus the detected planet population. We also include instrument and fit file parameters18 in Sec. 7 and a full optimized target list in Sec. 8.
2. Methods

2.1. Planet Populations

To calculate completeness for each target for a specific instrument, we first generate a joint probability density function of Δmag and planet–star separation projected onto the image plane, s, using Monte Carlo methods as in Ref. 3 for an assumed planet population. We use two planet populations in this paper: one derived from Kepler's detections from Q1–Q6 data19 (Kepler-like)20 and another derived from NASA's Exoplanet Exploration Program Analysis Group (ExoPAG) Study Analysis Group 13 (SAG13).21,22 The Kepler-like and SAG13 populations both share the same eccentricity and albedo distributions but differ in their occurrence rates, semimajor axis, and planetary radii distributions. The final output necessary for calculating completeness is the joint probability distribution of s and Δmag, which is found by sampling the respective planet population and fitting a rectilinear bivariate spline. We use the same sampling methods and distributions both for calculating the joint probability distributions and for generating planets in our simulated universes.

2.1.1. Kepler-like planetary semimajor axis and radii

For the Kepler-like planet population, we adopt a power-law distribution for semimajor axis (a) modified from the power-law fit in Ref. 23, with parameters adopted from Ref. 24. In this model, we include an exponential decline in semimajor axis past a “semimajor axis knee” (a_knee). This “knee” is motivated by a lack of giant planet discoveries by the Gemini Planet Imager, which indicates that the RV-based power-law planet occurrence rate from Ref. 23 approaches 0 between 10 and 100 AU.25 Using giant planet radial velocity data, Ref. 26 fit many period break points to the giant planet occurrence rate power-law model. We assume an intermediate value for the knee semimajor axis a_knee.
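A minimal sketch of drawing semimajor axes from a power law with a decay past a knee is given below, using rejection sampling against the un-decayed power law. The index, bounds, knee value, and the Gaussian form of the decay are all placeholders for illustration, not the fitted parameters or exact functional form used here:

```python
import math
import random

def sample_semimajor_axis(n, beta=0.61, a_min=0.1, a_max=30.0, a_knee=10.0, seed=0):
    """Rejection-sample a ~ a**(beta - 1) * exp(-(a / a_knee)**2) on [a_min, a_max].

    The envelope is the bare power law, drawn by inverse transform; the
    exponential "knee" decay is applied as the acceptance probability.
    """
    rng = random.Random(seed)
    samples = []
    while len(samples) < n:
        u = rng.random()
        # inverse CDF of the power law a**(beta - 1) on [a_min, a_max]
        a = (u * (a_max**beta - a_min**beta) + a_min**beta) ** (1.0 / beta)
        # accept with probability equal to the knee decay factor (always <= 1)
        if rng.random() < math.exp(-((a / a_knee) ** 2)):
            samples.append(a)
    return samples
```

The same accept/reject structure works for any monotone decay factor past the knee; only the acceptance expression changes.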
The normalization factor is given by integrating the non-normalized distribution over a specific semimajor axis range, chosen based on the paucity of wide-separation planets discovered to date. The per-simulation average distribution of a is shown in the histogram above Fig. 1(a). Since the closest target list star has the smallest observable planet–star separation, WFIRST's inner working angle (IWA) of 0.15 arcsec means that the smallest planet–star separation observable by WFIRST is 0.394 AU. By assuming a maximum planet eccentricity, we obtain the smallest observable semimajor axis from this separation. We base our Kepler-like planetary radii (R_p) on Fig. 7 in Ref. 19, whose bin counts we take as our baseline radius occurrence rates. These bins are based solely on data from planets with periods between 0.8 and 85 days. To properly tune the overall Kepler-like planet occurrence rates starting from these bin counts, we first use Eq. (2) evaluated between the semimajor axes corresponding to periods of 0.8 and 85 days around a Sun-mass star. We then scale the bin counts by Eq. (2) to get the adjusted planetary radii occurrence rates. We multiply the last five bins by 2.5 to account for longer orbital baseline data and to more closely match the larger-period orbit distributions available from radial velocity surveys at the time this distribution was first derived.23,27 Even with this multiplication, the right-hand side histogram of Fig. 1(a) shows the characteristic drop-off in planetary radius clearly observable in Fig. 7 of Ref. 19. The Kepler-like planet occurrence rate per star is computed over the specific a and R_p ranges discussed in this section. When we simulate a universe of Kepler-like planets, we first calculate the number of planets to generate for each planet radius bin, allocating the total number of generated planets in proportion to each bin's share of the overall occurrence rate.
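Two pieces of the arithmetic above can be made concrete: the small-angle conversion from IWA to physical separation, the apoapsis argument for the smallest observable semimajor axis, and the proportional allocation of generated planets across radius bins. The 2.63 pc distance and the bin rates below are hypothetical inputs for illustration; the 0.15 arcsec IWA is from the text:

```python
def smallest_separation_au(iwa_arcsec, dist_pc):
    """Small-angle relation: s [AU] = theta [arcsec] * d [pc]."""
    return iwa_arcsec * dist_pc

def smallest_semimajor_axis_au(s_min_au, e_max):
    """An orbit with a = s_min / (1 + e_max) can still reach the separation
    s_min at apoapsis, so this is the smallest observable semimajor axis."""
    return s_min_au / (1.0 + e_max)

def planets_per_bin(n_total, bin_rates):
    """Split n_total generated planets across radius bins in proportion to
    each bin's occurrence rate (bin_rates need not be normalized)."""
    eta = sum(bin_rates)
    return [round(n_total * r / eta) for r in bin_rates]

# A hypothetical 2.63 pc target at the 0.15 arcsec IWA:
# smallest_separation_au(0.15, 2.63) gives roughly 0.394 AU.
# Illustrative allocation: planets_per_bin(1000, [0.5, 0.3, 0.2]) -> [500, 300, 200]
```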
In our Monte Carlo completeness calculations, we generate a large number of planets. We take samples from a log-uniform distribution over each bin's radius range to get a set of planet radii for that bin.19 Finally, we randomly select radii from the collective nine sets of planetary radii to get the full radius sample. At the same time, we use an inverse transform sampler to generate semimajor axes based on Eq. (1).

2.1.2. SAG13 planetary radii and semimajor axis

The SAG13 planet population represents a significant update to the Kepler-like population, incorporating all available Kepler data circa 2017. The occurrence rate model presented in Ref. 28 is a power-law model fit as a function of period and planetary radius but does not include the occurrence rate turnover indicated by giant planet occurrence rates from radial velocity studies26 and by the lack of detections of large planets in subsequent Gemini Planet Imager survey results.25 We need an occurrence rate model that is in terms of linear Keplerian orbital elements and that decreases occurrence rates for large planets so that our extrapolation of the model beyond the bounds in Ref. 28 does not produce substantially more large planets than expected. We include a full derivation of the SAG13 occurrence rate model implemented22,29 in EXOSIMS in Sec. 7.8 but summarize the steps, important components, and results here. We start with the original occurrence rate model fit from Ref. 28, included as Eq. (37), and convert it to the form of Eq. (38). We reconstructed the occurrence rate grid from the linear model in Fig. 2(a). When directly compared to the occurrence rate grid for G-type stars in Ref. 28, we see a maximum deviation of 0.74 (in planets per bin occurrence) and a maximum percent deviation of 13%. We then convert Eq. (38) from period space to semimajor axis space using Eqs. (39) and (40), assuming a solar mass star, to arrive at Eq. (41). We compare the analytical integral over semimajor axis space of this new occurrence rate model to arrive at Fig.
2(b), which shows identical occurrence rates in each of their counterparts' grid spaces in Fig. 2(a). We now append a semimajor axis “knee” to decay the occurrence rates of larger planets to 0, which results in Eq. (42). We calculate the overall SAG13 exoplanet occurrence rate per star by performing the double integral over the entire semimajor axis and planetary radius space as in Eq. (43). The SAG13 model parameters in Table 1 are split at a planetary radius break point, so we use one parameter set below the break and another above it. An intermediate simplification of the double integral uses the constants from Table 1; the denominator and exponent terms result from the indefinite integral over semimajor axis, and the four summed terms result from evaluating the indefinite integral over the planetary radius bounds. A final intermediate calculation represents the marginalization of Eq. (42) over planetary radius, where the gravitational parameter assumes a solar mass star. We use the same knee semimajor axis in the SAG13 planet population that we use for the Kepler-like planet population, and the average number of planets generated per star in the EXOSIMS-implemented SAG13 universe is computed over this same range.

Table 1. Parametric fit parameters for the SAG13 planet population implemented in EXOSIMS.28
In order to generate planets with semimajor axes and radii from the occurrence rate model, we still need a joint probability density function as well as a single-variable probability density function and a conditional probability density function. By normalizing Eq. (42) by the overall occurrence rate, we get the joint probability distribution in Eq. (44). From here, we calculate the probability distribution of planetary radius for the SAG13 planet population by marginalizing Eq. (44) over semimajor axis only. This gives us the planetary radius distribution, which inherits the parametric conditions of the joint probability density function. Since it is a marginalization over the joint probability density function, the integral of the radius distribution over its full range is 1. The distribution of generated planets in radius is shown in the right-side histogram of Fig. 1(b). We now calculate the conditional probability distribution of semimajor axis given planetary radius using Bayes' rule: the conditional probability distribution is a simple division of the joint probability density function by the marginalized probability density function. We now have the tools to sample semimajor axis and planetary radius. To generate a planet from the probability distributions in this section, we first sample Eq. (7) using a single-variable inverse transform sampler to get a set of planetary radii. We also use a single-variable inverse transform sampler to sample Eq. (8) and get the corresponding semimajor axes. The knee and bounds used in the SAG13 population are the same as in the Kepler-like planet population. SAG13's frequency distribution is included above Fig. 1(b). The two-dimensional (2-D) contour plot and associated grid values in Fig. 1(b) show the interdependence of semimajor axis and planetary radius.

2.1.3. Planet eccentricity

We assume a Rayleigh distribution for orbital eccentricities in both the Kepler-like and SAG13 planet populations, as done in Ref.
24, such that the eccentricity (e) of each simulated planet is drawn by inverse transform sampling, where u is a uniform random variable between 0 and 1, σ_e is the Rayleigh parameter for eccentricity, the minimum allowed eccentricity is defined as 0, and the distribution is truncated at its upper 95th percentile. In Ref. 24, the authors fit the Rayleigh distribution to radial velocity-detected planets and found a best-fit σ_e with a p-value of 0.5. They additionally found values with p-values above 0.05 and placed strict limits of σ_e between 0.125 and 0.225.24 For this work, we use σ_e = 0.175 (splitting the difference between 0.225 and 0.125), as done in Ref. 19, and adopt their assumption of independence from star spectral type.

2.1.4. Planet albedo from semimajor axis

In both the Kepler-like and SAG13 planet populations, we calculate albedo (p) using a cubic 2-D interpolant over metallicity and semimajor axis using the V-band column from Table 4 of Ref. 30. The individual values in Table 4 from Cahoy's work are based on the atmospheric modeling of Jupiter and Neptune at semimajor axes from 0.8 to 10 AU over varying metallicities relative to solar abundances. To accommodate these restrictions, for each interpolated albedo we use the randomly generated semimajor axis truncated to be between 0.8 and 10 AU. This means any planets with a < 0.8 AU will be treated as having a = 0.8 AU in the interpolant and, conversely, any planets with a > 10 AU will be treated as having a = 10 AU in the interpolant. Jupiter at 0.8 AU in Cahoy's work was found to be cloud-free, resulting in a geometric albedo of 0.322, which is comparable to Earth's geometric albedo. We assume a uniform random planet metallicity multiplication factor over the modeled range. Metallicity of a planet is a descriptor for the abundance of elements heavier than hydrogen or helium in that planet. The least reflective planet would be a Neptune at 0.8 AU with solar abundance metallicity, with a geometric albedo slightly larger than that of Mercury and the Moon. The most reflective planet would be a Jupiter at 2 AU with solar abundance.
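The truncated Rayleigh eccentricity draw of Sec. 2.1.3 can be sketched as follows. The inverse-transform form with the 0.95 factor, which caps draws at the distribution's 95th percentile, is our reading of the truncation described above; σ_e = 0.175 is the midpoint value stated in the text:

```python
import math
import random

SIGMA_E = 0.175  # Rayleigh parameter, midway between the 0.125 and 0.225 limits

def sample_eccentricities(n, seed=0):
    """Inverse-transform draws from a Rayleigh distribution truncated at its
    upper 95th percentile: e = sigma * sqrt(-2 ln(1 - 0.95 u)), u ~ U(0, 1).

    With u in [0, 1), the argument of the log stays in (0.05, 1], so every
    draw lies below the untruncated distribution's 95th percentile.
    """
    rng = random.Random(seed)
    return [SIGMA_E * math.sqrt(-2.0 * math.log(1.0 - 0.95 * rng.random()))
            for _ in range(n)]
```

The hard cap here is 0.175·sqrt(−2 ln 0.05) ≈ 0.428, so no simulated planet receives a near-parabolic orbit.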
Planets formed through different processes will have varying metallicity, representative of the planet's atmospheric metallic composition relative to solar abundances.

2.1.5. Joint probability density function

The final remaining parameters needed to describe planets at the time of an observation are the physical locations of the planets relative to their host stars, as defined by Eq. (1) in Ref. 31. We assume that the direction of the orbit eccentricity vector is uniformly distributed in space, so that the orbit inclination is sinusoidally distributed between 0 and π, while the remaining Keplerian elements (longitude of the ascending node, argument of periapsis, and mean anomaly) are all uniformly distributed from 0 to 2π.31 We sample all of the aforementioned parameters for each planet, which is sufficient information to calculate a projected planet–star separation s and the difference in magnitude between the host star and the planet, Δmag. We calculate Δmag using Eq. (4) from Ref. 7 and adopt their use of a Lambert phase function. Sampling all of the parameters described above for many planets allows us to calculate the joint probability density of projected separation and star–planet magnitude difference. As in Eq. (7) in Ref. 7, we assume independence between all parameters, except for semimajor axis and planet radius in the SAG13 population as well as the planet albedo dependence on semimajor axis. We bin these planets into a grid and fit a rectilinear bivariate spline to the resulting 2-D histogram. This rectilinear bivariate spline consists of integrable high-order polynomials with total volume under the surface of 1. The resulting densities for the two populations are shown in Fig. 3.

2.2. Star Catalog

We need a list of target stars, along with their positions on the sky, distance (d), and apparent brightness in B and V bands for our calculations of completeness and integration time.
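Backing up to the Δmag calculation of Sec. 2.1.5: it can be sketched with a Lambert phase function as below. The phase-angle form of the projected separation, s = r sin β, and the example numbers are illustrative simplifications, not the full Keplerian propagation used in the simulations:

```python
import math

def lambert_phase(beta):
    """Lambert phase function Phi(beta) = (sin b + (pi - b) cos b) / pi,
    with beta in radians; Phi(0) = 1 (full phase), Phi(pi) = 0 (new phase)."""
    return (math.sin(beta) + (math.pi - beta) * math.cos(beta)) / math.pi

def delta_mag(p, r_planet_au, d_orbit_au, beta):
    """dMag = -2.5 log10(p * (R_p / r)**2 * Phi(beta)) for geometric albedo p,
    planet radius R_p and orbital distance r (both in AU), phase angle beta."""
    return -2.5 * math.log10(p * (r_planet_au / d_orbit_au) ** 2 * lambert_phase(beta))

def projected_separation(d_orbit_au, beta):
    """Phase-angle form of the projected planet-star separation, s = r sin(beta)."""
    return d_orbit_au * math.sin(beta)
```

For a roughly Jupiter-like case (p = 0.5, R_p ≈ 4.75e-4 AU, r = 5.2 AU) at quadrature, `delta_mag` evaluates to a Δmag in the low twenties, which is the regime the completeness upper limits discussed below operate in.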
We derive the list of target stars from the EXOCAT-1 star catalog discussed in Ref. 32, which contains a variety of targets out to a distance of 30 pc. Some of these stars are missing photometric values, including the luminosity, absolute magnitude, V-band bolometric correction, and the apparent VBHJK magnitudes, all of which we attempt to fill in by interpolating a table of standard stars by spectral type in Ref. 33. From these targets, we have an initial set of target stars, which we pare down through a series of filters. There are many stars in the EXOCAT-1 star catalog with “not a number” data entries, which can propagate through our equations, so we remove targets with these kinds of entries. Not all of these entries are absolutely necessary, so some targets may have been unnecessarily filtered. Some of these star systems are binary systems, which we do not account for in our equations, so we omit them. In addition, some stars have low apparent visual brightness, so we filter any target stars we know would take excessively long to reach a reasonable limiting Δmag. The missing data filter removes 429 targets with any fields containing a “not a number” value within the set of IPAC fields {hip_name, st_spttype, parx, st_vmag, st_j2m, st_h2m, st_vmk, st_dist, st_bmv, st_mbol, st_lbol, coords, st_pmra, st_pmdec, wds_sep}, as well as any star for which we could not fill in a missing photometric value. The binary star filter removes 164 targets using the Washington Double Star catalog (filtering targets with companion stars within a set angular separation).34 The integration time cut-off filter removes 1436 targets with excessively long integration times, where integration time is calculated assuming static local zodiacal light and exozodiacal light surface brightnesses [used in the equations in Refs. 4 and 35] and a fixed working angle (see Sec. 7). This WA results in a reasonable core mean intensity and throughput combination based on inspection of the instrument parameter files in Figs. 4(a) and 4(c).
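The three catalog filters can be sketched as a simple sequential pipeline. The field names, thresholds, and example rows below are illustrative stand-ins for the IPAC fields and cut-offs described above:

```python
def filter_targets(catalog, required_fields, max_int_time_days):
    """Apply the missing-data, binary-star, and integration time cut-off
    filters in sequence; each entry in catalog is a dict of field -> value."""
    # missing data filter: drop rows lacking any required field
    kept = [r for r in catalog
            if all(r.get(f) is not None for f in required_fields)]
    # binary star filter: drop flagged binaries
    kept = [r for r in kept if not r.get("is_binary", False)]
    # integration time cut-off filter: drop slow targets
    kept = [r for r in kept if r["t_int_days"] <= max_int_time_days]
    return kept

stars = [
    {"name": "A", "st_vmag": 4.8, "is_binary": False, "t_int_days": 2.0},
    {"name": "B", "st_vmag": None, "is_binary": False, "t_int_days": 1.0},
    {"name": "C", "st_vmag": 6.1, "is_binary": True, "t_int_days": 3.0},
    {"name": "D", "st_vmag": 5.5, "is_binary": False, "t_int_days": 40.0},
]
survivors = filter_targets(stars, ["st_vmag"], 30.0)  # only "A" survives
```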
After paring down using the missing data filter, binary star filter, and integration time cut-off filter, the initial target set is reduced to 651 targets (the filters are not mutually exclusive). Our initial filtering of target stars reduces the number of degrees of freedom in the optimization process and thereby decreases computation time. Larger telescopes will result in larger target lists, with fewer candidate stars removed via the initial integration time filter. The statically optimized target list is included in Table 7. This optimized target list contains mainly FGK-type stars and 8 A-type stars. All of these targets are located within 20 pc.

2.3. Calculating Completeness

Completeness is calculated for the ith target by integrating over the joint probability density function of s and Δmag.15 The limits on the inner integrand are strictly obscurational. For star i at a distance d_i from the Sun, the minimum planet–star separation observable is d_i·IWA and the maximum planet–star separation is d_i·OWA, where IWA and OWA are the instrument's inner and outer working angles, respectively. For the outer integrand, we use the fundamental lower limit on Δmag as opposed to the analytical lower bound in Eq. (18) of Ref. 7. The upper limit on Δmag relates the completeness to the integration time. Typically, integration times are defined as a function of a limiting Δmag and background flux levels, which are functions of the assumed instrument operating characteristics (i.e., throughput, contrast, etc.).4,35 We invert the integration time model based on Ref. 35 to find the limiting Δmag as a function of integration time. In this inversion, the target star's B–V color is implemented as an empirical fit to data from Ref. 33 (see Sec. 7), which is accurate to about 7% in the relevant wavelength range, as discussed in Ref. 36. SNR is the signal-to-noise ratio threshold chosen for determining planet detections.3 The photon-counting efficiency of the system is used to give the ideal planet count rate, which is scaled by the instrument's core throughput (see Sec. 7).
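Numerically, the completeness double integral described above reduces to summing the joint (s, Δmag) probability mass inside the observable window. A toy sketch over a coarse binned density follows (the spline-based integration used here is more accurate; only whole cells inside the window are counted in this simplification):

```python
def completeness(density, s_edges, dmag_edges, s_min, s_max, dmag_lim):
    """Sum probability mass inside the observable window:
    s_min <= s <= s_max (obscurational) and dMag <= dmag_lim (photometric).

    density[i][j] is the probability mass in the cell spanning
    (s_edges[i], s_edges[i+1]) x (dmag_edges[j], dmag_edges[j+1]).
    """
    c = 0.0
    for i in range(len(s_edges) - 1):
        # skip separation bins falling outside the obscurational limits
        if s_edges[i] < s_min or s_edges[i + 1] > s_max:
            continue
        for j in range(len(dmag_edges) - 1):
            # keep only dMag bins entirely below the photometric limit
            if dmag_edges[j + 1] <= dmag_lim:
                c += density[i][j]
    return c

# Toy 2x2 density: half the mass lies below dMag = 22, so with the full
# separation range visible the completeness is 0.5.
toy = [[0.25, 0.25], [0.25, 0.25]]
```

Because the photometric limit `dmag_lim` grows with integration time, evaluating this sum at successive limits traces out the completeness-versus-integration-time curve used in the optimization below.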
The core throughput is a function of WA, and by extension so is the limiting Δmag, so the integral over Δmag is not separable unless we assume a single representative WA at which to evaluate each star. Assuming a WA substantially speeds up calculations but may not be representative of instruments with a strong dependence on WA. The remaining terms in the integration time model are the net background count rate and the net speckle residual count rate, the latter including all postprocessing assumptions; we use the calculations of these count rates from Ref. 35 and include them in Sec. 7. The spectral flux density is given in terms of the zero-magnitude flux, calculated as in Eq. (23) of Sec. 7 and presented in Ref. 36, the pupil area, the spectral bandwidth, the detector quantum efficiency from Fig. 5 in Sec. 7, the optical attenuation due to the science instrument, and the optical attenuation due to the coronagraph, the latter two treated separately since a single instrument can have multiple coronagraphs. By integrating Eq. (11) with the limiting Δmag from Eq. (12), we arrive at a formulation for completeness as a direct function of integration time. The theoretical maximum completeness for the ith target is found by integrating Eq. (11) to the upper limit of Δmag at infinite integration time (this assumes an infinite observing time is available). Using the WFIRST parameters, we see a universal upper limiting Δmag of 23.137. It is important to note that the inclusion of speckle residuals means that integrating past a certain point will not produce any improvements in the achieved SNR, meaning there is a specific time for every target past which it makes no sense to integrate further. While the equations derived thus far are sufficient to perform continuous integration time optimization, gradient-based solvers, such as the ones we employ as follows, all perform significantly better with analytical gradient functions. To expedite the optimization process in Sec. 2.4, we calculate the derivative of completeness with respect to integration time as a function of integration time. As in Ref. 15, the derivative of Eq.
(12) with respect to time gives the rate of change of the limiting Δmag, and the derivative of completeness with respect to integration time then follows by the chain rule. We now have all the analytical expressions needed to optimize our planned observing schedule and have filtered the original degrees of freedom, represented by the required integration times for each star in the input catalog, down to 651 (see Sec. 2.2). A decision space of 651 degrees of freedom is still quite large and could take a long time to compute over, especially given the nonconvexity of the sigmoid-shaped completeness curves. In order to ensure fast convergence of the nonlinear optimization, we need to provide a feasible starting guess, preferably one close to the final solution. We know from experimentation with AYO4 that an optimal observation schedule will converge to a fixed completeness-versus-integration-time slope. By combining Eq. (15) with Eq. (12) and inverting, we can find integration time as a function of this slope, which allows us to solve for the integration times of all targets in a specific subgroup at a common slope.15 This provides us with everything we need to both formulate the optimization problem and find a good initial solution, as described in the next section.

2.4. Optimization Process

Our goal is to maximize the number of unique detections throughout a blind-search survey. We evaluate the reward potential of a target using the completeness metric and the cost as the integration time required to achieve that completeness. In our optimization formulation, we seek to maximize the summed completeness that fits within the rigid mission time constraints coupled with an additional time cost per observation. Maximizing summed completeness with fixed overhead per observation in a time-constrained mission is the full nonlinear optimization problem we aim to solve in this section. Our optimization process is broken down into three major steps, described in the following subsections.
Steps 1 and 2 enable the solving of the full nonlinear optimization problem in step 3. Step 3 can be seeded with any “good” initial guess (an initial solution sufficiently close to an optimal solution). A “good” initial guess for SLSQP can even be an infeasible solution (i.e., in the case of an unexpected reduction in total observing time, we can seed step 3 with the previous solution to step 3). Optimization step 3 is fundamentally important because it allows individual adjustment of each integration time, individual removal of targets, and individual addition of targets (something Ref. 4 cannot do) while conforming to all time-related constraints of the full nonlinear optimization problem.

2.4.1. Mission time constraints

We formulate the summed completeness maximization problem to ensure any nonzero, fractional observation incurs the observatory overhead and instrument settling time costs of making an observation. Settling time is the time required by the observatory to start a new observation. This includes waiting out transient vibrations from the slew, time needed to reach thermal equilibrium, and time for the initial generation of the high-contrast region, either by “digging the dark hole” for a coronagraph or by completing the precision alignment required by an external starshade. Overhead time, on the other hand, is any additional time required by the observatory during the science integration. This includes time reserved for momentum dumping and orbit maintenance (if these operations will interrupt science data collection), dark hole maintenance for coronagraphs, and stationkeeping burns for starshades. The inclusion of settling and overhead times makes it difficult to find an initial feasible solution in most cases, as the overhead time required for observing every target in the target list is typically greater than the total amount of mission time available.
In the specific case of the WFIRST CGI explored here, we have over 650 days of overhead time associated with observing the full target list and far fewer days of allotted exoplanet observing time. This means that we cannot simply evenly distribute our available time between targets to get an initial state for the optimization, as this would generate a constraint-breaking total required time. In general, initializing gradient-based optimizations on nonlinear and nonconvex search spaces (such as the completeness versus integration time sigmoids) leads to poor optimizer performance and frequently results in no feasible solution being found.

2.4.2. Step 1: the binary integer program

Step 1 is designed to create a guaranteed initial feasible solution to the nonlinear optimization problem and uses reasonable inputs. Step 1 uses an initial calculation of the background count rate and residual speckle count rate from assumed zodiacal light, exozodiacal light, and working angle values. The zodiacal light surface brightness default we use in step 1 is a static 23.0 mag arcsec−2.3 The exozodiacal light surface brightness default we use in steps 1 to 3 is 22.0 mag arcsec−2.4 The working angle used for calculating integration times sets the specific values of instrument contrast, throughput, and other angular-separation-dependent terms. We use 0.3 arcsec for all targets in step 1, which is near the best balance of core mean intensity and throughput based on Figs. 4(a) and 4(c). The aforementioned parameters are used to calculate the initial count rates. We additionally impose a constraint on the total time spent, using the allotted exoplanet observing time as the maximum amount of time to spend observing and a fixed overhead time for making any observation. We use the coin-OR branch and cut solver,13 as provided by Google OR-Tools, to solve our binary integer programs (BIPs), as described in Algorithm 1.
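The paper solves this step with the coin-OR branch and cut solver via Google OR-Tools. As a stand-in, a minimal sketch of the same knapsack-style BIP using SciPy's `milp` solver follows; the completeness rewards, integration times, overhead, and time budget are all hypothetical values:

```python
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

# Hypothetical per-target completeness rewards c_i and integration times t_i
# (days); t_ohd is a fixed overhead per observation, T_max the time budget.
c = np.array([0.05, 0.04, 0.03, 0.02, 0.01])
t = np.array([1.5, 1.0, 0.8, 0.5, 0.3])
t_ohd, T_max = 1.0, 5.0

# Maximize sum(c_i x_i) s.t. sum((t_i + t_ohd) x_i) <= T_max, x_i binary.
res = milp(c=-c,  # milp minimizes, so negate the reward
           constraints=LinearConstraint((t + t_ohd)[None, :], -np.inf, T_max),
           integrality=np.ones_like(c),
           bounds=Bounds(0, 1))
x_star = np.round(res.x).astype(int)  # selected targets
```

Because every selected target pays the overhead `t_ohd`, the solution naturally drops low-reward targets rather than spreading time thinly across all of them.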
This gives us an initial feasible solution of targets with their integration times and summed completeness.

Algorithm 1: Binary integer program x1*=BIP(c0,t0).

2.4.3. Step 2: scalar minimization with group

In step 2, we reuse the count rate assumptions from step 1, specifying a solution tolerance on a bounded scalar minimization problem with bounds of [0, 7] on the scalar. The objective at the lower bound is 0, and we have determined from software experiments that 7 works well as an upper bound in this case. Realistically, this upper bound should be larger, but only one target could achieve that value and the optimization fails, so the upper bound must be reachable by multiple targets. This minimization in step 2 uses SciPy's minimize_scalar function, as described in Algorithm 2. Successful execution of this procedure produces a separate, feasible solution, distinct from the solution arrived at in step 1. By varying the scalar and solving the BIP subproblem in Sec. 2.4.2, we achieve a different set of targets with different integration times.

Algorithm 2: Bounded scalar minimization wrapping binary integer program.

2.4.4. Step 3: SLSQP minimization

In step 3, we formulate the SLSQP optimization process with an initial solution seeded with the result of step 1 or step 2, whichever produces a larger summed completeness. In practice, we could take any prior solution as a good initial guess and re-solve with new mission time constraints based on new information. This seeded solution should prove sufficiently close to an optimal solution such that the sigmoid-like inflection point is exceeded. We now replace our previously assumed static zodiacal light surface brightness with each target's per-annum minimum. We calculate zodiacal light surface brightness at fine subdaily intervals for a year, interpolating the lookup tables from Ref. 37 as shown in Fig. 6. For our optimization, we specifically use the per-annum minimum for all targets, excluding times in the keep-out regions shown in Fig. 7(a).
This is crucial as targets at 0 deg heliocentric ecliptic latitude have their local zodiacal light minimum within 56 deg of the observatory's antisolar point (see Fig. 6), a region not visible to WFIRST due to the solar panel pointing requirements in Table 3 (Algorithm 3).

Algorithm 3: SLSQP optimization.

The output is an optimal allocation of integration times to each target accounting for the nonlinear overhead time assignment. This optimal allocation encodes our observation priority over all possible targets and means each target in the target list is equivalently important to observe to achieve the expected summed completeness. In the validation section, we choose the minimum zodiacal light intensity target selection metric to complement our optimization assumptions. The drawback is that, if variations occur in the total mission time for any reason, high-reward targets might not be observed. There are technically infinitely many "near-optimal" solutions: equivalent maximum summed completeness target lists can be achieved for a range of different numbers of target stars. This is directly caused by the increasing number of approximately equivalent target stars at greater stellar distances. These integration times are used as inputs to our validation process discussed in Sec. 2.5 and implemented in EXOSIMS, which runs a Monte Carlo of full mission survey simulations, observing each target for the prescribed integration times and bookkeeping all dynamic aspects of the mission.

2.4.5. Optimization performance

By breaking our optimization problem into three distinct parts, we see benefits from each part. The BIP guarantees an optimal solution based on the input and has a subsecond solve time. The scalar minimization problem solves within a few seconds and achieves nearly all of the maximum summed completeness we ultimately achieve. The SLSQP part on WFIRST, seeded with the scalar minimization solution, marginally improves the summed completeness and takes a little over a minute.
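A minimal sketch of the step 3 formulation with SciPy's SLSQP method follows; the concave completeness curves and the values of cmax, timescales, overhead, and total time are illustrative assumptions standing in for the paper's completeness model:

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative completeness curves c_i(t) = cmax_i * (1 - exp(-t / s_i));
# cmax, s, t_ohd, and T_max are hypothetical values.
cmax = np.array([0.05, 0.04, 0.03, 0.02])
s = np.array([1.0, 0.8, 0.5, 0.4])
t_ohd, T_max = 0.2, 3.0

def neg_summed_completeness(t):
    return -np.sum(cmax * (1.0 - np.exp(-t / s)))

# Total time constraint: sum(t_i) + n_obs * t_ohd <= T_max (all 4 observed).
cons = {"type": "ineq",
        "fun": lambda t: T_max - len(t) * t_ohd - np.sum(t)}
t0 = np.full(4, (T_max - 4 * t_ohd) / 4)   # feasible, evenly split seed
res = minimize(neg_summed_completeness, t0, method="SLSQP",
               bounds=[(0.0, None)] * 4, constraints=[cons])
t_star = res.x  # individually adjusted integration times
```

Note that SLSQP adjusts each integration time individually under the shared time constraint, mirroring the role step 3 plays in the text; removing or adding targets would correspond to components of `t_star` reaching or leaving zero.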
The solution arrived at by SLSQP is almost certainly a local optimum and not a global one. In this paper, rows 1, 2, and 3 in Table 2 show how we solved for the optimal observation target list and integration times, using the static zodiacal light value as input to steps 1 and 2 of the optimization problem. We contrast this with optimizing using the minimum zodiacal light values in rows 4, 5, and 6 of Table 2, which causes a marginal decrease in summed completeness for the BIP and scalar minimization problems. Row 7 is an optimization seeded with the solution from row 6 and a slightly longer total observing time, which solves in a shorter time than the from-scratch optimization process. Rows 8, 9, 10, 11, and 12 are based on a substantially larger telescope, which is reflected in the higher overall summed completeness. Solve times are strictly larger on the Big Telescope than for WFIRST, but we see that rows 11 and 12, which are seeded with the optimal solution from row 10, have substantially shorter solve times than row 10.

Table 2: Optimization method solve times and summed completeness for WFIRST using different input zodiacal lights and solving with different maximum mission times.
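Algorithm 2's pattern, a bounded scalar search wrapping the inner target-selection subproblem, can be sketched as follows. The completeness curves, the analytic inversion of dC/dt, the greedy stand-in for the BIP, and all parameter values are illustrative assumptions:

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Illustrative completeness curves c_i(t) = cmax_i * (1 - exp(-t / s_i));
# cmax, s, t_ohd, and T_max are hypothetical values.
cmax = np.array([0.05, 0.04, 0.03])
s = np.array([1.0, 0.8, 0.5])
t_ohd, T_max = 0.5, 4.0

def neg_summed_completeness(rate):
    # Invert dc/dt = (cmax / s) * exp(-t / s) = rate for each target; a rate
    # too high for a target to reach clips its integration time to zero.
    t = -s * np.log(np.clip(rate * s / cmax, 1e-12, 1.0))
    cost = t + t_ohd
    order = np.argsort(cost)
    keep = np.cumsum(cost[order]) <= T_max   # greedy stand-in for the BIP
    c = cmax * (1.0 - np.exp(-t / s))
    return -np.sum(c[order][keep])

# Bounds are set by the largest achievable dc/dt in this toy problem
# (max(cmax / s) = 0.06), echoing the text's point that the upper bound
# must be reachable by multiple targets.
res = minimize_scalar(neg_summed_completeness, bounds=(1e-4, 0.1),
                      method="bounded", options={"xatol": 1e-4})
```

The single scalar (a common marginal completeness rate) determines every target's integration time at once, which is why this step is cheap to solve yet lands close to the full SLSQP optimum.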
2.5. Validation

We validate the ability to schedule our optimized integration times from Sec. 2.4, listed in Table 7 of Sec. 8, via a Monte Carlo of full survey simulations using the EXOSIMS code base, detailed in Ref. 16. This software allows us to bookkeep dynamic aspects of the mission while scheduling observations on randomly generated planetary systems about a real set of target stars. Using EXOSIMS, we account for exoplanet, solar system planet, and observatory orbit propagation, as well as recalculating look vectors, keep-out regions, and local zodiacal light noise. At the start of each survey simulation, we randomly generate planets around stars, drawn from the Kepler-like or SAG13 planet populations discussed in Sec. 2.1 using sampling methods described in Ref. 16. To define the observable times of individual stars, we combine solar system planet locations with instrument-specific keep-out angles and observatory look vectors for each target in the star catalog throughout the duration of the mission. These solar system planet locations are taken from the Horizons Ephemeris System furnished by NASA's Navigation and Ancillary Information Facility (NAIF)40,41 based on an assumed mission start modified Julian date (60,634 for WFIRST). The EXOSIMS framework default observatory orbit is a quasiperiodic, stable halo orbit at the second Earth–Sun Lagrange point, with a period based on Ref. 42. We stitch the observatory position along the orbit using a one-dimensional (1-D) spline interpolant and propagate the observatory along the interpolant throughout the mission elapsed time (MET), assuming a general observatory start location on the halo orbit when the Earth is at the Earth–Sun equinox (60,575.25 MJD for WFIRST). The starting location of the observatory for a single-visit blind-search survey coronagraph mission has negligible impact on yield. We discuss the calculation of keep-out regions in Sec. 2.5.1.
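The 1-D spline propagation of the observatory can be sketched as follows, with a toy closed orbit standing in for the tabulated halo solution; the period and coordinates are illustrative assumptions, not a real L2 halo:

```python
import numpy as np
from scipy.interpolate import CubicSpline

period = 180.0                           # days, hypothetical halo period
t_knots = np.linspace(0.0, period, 181)  # daily samples over one orbit
theta = 2.0 * np.pi * t_knots / period
xyz = np.column_stack([0.9 * np.cos(theta),
                       0.6 * np.sin(theta),
                       0.1 * np.sin(2.0 * theta)])  # toy closed orbit
xyz[-1] = xyz[0]  # enforce exact periodicity for the spline

# Periodic cubic spline in time; evaluating at any mission elapsed time
# (mod period) propagates the observatory along the orbit.
orbit = CubicSpline(t_knots, xyz, axis=0, bc_type="periodic")

def observatory_position(met_days):
    return orbit(np.mod(met_days, period))
```

Wrapping MET modulo the period is what lets a single orbit's worth of tabulated positions cover the full multi-year mission.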
In each simulation, we incrementally filter available targets, simulate observations and their outcomes, and propagate orbits as shown in Fig. 8. At the start of each main loop shown in Fig. 8 (steps 1 to 9), filters remove targets with integration times that are too long (exceeding the 30-day cutoff); targets currently in keep-out regions of planetary bodies; previously observed stars not currently scheduled for revisits; and targets not observable within the nearest time constraint. In this paper, we do not revisit targets, so we filter out any targets that have been observed during the mission. We then use an intelligent method of choosing the next target at the current mission time. Several selection metrics have been studied in Refs. 15, 17, and 43, but we choose to observe targets at their minimum zodiacal light intensity. Here, our method of selecting targets advances time by the smallest amount to the next mission time at which any remaining target reaches its local zodiacal light minimum. This identifies the target star, which we proceed to observe for the precalculated time from the method described in Sec. 2.4. We divide the observations into discrete time intervals and calculate the signal and noise at each interval, calculate the total achieved SNR, and propagate planets around the star as well as the observatory and solar system planets. Splitting up observations into time intervals enables us to approximate the achieved SNR via Riemann sum using the new location of the observatory, planets, and simulated exoplanets along with the associated zodiacal light, planet phase angle, and planet position in the instrument working angle. We divide our observations into two equal intervals over the duration of the integration time, which are all strictly under 2 days in this case. We then advance the mission time by the elapsed observation time and check if the mission is over. The EXOSIMS framework relies upon probabilistic planet generation and random draws.
However, our Python implementation is capable of not only replicating results but also fully reproducing each survey simulation run by resetting the simulation from the simulation's random seed. EXOSIMS also keeps track of all inputs required to replicate the simulation.

2.5.1. Keep-out regions

WFIRST has the keep-out regions specified in Table 3. We define these keep-out regions as a subset of the sky which cannot be entered by the telescope look vector throughout an observation. Our strict accounting for time and geometry in simulations enables us to ensure any observation's look vector does not start, stop, or pass through a keep-out region. Nominally, each solar system body in Table 3 has a minimum keep-out region the telescope cannot look within. The Sun has a maximum keep-out region of 124 deg the telescope cannot look outside of. This is set by the minimum incidence angle of light on the spacecraft's solar panels needed to power the observatory and CGI. The bore-axis vector of WFIRST cannot point farther than 124 deg away from the spacecraft–Sun vector in order to meet spacecraft power requirements. There is an additional minimum solar panel keep-out angle at 56 deg which was not modeled. Since our implementation in EXOSIMS in this specific case enforces observations at the local zodiacal light minimum indicated by the black squares shown in Fig. 6, these observations occur either near the spacecraft antisolar point or at the edge of the maximum solar panel keep-out region.

Table 3: Keep-out regions for WFIRST defined as the (minimum angle between r_Body/SC and r_i/SC, maximum angle between r_Body/SC and r_i/SC).
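A minimal sketch of the per-body visibility test behind such a keep-out map follows. The 45-deg minimum Sun keep-out here is a hypothetical value; the 124-deg maximum is the solar-panel limit quoted in the text:

```python
import numpy as np

def target_visible(r_target, r_body, ko_min_deg, ko_max_deg=180.0):
    """Return True if the unit look vector r_target is outside a body's
    keep-out region. r_body is the unit vector from the spacecraft to the
    body. ko_max_deg encodes an outer limit such as the 124-deg solar-panel
    constraint; 180 deg means no outer limit (the usual case)."""
    cosang = np.clip(np.dot(r_target, r_body), -1.0, 1.0)
    ang = np.degrees(np.arccos(cosang))
    return bool(ko_min_deg <= ang <= ko_max_deg)

# Hypothetical check against the Sun with an assumed 45-deg minimum
# keep-out and the 124-deg solar-panel maximum.
r_sun = np.array([1.0, 0.0, 0.0])
ok = target_visible(np.array([0.0, 1.0, 0.0]), r_sun, 45.0, 124.0)        # 90 deg
blocked = target_visible(np.array([-1.0, 0.0, 0.0]), r_sun, 45.0, 124.0)  # 180 deg
```

Evaluating this test for every (star, body, time bin) triple and AND-ing over bodies produces exactly the kind of boolean visibility map the next paragraph describes caching.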
We cache a keep-out map to substantially increase simulation speed. This keep-out map spans the duration of the mission, is discretized into small time bins (1 day in this paper), and encodes the visibility of each star in the catalog. The keep-out map encodes the visibility status of targets at each point in time by calculating the angular distance between the target look vector and the observatory-to-planet vectors. If the formed angle is less than the minimum keep-out angle or greater than the maximum keep-out angle for that body, the target star is marked as not visible. In Fig. 7(a), we see the target visibility for 100 stars in the star catalog ordered by right ascension with their visible times (white) and obstructing bodies. The Sun (yellow) is the major contributor to keep-out regions, with the Earth (blue) being the second largest contributor. Star occultation by Mars, Venus, or Jupiter keep-out regions is nearly negligible. Figure 7(b) also shows some targets are visible less than 28% of the time while others are visible all of the time. We can conclude from these plots that increasing WFIRST's maximum solar keep-out angle or decreasing the minimum keep-out angle can increase the amount of time a target is visible by, at most, about a day per degree of change.

2.5.2. Local zodiacal light

The local zodiacal light intensity is the largest known time-varying noise source we account for and manifests itself in the background noise count rate introduced in Eq. (12) and explicitly written in Sec. 7. Variations in local zodiacal light can vary the summed completeness by up to 10%.17,43,44 In Sec. 2.4, for step 1 of our integration time optimization, we use a static value of 23.0 mag arcsec−2, as done in Ref. 4. However, in step 3, we optimize our final integration times using each target's per-annum minimum zodiacal light, which implies specific times of the year when these observations can be made, as shown by the black lines in Fig. 6. The black points in Fig.
6 are curved at low ecliptic latitudes, and this is directly caused by the large antisolar keep-out region, which marginally inhibits observation near the antisolar point. There is a distinction between the zodiacal light value used in the integration time optimization and the value used in the Monte Carlo validation. When we evaluate whether a detection has been made at the top of step 7 in Fig. 8, we calculate the planet SNR using the local zodiacal light at the current time in the simulation. We calculate the interpolant in Fig. 6 using Eq. (19): a 2-D linear interpolation of intensity from Table 17 of Ref. 37, a quadratic interpolation of the wavelength dependence from Table 19 in Ref. 37, and a Sun color correction. Here, h and c are Planck's constant and the speed of light in a vacuum, and the wavelength dependence is a quadratic 1-D interpolant of Ref. 37 data in Fig. 9 of Sec. 7. Since Table 17 from Ref. 37 is in geocentric ecliptic coordinates but the spacecraft will physically be located on an Earth–Sun L2 orbit, we assume the table's coordinates are coincident with the spacecraft's and that the additional distance of the spacecraft from the Sun has no effect. Further discussion of these two assumptions is included in Sec. 7. Since the statically optimized integration time assumes a fixed zodiacal light, which coincides with a specific time of year, and our actual observation takes into account the local zodiacal light conditions, our target selection method has an impact on the zodiacal light of the target we observe. In previous work, we tested different target selection metrics incorporating completeness, integration time, and deviations from the maximum or minimum zodiacal light intensity.15,17,42 For a statically optimized target list, we found the strategy of observing at minimum zodiacal light to be most effective. However, this assumes we have all the allotted observing time.
If we included instrument degradation due to radiation or possibly early mission termination, other selection metrics which preferentially select higher performing targets may become more attractive. Using the 2-D interpolation of Fig. 6 combined with the instrument-specific filters in Sec. 3 and the optimization process in Sec. 2.4, we can plot the histogram of minimum (black dots in Fig. 6) and maximum (at the edge of solar keep-out) local zodiacal light intensity in Fig. 10. Depending on the time of year observations are made, the zodiacal light intensity varies (excluding when targets are in keep-out) by upward of two magnitudes. We also see that using an estimated value of 23.0 mag arcsec−2 is an overly optimistic approximation of the local zodiacal light noise. Realistically, for our filtered set of target stars, the appropriate values are brighter (Fig. 10). Since the distributions in Fig. 10 include the instrument-specific keep-out regions for WFIRST, increasing the maximum solar keep-out angle or decreasing the minimum solar keep-out angle would shift them.

2.5.3. Detection and characterization observations

When making a detection observation of a planet, we do not use the same parameters as in the optimization process. Our algorithm for confirming a detection is also different. In a survey simulation, we define a planet as being detected when the collected SNR exceeds the detection threshold. We calculate this SNR using Nemati's SNR model from Ref. 35, discussed further in Sec. 7. The planet signal and noise equations are treated as a Riemann summation over the whole integration time, broken into segments. The Riemann summation approximates variations in local zodiacal light, exozodiacal light, planet phase angle, and changes in planet WA, as well as the resulting count rates. The count rates we calculate here use the same equations in Sec. 2.3 but with inputs specific to the planet. The planet WA is updated based on the planet position at the current time, propagated using the Kepler-state transition matrix.
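The per-segment Riemann approximation can be sketched as below. For simplicity, this assumes a Poisson-limited SNR = S / sqrt(S + B) rather than Nemati's full model with its speckle term, and the count-rate functions are hypothetical:

```python
import numpy as np

def riemann_snr(t_int, rate_planet, rate_background, n_seg=2):
    """Approximate the achieved SNR over an integration of t_int (days)
    split into n_seg equal segments, with time-varying count rates given
    as callables (counts/day). Assumes Poisson-limited SNR = S / sqrt(S + B);
    the paper instead uses Nemati's SNR model (Ref. 35)."""
    edges = np.linspace(0.0, t_int, n_seg + 1)
    mid = 0.5 * (edges[:-1] + edges[1:])   # evaluate rates at segment midpoints
    dt = np.diff(edges)
    S = np.sum(rate_planet(mid) * dt)      # accumulated planet counts
    B = np.sum(rate_background(mid) * dt)  # accumulated background counts
    return S / np.sqrt(S + B)

# Example: constant planet rate, zodiacal background slowly brightening.
snr = riemann_snr(2.0, lambda t: 50.0 + 0.0 * t, lambda t: 200.0 + 10.0 * t)
```

With `n_seg=2`, this matches the text's choice of two equal intervals per observation; finer segmentation would track faster zodiacal light or phase-angle variation at extra cost.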
The new physical planet Δmag is calculated using Eq. (4) from Ref. 7 and the new planet phase angle calculated from a Lambert phase function. The local zodiacal light is updated based on the new observatory position along the halo orbit. We use a different calculation of the exozodiacal light here, following Ref. 45. In it, the visual magnitude of the exozodiacal light is scaled relative to the Sun, and the inclination term is the supplementary angle to the planet's orbital inclination (in degrees), resulting in a range of [0 deg, 90 deg] that is mirrored in the range [90 deg, 180 deg] based on Ref. 46. In multiplanetary systems, we assume the exozodiacal light along the line of sight to each planet has that planet's inclination. The inverse term accounts for the decreasing exozodiacal light contribution with increasing distance from the planet and star, as discussed in Ref. 4. We include spectral characterization observations in our survey simulations but not in our optimization process. This is due to both the rarity of finding a planet signal strong enough to characterize with WFIRST and the high mission value placed on acquiring spectral characterizations. We do not model characterizations in our optimization process because doing so adds complexity and we have no good method of weighting new planet detections against spectral characterizations. We attempt to immediately perform a spectral characterization upon successfully detecting any new exoplanet. We calculate integration times for this characterization observation using the measured planet parameters at the current time as well as the characterization mode parameters in Sec. 7. We check whether the new integration time plus overhead would exceed the total mission time, would enter into a keep-out region, or would exceed the 30-day integration cutoff filter. In the vast majority of cases, an SNR of 10 is not achievable given what we know about the exoplanet from a single detection.
In addition, we make observations at the local zodiacal light minimum immediately before a planet enters keep-out or immediately after a planet exits keep-out, causing approximately half of the successfully detected planets to enter keep-out immediately following their observation.

2.5.4. Convergence

We determine the required number of simulations to ensure the accuracy of our results by executing 10,000 simulations of a generic input specification similar to that used in Sec. 3. By assuming the true mean yield is equivalent to the mean yield over 10,000 simulations, we can demonstrate convergence of the running mean to it as the number of simulations in the ensemble grows. We calculate the mean yield incrementally for each ensemble size using Eq. (38) from Ref. 47. We then calculate the percent deviation between the running mean and the 10,000-simulation mean, shown in Fig. 11. Figure 11 shows the mean number of unique detections converges for a sufficiently large number of simulations. By assuming the mean numbers of detections per run are normally distributed, we show the 1σ, 2σ, and 3σ confidence intervals of the mean in Fig. 11. In reality, the ensembles follow some form of gamma distribution because the yields are all positive and the priors are exponentially distributed. The normal distribution assumption fits better for high-yield telescopes. The absolute percent errors for varying confidence intervals for 100 and 1000 simulations are presented in Table 4. Absolute percent errors represent the uncertainty in the mean number of unique detections. Excluding Table 4 and Fig. 11, all results in this paper are derived from single simulations or ensembles of 1000 simulations. We use these confidence intervals to make comparisons and determine whether a result is significant. We can say a general mean number of unique detections is accurate to within a few percent for 1000 simulations.

Table 4: Absolute percent error confidence intervals for 100 and 1000 simulations.
This table references data created using runs from Dean22May18RS09CXXfZ01OB01PP01SU01 and file convergenceDATA_Dean22May18RS09CXXfZ01OB01PP01SU01_2019_04_09_01_23_.txt.
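The running-mean convergence check described above can be sketched as follows; the Poisson yield draws are an illustrative stand-in for actual EXOSIMS ensemble output:

```python
import numpy as np

rng = np.random.default_rng(0)
# Stand-in for an ensemble of simulated mission yields (unique detections
# per run); Poisson draws with a hypothetical mean, not EXOSIMS output.
yields = rng.poisson(lam=5.5, size=10_000)

true_mean = yields.mean()             # treated as the converged "true" mean
n = np.arange(1, len(yields) + 1)
running_mean = np.cumsum(yields) / n  # mean over the first n runs
pct_dev = 100.0 * np.abs(running_mean - true_mean) / true_mean

# 1-sigma normal confidence interval on the mean after n runs
ci_1sigma = yields.std(ddof=1) / np.sqrt(n)
```

Plotting `pct_dev` and `ci_1sigma` against `n` reproduces the shape of a convergence figure like Fig. 11: the deviation shrinks roughly as 1/sqrt(n), motivating ensembles of 1000 runs for percent-level accuracy.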
3. WFIRST Results

The results in this section are generated using all previously discussed assumptions and parameters, as well as the parameters in Table 6 of Sec. 7, based on the cycle 6 description of the CGI.18

3.1. Completeness and Planned Observations

By applying the optimization process from Sec. 2.4 to the target list filtered in Sec. 2.2 and assuming a Kepler-like planet population, a per-observation integration time cutoff, and a maximum observing time of three months, we get the integration times in Table 7 in Sec. 8. Since completeness is the probability of detecting planets from a population around a star, multiplying the planned summed completeness of this target list by the population occurrence rate from Sec. 2.1 gives the expectation value of planets detected, in this case equal to 5.58 detections. We calculate the ultimate completeness10 for all 651 potential targets by evaluating completeness at an infinite integration time, giving an expectation value of 21.40 exoplanets detected. We note here that the theoretical maximum summed completeness, if there were no contrast or WA constraints, for all 651 target stars is 651. The summed ultimate completeness of only the targets in Table 7 gives an expectation value of 7.10 exoplanets detected. The first ratio of interest measures the planned target list completeness against the maximum theoretical summed completeness of all 651 targets observed at infinite integration time. The second measures the summed completeness of the planned target list against the maximum theoretical summed completeness of only the observed target stars at infinite integration time. These ratios represent the fraction of planet phase space about all scheduled targets that could be probed in the assumed, finite, total mission time. We can conclude from these results that, in the short amount of time allocated for WFIRST, our optimized target list is capable of observing 78.6% of the summed ultimate completeness it could possibly gather.
There is a non-negligible difference between the planned completeness and the completeness actually observed. The planned observations and Monte Carlo simulations based entirely on the Kepler-like population are shown in Fig. 12. Note the loss of one observation between the planned and actual observations, which we attribute to the accumulation of machine precision errors, our strict adherence to the upper bound on observing time, and the allowance of characterizations in the implemented survey. The loss of an observation is evident from the single red square without an associated blue circle in Fig. 12. The summed completeness actually observed in this particular simulation of the Monte Carlo was 2.33. For each observation made in a survey simulation, observed completeness (blue circles) coincident with planned completeness (red squares) indicates the simulated observation occurs under optimal conditions for a single visit. Because we are observing targets solely at the local zodiacal light minimum and do not modify integration times, all observations made are optimal. If targets were observed at suboptimal zodiacal light levels, the observed completeness would fall below the planned completeness, as shown in Ref. 17. From the combined time-varying limit in Eq. (12) applied to Eq. (11), we get the black sigmoid-like lines in Fig. 12, specifically plotted for the top 10, median, and lowest completeness optimized targets in the target list. The median and lowest lines are characteristic of the majority of similar low-completeness targets, and any added targets will typically fall below the lowest completeness target in this list. The completeness side histogram shows a clustering of targets at lower completeness values, which can be attributed to the increased number of targets at larger distances. The upper limit of the completeness lines is consistent with the theoretical maximum completeness values. Demonstrating the importance of including overhead and settling times in observations are the max diamonds in Fig. 12.
These max points are universally located at some small integration time and small completeness, meaning any optimized target list maximizing summed completeness without overhead constraints will assign strictly nonzero integration times to all targets. This means optimizing summed completeness without settling and overhead times results in an observation target list the length of the full input catalog (651 in the case of WFIRST), which cannot be executed under realistic conditions. Similarly, observing at these max points would result in suboptimal individual target completeness and also leave a substantial amount of unobserved phase space around each target. Calculating completeness using the SAG13 planet population shows the summed ultimate completeness of all 651 targets is 13.538. The summed ultimate completeness of the observed targets is 3.641. The minimum completeness of these targets has increased due to the increased likelihood of larger planets at larger separations in the SAG13 distribution in Fig. 3(b). As in the Kepler-like optimized target list, we see a single target not being observed due to strict enforcement of observing time constraints. The major difference between Figs. 12 and 13 is the change in shape of the completeness lines. The SAG13 lines have a lower ultimate completeness, but more targets reach this limit. In addition, each of these targets is observed for shorter integration times; most have a higher theoretical maximum completeness, and there is a general shift toward higher completeness at lower integration times. There is additionally a larger separation of lower completeness targets in Fig. 13 compared to Fig. 12. In general, completeness calculated using the SAG13 population is larger than completeness calculated using the Kepler-like population.

3.2. Sky Distribution of Completeness, Integration Times, and Targets

We are able to take the optimized target list included in Table 7 and bin the heliocentric ecliptic coordinates of each target into triangular regions of approximately equivalent size and approximately isotropic distribution on the sky.
When we sum the integration time for all targets in each bin and normalize by bin area, we get the skymap distribution shown in Fig. 14. Since Fig. 12 shows all integration times are between 0.1 and 2 days, we can identify the bins with the highest concentrations of observing time. By summing the bins over heliocentric ecliptic latitude, we see a large disparity in summed integration time, target count, and summed completeness versus ecliptic longitude. Since WFIRST is planned to be on an L2 halo orbit with a Sun-orbital period of one year, the Leinert local zodiacal light37 is fixed in this rotating frame, and the time distribution of stars is uneven, so the optimally scheduled mission will have preferential observing times consistent with the distribution in Fig. 14. This result is important for optimally distributing limited CGI time under the constraints of a 5-year mission shared with multiple other instruments. Using this distribution, we can create preferentially distributed observing blocks for the CGI.

3.3. Detected Planet Properties

From our ensembles of survey simulations, we can look at how the radii and planet–star separations of the detected planets are distributed. The top row of Fig. 1 is the average distribution of radius and separation for all generated planets in a universe. In each subplot of Fig. 1, the top left and bottom right show the number of simulations used to generate the resulting distribution. Each 2-D contour plot is normalized such that the integral over the area is 1 for the individual ensemble so the color scale can be shared. The white number in each gridded region shows the average number of planets generated or detected per simulation in that bin over the ensemble of simulations. The number in the top right of Figs. 1(a) and 1(b) is the sum total of all planets generated in the ensemble of universes. Subplots of Figs. 1(c)–1(f) show the average distribution of detected planets over an ensemble. The number in the top right of Figs. 1(c)–1(f) is the sum total of all planets detected in the ensemble.
These summations are tabulated as average yields in Table 5. Subplots of Figs. 1(c) and 1(d) use a target list optimized for the Kepler-like planet population to observe universes of Kepler-like and SAG13 simulated planets. Similarly, subplots of Figs. 1(e) and 1(f) use a target list optimized for the SAG13 planet population to observe universes of Kepler-like and SAG13 simulated planets.

Table 5: Summary of overfitting: average unique detection yield and average characterizations from four Monte Carlo ensembles with optimized target list integration times calculated for different planet populations observing universes of different planets (WFIRSTCompSpecPriors_WFIRSTcycle6core_3mo_405_19).
We can do an analysis of the kinds of planets WFIRST is expected to detect in a blind-search survey. Since our universe is randomly generated, we show the distribution of generated planet radius versus separation for all stars for the Kepler-like and SAG13 planet populations in Fig. 1. The implemented planet generation rate for the universe of Kepler-like planets is consistent to two decimal places with the planet population model in Sec. 2.1. This rate is simply calculated by dividing the sum total of planets in the ensemble of universes by the number of simulations in the ensemble (1000 from Sec. 2.5.4) and the number of target stars (651 from Sec. 2.2). The rate for SAG13 is also consistent to within two decimal places with the model in Sec. 2.1. The generated planet populations are consistent with the limits presented in Sec. 2.1. In Sec. 2.1, we showed the smallest observable planet–star separation to be 0.292 AU, and each of the four detected-planet plots shows all planets detected have separations greater than 0.292 AU. A distinctive feature of the planet generation is the "knee" applied at 10 AU, which can be seen in the sharp drop-off in both populations. There is a marginal difference in the axes of the contour plots between the Kepler-like and SAG13 populations, as SAG13 generates smaller planet radii than Kepler-like, so direct comparisons cannot be made between individual gridspace averages across universes. We also note the SAG13 universe generates an order of magnitude more planets at large radius and large separation, as indicated by the 24.29 and 221.19 grids from Kepler-like and SAG13, respectively [Figs. 1(a) and 1(b), gridspace (4,4)]. Comparing the planet populations used to calculate completeness while observing the same planet population universe indicates that optimizing with the Kepler-like population results in marginally more detections of smaller planets.
Specifically, for the Kepler-like universe's largest average detection bin, optimizing with the Kepler-like planet population results in 1.35 detected exoplanets on average and optimizing with the SAG13 planet population results in 1.11 detected exoplanets on average, which is significant given our convergence results above. For the SAG13 universe's largest average detection bin, optimizing with the Kepler-like planet population results in 3.51 detections and optimizing with SAG13 yields 3.55 detections, but scaling by the number of detected planets gives nearly identical values. From these observations, we conclude that observing a universe of SAG13 planets with integration times optimized using completeness from a Kepler-like planet population results in more detections of small planets. Each gridspace in the bottom two rows of Fig. 1(d) is greater than or equal to the corresponding gridspace in the bottom two rows of Fig. 1(f). We can specifically point to the 0.81 and 0.69 gridspaces, which most evidently confirm this observation. We can draw several conclusions by inspecting the average population of planets detected. Our simulations show WFIRST will not detect planets with . We also see WFIRST is not capable of detecting planets with . We also note that WFIRST is not sensitive to planets beyond . The majority of planets detectable by WFIRST are cold Jovians and super-Earths. Based on Fig. 1, we conclude that optimizing integration times with the more pessimistic Kepler-like planet population universally biases detections toward smaller planets.

3.4 Overfitting

We have chosen to use the Kepler-like and SAG13 planet populations to optimize target lists in this paper, both of which are created based on the known population of exoplanets.
Since a motivation for the WFIRST CGI is to observe new exoplanets in an unexplored region of space, we must investigate how the yield for a target list of integration times optimized for one planet population changes when observing a universe full of planets based on another planet population. Optimizing a target list using completeness based on a planet population and observing that same planet population results in the highest yields given in Table 5. However, observing a universe of Kepler-like planets with a target list optimized for SAG13 planets results in a 5.06% decrease in exoplanets detected, a greater decrease than observing SAG13 planets with a target list optimized using Kepler-like planets (a 1.01% decrease). A possible explanation for this difference is our inclusion of the rare characterization observations. The characterization part of Table 5 does not indicate this is the case and shows more planets are characterized on average when optimized with the wrong planet population range. A characterization observation is triggered whenever a planet is detected with sufficiently small and separation such that an immediate reobservation could achieve an with a newly calculated days and all other detection observation filters in Sec. 2.5. Since the observation would be carried out with a different instrument, the changed parameters are included in Sec. 7. The conclusion we draw from this exercise in overfitting is that optimizing with the more pessimistic Kepler-like planet population yields more robust detections and more characterizations if the true planet population is actually SAG13. This warrants reinvestigation with varying margins placed on .

4 Conclusions

In this work, we presented our method of optimizing integration times and validating the optimized target list in a Monte Carlo of survey simulations using the EXOSIMS code base.
We presented our implementation of Kepler-like and SAG13-based planet populations and our associated methods for calculating completeness and simulating universes of planets. We presented our generalized target list integration time optimization process, which accommodates per-observation overhead time constraints for a blind search single-visit survey, and showed how the inclusion of overhead times is necessary when optimizing surveys. We discussed how EXOSIMS bookkeeps all time-varying aspects of a survey mission, including keep-out regions of the Sun and major planets using ephemerides, the observatory's position on an L2 halo orbit, local zodiacal light for each target, positions of the target stars, positions of simulated planets around target stars, and our strict enforcement of total observing time. Under WFIRST's constraints, we show the optimal heliocentric ecliptic longitudes for target observation based on local zodiacal light intensity, as well as more realistic local zodiacal light average minimum and average maximum magnitudes of 22.79 and 21.59 mag arcsec−2, respectively. The strictness of WFIRST's keep-out regions causes the visibility of some targets to drop below 28%, which severely constrains single visits and would constrain revisits. Making telescopes less sensitive to solar panel-based and planet-based keep-out regions would enable observing at more ideal local zodiacal light conditions, increase total target visibility, and prevent aliasing of similar-period planets when considering revisits. By making only optimal observations in our Monte Carlo survey simulations, we see the WFIRST optimized target list's preferential observing times are unevenly distributed on the sky and across heliocentric ecliptic longitudes, but these longitudes do not necessarily line up with the completeness longitude distribution.
The observed mismatch between the observing time distribution across the sky and the completeness distribution across the sky means observations at specific times of year will be more valuable than at others throughout the mission. Most importantly, it means that when a mission is designed with preallocated observing blocks, blocks reserved for exoplanet imaging must take into account both the integration time and completeness distributions over ecliptic longitudes. We validated our WFIRST optimized target list's summed completeness of 2.35 by performing a Monte Carlo of survey simulations on the Kepler-like planet population, achieving an observed summed completeness of 2.33 using our model of WFIRST. We also demonstrated the convergence of the mean number of detected exoplanets from a Monte Carlo of 1000 simulations to 3.19% error, giving additional confidence in the Monte Carlo results. Our Monte Carlo simulations also indicate that the WFIRST CGI detects 5.48 exoplanets, on average, when observing the Kepler-like planet population and 16.26 exoplanets, on average, when observing the SAG13 distribution. The vast majority of the planets detectable by WFIRST range in size from super-Earths to Jovians at semimajor axes from 0.4 to 6 AU. Observing a population of Kepler-like planets with a target list optimized for a SAG13 population of planets results in a 5.06% decrease in detection yield, whereas the inverse results in only a 1.01% decrease, indicating the Kepler-like planet population optimized target list is less sensitive to variations in the underlying planet population and should be used to optimize CGI target lists. We also see that optimizing with a more pessimistic planet population universally results in more detections of small-radius planets. The simulation inputs and results included in this paper are publicly available via Cornell's eCommons service at: https://doi.org/10.7298/90n2-5j40.
5 Future Work

There are many improvements to be made in the input assumptions and underlying models used. We could likely improve yield by increasing the number of targets available for optimization by filtering out only stars missing the parameters used in our calculations, instead of the blanket missing-data filter we currently use. WFIRST might not be able to detect small planets, but improving our planetary albedo interpolant for small rocky bodies would be useful for larger and more capable telescopes. The local zodiacal light model used is realistically only suitable for Earth- and L2-based telescopes and does not show the high-resolution structure other missions have found. A randomly generated star inclination could also be used for calculating the exozodiacal light at the planet. There are better exozodiacal light models we could use when simulating detection observations, as well as better estimates to assume in the initial optimization step. We could also use a prediction of the planet's angular separation to predict the at the next observation. A newer version of Nemati's SNR model with more parameters could more realistically model WFIRST and other future telescopes. We specifically omit the effects of radiation in Nemati's model in order to ensure we can calculate . Including this (or instrumentation faults) would incentivize a selection metric with some preference toward higher-completeness targets early in the mission. WFIRST is limited not only by keep-out regions but also by observing blocks. The solar panel power keep-out region of WFIRST is substantial and, while not significantly impacting single-visit direct detection yield, will greatly impact revisit yield and orbit characterization of planets with orbital periods similar to Earth's. An analysis of the subspace of Earth-like planets rendered unobservable by orbits with Earth-like periods should be conducted.
In our implementation, we make observations at local zodiacal light minima, but some targets have two local zodiacal light minima. Our first observation of a target should preferentially be made when the target exits keep-out so that subsequent detection or characterization observations can be made in the same observing block. Our scheduling process does not assume the existence of observing blocks, which would further limit the visibility of some targets and could force observations at suboptimal observing times. Since our survey simulation is allowed to make characterization observations but our optimization does not account for characterizations or revisits, adding a heuristic describing the expected time spent making orbit or spectral characterizations could improve the overall yield of a survey. WFIRST is not particularly sensitive to this issue because its relatively short CGI mission time and lower throughput result in a target-starved environment with fewer characterization candidates. Running SLSQP on larger, more capable observatory designs such as HabEx and LUVOIR results in unplanned characterizations at nearly every star, which consumes a large portion of mission time, so many planned detections are not made. The multiplanetary systems simulated around stars are statistically consistent with the Kepler-like or SAG13 planet population occurrence rates, but these planets do not share the same ecliptic plane and are likely in unstable orbits. Simulating a random inclination for each target star and then randomly generating planet deviations from the star's inclination would be more useful and more representative of real star systems. This could enable the estimation of some orbital properties from images of stars with multiple planets. By simulating real planetary systems, it could be possible to determine whether a planet could exist in the habitable zone merely by characterizing the orbit of a Jupiter-like planet.
Applying simple star system stability filters on subsets of planets around stars would enable us to generate more realistic star systems and even help in fitting orbits of multiplanetary systems. Science reward for various images and discoveries needs to be enumerated, categorized, and weighted to accommodate more than single-visit unique detections.48 Future survey mission optimization and planning will require orbital characterizations in addition to spectral characterizations and will heavily weight exo-Earths. If the only discovery of scientific value is a specific type of exoplanet with full orbital and spectral characterization, then incremental metrics and probabilities need to be created informing a mission optimizer or operator of the probability (after subsequent images) that a potential Earth-like exoplanet is indeed an Earth-like exoplanet. The first step is determining the probability that a detected planet with a given and is of a specific exoplanet type based on the planet binning scheme in Ref. 21. In addition, the reward associated with revisits needs to be separated, by planet, into some set of orbital parameters and planet properties, as well as new-planet detection potential. With these newly formulated metrics, it will be possible to construct a dynamic program that calculates the incremental expected value of new observations at each step of the mission to maximize orbital and spectral characterizations of Earth-like exoplanets.

6 Appendix A

6.1 Symbols

Throughout this paper, we use common subscript conventions and notation, described here with a generic variable. The subscript , such as in or , refers to a or of an individual planet. The subscript , such as in or , refers to a or of an individual target star. The subscript min, such as in or , indicates that the parameter is a minimum of or ; this extends to the max subscript.
We use bold variables such as when referring to a set or collection of something. We use underlines such as to identify position vectors and to identify unit vectors. We use common variables differentiated by their subscripts or boldness to distinguish what they specifically refer to. All parameters are related to an efficiency. Lower case refers to a semimajor axis-related parameter. Lower case refers to an albedo-related parameter. Lower case refers to a completeness. Lower case refers to a direction or position vector. variables are all time-related. refers to a planet–star separation-related value. refers to a semimajor axis-related value. refers to a planet radius-related value.

7 Appendix B

This appendix contains functions and interpolants used in the calculation of in Eq. (12), including expressions for the zero-magnitude flux, star apparent V magnitude, throughput, intensity transmission of extended background sources, core mean intensity, core area, and quantum efficiency.

7.1 Zero-Magnitude Flux

The zero-magnitude flux, , used in the calculation of spectral flux density, , in Eqs. (12), (13), and (19), is given as from Ref. 36.

7.2 Star Apparent V Magnitude

The calculation of relies upon the star's color-adjusted visual magnitude, , given by the parametric equation from Ref. 36: is the V-band apparent magnitude of the i'th target star and is taken from the EXOCAT star catalog used in Ref. 32. is the color of the star as measured by the difference between the B and V bands (in magnitudes). The parameters for the target stars used in the final target list are included in Table 7 in Sec. 8.

7.3 WFIRST Cycle 6 Parameters

This section presents the WFIRST cycle 6 parameters18 in Table 6 used to calculate completeness and integration times.

Table 6: WFIRST cycle 6 input parameters18 as defined by the EXOSIMS11 JSON input script.
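The magnitude-to-flux conversions in Secs. 7.1 and 7.2 rest on the standard scaling F = F0 · 10^(−0.4 m); a minimal sketch, where the default zero-magnitude flux is a placeholder rather than the value from Ref. 36:

```python
# Sketch of the standard magnitude-to-flux scaling underlying Secs. 7.1-7.2:
# F = F0 * 10**(-0.4 * m), where F0 is the zero-magnitude (spectral) flux.
# The default F0 here is a placeholder, not the paper's value from Ref. 36.
def flux_from_mag(m, F0=1.0):
    """Flux (same units as F0) for apparent magnitude m."""
    return F0 * 10.0 ** (-0.4 * m)

# A 5-magnitude difference corresponds to exactly a factor of 100 in flux.
ratio = flux_from_mag(0.0) / flux_from_mag(5.0)
```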
7.4 Characterization Parameters

In this work, we include characterization observations, which slightly detract from executing the planned observation schedule in Table 7 of Sec. 8. These characterization observations immediately follow the detection of a planet so long as the star is not obstructed by a keep-out region, the observation is not filtered by integration time, and there is sufficient time to make the observation before the target enters a keep-out region. A detected planet is considered for characterization if . These characterization observations use a slightly modified set of parameters. Our characterization observation integration time is calculated using , at , , , , , and . The spectral resolving power, , specifically changes the to .

7.5 WFIRST Cycle 6 Derivative 2-D Interpolants

The FITS files used for the 2-D interpolants of core throughput , intensity transmission of extended background sources , core mean intensity , core area , and quantum efficiency are derived from Refs. 18, 49, and 50.

7.6 Electron Count Rates

This section lays out Nemati's SNR equation and all subcomponents from Ref. 35. The SNR equation used in this paper uses , , and , which are the planet signal, background signal, and residual speckle spatial structure electron count rates in , respectively. We calculate using , where NCTE is the net charge transfer efficiency. We calculate using , , , , , and , which are electron count rates with units of for the starlight residual, zodiacal light, exozodiacal light, dark current, clock-induced charge, and readout noise, respectively; ENF is an excess noise factor. We calculate the starlight residual count rate, , for each target star as , where is the number of pixels in the photometric aperture [], calculated as , where PPL is the number of pixels per lenslet, is simply the , a parameter specified in Table 6, and PS is the detector pixel scale in arcsec per pixel. is the core mean intensity from Fig. 4 in Sec. 7. is given by Eq. (24) in Sec.
7. We calculate the zodiacal light count rate, , using , where comes from Fig. 4 in Sec. 7. We calculate the exozodiacal light count rate, , as . We calculate the dark current count rate, , as , where is the dark current per pixel. We calculate the clock-induced charge count rate, , as , where CIC is the clock-induced charge per pixel and is the exposure time. We calculate the readout noise count rate, , as , where is the readout noise per pixel. We calculate using , where is the post-processing efficiency. Applying these photon count rate models to HIP 25278, a fifth-magnitude star, assuming , , with a planet at and , at , using the instrument parameters in Fig. 4, we get the photon count rates , , and .

7.7 Local Zodiacal Light

The calculation of local zodiacal light is broken down into two major components: an intensity wavelength dependence correction factor, , and the intensity at the spacecraft-centered look vector, . For , we know the zodiacal dust cloud has structure, but the degree to which structure and phase/scattering properties contribute to the zodiacal light intensity from a general observer location in space is currently uncertain (although missions have been proposed to model such a dust cloud51). Knowing the degree of contribution determines whether the antisolar point of Table 17 of Ref. 37 should be modeled as fixed relative to the Earth or fixed relative to the observer. To simplify our work, we assume the latter, so when and , the corresponding antisolar point is when and . To calculate local zodiacal light, we first find the position of the observatory in the heliocentric ecliptic frame . We then calculate the longitude of the Sun relative to the spacecraft in the heliocentric frame . We find the position vector describing the star position in the heliocentric true ecliptic frame and calculate the star position with respect to the observatory .
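As a hedged sketch of how count rates of this kind combine: one common Nemati-style form, writing C_p, C_b, and C_sp for the planet, background, and residual-speckle rates, is SNR = C_p√t / √(C_b + C_sp²t), which inverts to an integration time with a speckle-limited noise floor. The exact expression and parameter values in the paper's implementation may differ:

```python
import math

# Hedged sketch of a Nemati-style SNR model:
#   SNR(t) = C_p * sqrt(t) / sqrt(C_b + C_sp**2 * t),
# where C_p, C_b, C_sp are planet, background, and residual-speckle electron
# count rates (e-/s). Solving for t gives the integration-time inversion below.
# This is an illustrative form, not necessarily the paper's verbatim equation.
def snr(C_p, C_b, C_sp, t):
    return C_p * math.sqrt(t) / math.sqrt(C_b + (C_sp ** 2) * t)

def int_time(C_p, C_b, C_sp, target_snr):
    # Infinite when the speckle noise floor makes the target SNR unreachable.
    denom = C_p ** 2 - (target_snr * C_sp) ** 2
    return math.inf if denom <= 0 else (target_snr ** 2) * C_b / denom

# Placeholder count rates (not WFIRST values): time to reach SNR = 5.
t_req = int_time(C_p=0.05, C_b=0.2, C_sp=0.001, target_snr=5.0)
```

By construction, plugging the returned time back into `snr` recovers the target SNR.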
We then transform into spherical coordinates using Astropy's SkyCoord and extract the target star's latitude () and longitude () relative to the spacecraft. We then convert to absolute values for interpolation within the latitude and longitude range of Fig. 6 ( and ) by and , respectively. These and values are used in , a linear gridded interpolation of Table 17 in Ref. 37, and by extension in Eq. (19). To assess the validity of our spacecraft-centered versus geocentric ecliptic frame, we need to assess how much the angular position inputs to the zodiacal light intensity interpolant would differ. Since WFIRST is on an L2 halo orbit, its out-of-ecliptic motion is AU and its orbital distance from the Sun is AU, resulting in a geocentric ecliptic frame interpolation input deviation of . When is 180 deg or 0 deg from , the and used for interpolation are correct. However, interpolating for a target at, say, gives a value somewhere between due to the actual position of the spacecraft on the L2 halo orbit rather than at Earth. We expect for a Sun–Earth L2 orbit. We note that the smallest grid spacing of the input data is 5 deg, meaning and are within these bounds. It is also important to note that the accuracy of this zodiacal light model is, at best, 10%.37 The final component necessary to complete Eq. (19) is , a wavelength correction factor, which has a detailed explanation in Ref. 37.

7.8 SAG 13 Occurrence Rate Derivation

At some point, we need to convert occurrence rate models from period () space into semimajor axis () space. We chose to do this at the probability distribution level so we may arrive at a distribution of versus , which will be a function of . In this section, we convert the joint probability density function of planet occurrence rate in and space from the study analysis group 13 model in Ref. 28 to and space. The parametric fit exoplanet occurrence rate model they present is . Here, refers to the planet period in years, which is defined in the SAG 13 model over the range days.
must be in units of and must be in units of years. The constants , , and all refer to constants from Ref. 28, where in this instance refers to whether or is used. We would like to transform this occurrence rate distribution into linear functions of and . We first need to convert this parametric model fit from log-scaled distributions such as and to linear-scale distributions of the form and . The general expression has derivative , which we can apply to both of these separable parameters since the , , and terms are all constant. We arrive at the linear distribution of occurrence rates over and as . We now need to convert from an occurrence rate model in terms of to one in terms of . This conversion exists in Ref. 52 as . Here, , where is the mass of the Sun and is the gravitational constant. is close to the average mass of spectral type stars, the spectral type for which the SAG 13 occurrence rate model is defined. We assume the SAG 13 model extends to stars of all spectral types. To replace , we calculate the partial derivative of Eq. (39) with respect to as . Substituting this into the linear occurrence rate model in Eq. (38) gives the linear SAG 13 occurrence rate model in terms of and . The Keck Planet Search used radial velocity observations of exoplanets to describe an occurrence rate as a power law in planet mass and orbital period, but was limited to planets with days ( AU).23 However, subsequent direct imaging surveys by the Gemini Planet Imager found the power law did not extend throughout the entirety of the instrument's sensitivity range, supporting planet occurrence rates approaching 0 between 10 and 100 AU.25 Reference 26 introduces a period break to describe the giant planet occurrence turnover in radial velocity data and presents many possible knee values ranging from to AU, depending on the model fit used.
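The two change-of-variables steps above are standard results, restated explicitly for clarity (with μ = GM⊙ as in the text; left unnumbered to avoid clashing with the paper's equation numbers):

```latex
% Log-to-linear change of variables: a density per dex converts via
% \frac{d(\log_{10} x)}{dx} = \frac{1}{x \ln 10}, so that
f_X(x) = \frac{f_{\log_{10} X}(\log_{10} x)}{x \ln 10} .
% Kepler's third law with \mu = G M_\odot,
P = 2\pi \sqrt{a^3/\mu} ,
% has the partial derivative used as the Jacobian for the P \to a conversion:
\frac{\partial P}{\partial a} = 3\pi \sqrt{\frac{a}{\mu}} = \frac{3}{2}\,\frac{P}{a} .
```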
We transform their period break into a similar semimajor axis knee value of AU, making the adjusted SAG 13 exoplanet occurrence rate model . The total planet occurrence rate () over an assumed planet range is given by marginalizing over both parameters of the SAG 13 occurrence rate model with the appended, to get . It is important to note that a cubic form of the roll-off on the semimajor axis distribution is used, rather than the quadratic form from the Kepler-like case [cf. Eq. (1)]. This is motivated by recent limits placed on wide-separation planets by direct imaging and longer-baseline radial velocity data.26 We now calculate the joint probability density function of and by normalizing by the integral over the occurrence rate parametric model to arrive at . The integral of this joint probability distribution over the and range is 1. Previous work in Ref. 22, which originally derived the equations in this section, showed that independently sampling and sampling a marginalization over of Eq. (44) could achieve less biased sampling of the exoplanet distribution. We marginalize Eq. (44) over to arrive at an intermediate constant . We use to find the probability density function of semimajor axis conditional on . We can independently sample the planetary radius distribution and subsequently sample the probability density function of semimajor axis conditional on the sampled . We demonstrate the similarities and differences between our EXOSIMS implementation and the SAG 13 model from Ref. 28 in Fig. 2. In Fig. 2(a), we directly replicate the linear exoplanet occurrence rate model around spectral type stars from Ref. 28, included in Eq. (38). The purple numbers represent the double integral over their respective bin areas. The largest absolute difference of these numbers from those in Ref. 28 is 0.74 in the range of and . The largest percent difference of these numbers is 13.8% in the range of and . We transformed these into percentages which sum to 100% over the whole grid.
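The sampling scheme just described (sample the radius, then sample semimajor axis conditional on it) can be sketched with a generic rejection sampler; the exponent, knee, and limits below are placeholders, not the fitted SAG 13 constants:

```python
import math
import random

# Illustrative rejection sampler for a semimajor-axis density of the form
#   f(a) proportional to a**gamma * exp(-(a/a_knee)**3)  on [a_min, a_max],
# i.e., a power law with the cubic "knee" roll-off described in the text.
# gamma, a_knee, and the limits are placeholders, not the paper's fitted values.
def sample_a(gamma, a_knee, a_min, a_max, rng, n=1):
    def density(a):
        return a ** gamma * math.exp(-((a / a_knee) ** 3))

    # Crude constant envelope from a coarse scan of the (unnormalized) density.
    peak = max(density(a_min + k * (a_max - a_min) / 999) for k in range(1000))
    out = []
    while len(out) < n:
        a = rng.uniform(a_min, a_max)
        if rng.uniform(0.0, peak) < density(a):
            out.append(a)
    return out

rng = random.Random(0)  # seeded for reproducibility
samples = sample_a(gamma=-0.6, a_knee=3.0, a_min=0.1, a_max=30.0, rng=rng, n=200)
```

With the cubic roll-off, essentially no samples land far beyond the knee, mirroring the sharp drop-off noted for the generated populations.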
The occurrence rates and percentages of Fig. 2(b) are calculated by integrating over the transformation of the SAG 13 grid to space with Eq. (41). The integrated values in each bin are identical to their counterparts in Fig. 2(a). Finally, Fig. 2(c) is the probability density function used in EXOSIMS with the "knee" included. The coloring difference between (c) and (a) or (b) arises because (c) is a per-bin-area density, and the bin area of the top-right bin is several orders of magnitude larger than that of the bottom-left bin in both (a) and (b).

8 Appendix C

Using the SLSQP optimization method presented in Sec. 2.4 on the planet population discussed in Sec. 2.1, we arrived at the planned observation target list in Table 7.

Table 7: Planned observation target list optimized using the Kepler-like planet population. sInd refers to the index of the star in the filtered, initial target list of 651 targets. Name refers to the Hipparcos star catalog name of the target star. Vmag refers to the visual magnitude of the target star. di is the distance from the Sun to the host star. BV is the color of the star as measured by the difference between the B and V bands (in magnitudes). tobs,i is the observing time planned by the optimization algorithm. ci(tobs) is the expected completeness reward for observing this star for the prescribed integration time. The "known planet" column in Table 7 was generated by taking all target star names in the optimized target list derived from the EXOCAT-1 star catalog and cross-referencing them against a list of aliases from SIMBAD and the NASA Exoplanet Archive. In total, 9 of the 60 planned targets already have known exoplanets. The final column contains each target star's spectral type. Data in this table are taken from C0vsT0andCvsTDATA_WFIRSTcycle6core_CKL2_PPKL2_2019_10_07_11_40_.txt.
Acknowledgments

This research has made use of NASA's NAIF planetary data system kernels. This research has made use of the Washington Double Star Catalog maintained at the U.S. Naval Observatory. This work was supported by the NASA Space Grant Graduate Fellowship from the New York Space Grant Consortium and NASA Grant Nos. NNX14AD99G (GSFC), NNX15AJ67G (WFIRST Preparatory Science), and NNG16PJ24C (WFIRST Science Investigation Teams). This research made use of Astropy, a community-developed core Python package for astronomy (Astropy Collaboration, 2018), and OR-Tools, an optimization utility package made by Google Inc. with community support. This research has made use of the Imaging Mission Database, which is operated by the Space Imaging and Optical Systems Lab at Cornell University. The database includes content from the NASA Exoplanet Archive, which is operated by the California Institute of Technology under contract with the National Aeronautics and Space Administration under the Exoplanet Exploration Program, and from the SIMBAD database, operated at CDS, Strasbourg, France.

References
“Panel reports: new worlds, new horizons in astronomy and astrophysics,”
Washington, DC
(2011). Google Scholar
D. Spergel et al.,
“Wide-field infrarred survey telescope-astrophysics focused telescope assets WFIRST-AFTA 2015 report,”
(2015). Google Scholar
R. A. Brown,
“Single-visit photometric and obscurational completeness,”
Astrophys. J., 624 1010
–1024
(2005). https://doi.org/10.1086/apj.2005.624.issue-2 ASJOAB 0004-637X Google Scholar
C. C. Stark et al.,
“Maximizing the exoearth candidate yield from a future direct imaging mission,”
Astrophys. J., 795
(2), 122
(2014). https://doi.org/10.1088/0004-637X/795/2/122 ASJOAB 0004-637X Google Scholar
“HabEx interim report,”
(2018). Google Scholar
“LUVOIR interim report,”
(2018). Google Scholar
D. Garrett and D. Savransky,
“Analytical formulation of the single-visit completeness joint probability density function,”
Astrophys. J., 828 20
(2016). https://doi.org/10.3847/0004-637X/828/1/20 ASJOAB 0004-637X Google Scholar
D. Garrett, D. Savransky and B. Macintosh,
“A simple depth-of-search metric for exoplanet imaging surveys,”
Astronom. J., 154
(2), 47
(2017). https://doi.org/10.3847/1538-3881/aa78f6 ANJOAA 0004-6256 Google Scholar
S. L. Hunyadi, S. B. Shaklan and R. A. Brown,
“The lighter side of TPF-C: evaluating the scientific gain from a smaller mission concept,”
Proc. SPIE, 6693 66930Q
(2007). https://doi.org/10.1117/12.733454 PSISDG 0277-786X Google Scholar
R. A. Brown and R. Soummer,
“New completeness methods for estimating exoplanet discoveries by direct detection,”
Astrophys. J., 715 122
–131
(2010). https://doi.org/10.1088/0004-637X/715/1/122 ASJOAB 0004-637X Google Scholar
D. Savransky, C. Delacroix and D. Garrett,
“EXOSIMS: Exoplanet open-source imaging mission simulator,”
(2017). https://ui.adsabs.harvard.edu/abs/2017ascl.soft06010S/abstract Google Scholar
R. W. Farquhar,
“The utilization of halo orbits in advanced lunar operations,”
1
–99
(1971). Google Scholar
R. Lougee-Heimer,
“The common optimization interface for operations research: promoting open-source software in the operations research community,”
IBM J. Res. Dev., 47 57
–66
(2003). https://doi.org/10.1147/rd.471.0057 IBMJAE 0018-8646 Google Scholar
P. T. Boggs and J. W. Tolle,
“Sequential quadratic programming,”
Acta Numer., 4, 1–51 (1995). https://doi.org/10.1017/S0962492900002518
D. Savransky, C. Delacroix and D. Garrett, “Multi-mission modeling for space-based exoplanet imagers,” Proc. SPIE, 10400, 104001L (2017). https://doi.org/10.1117/12.2274098
D. Savransky and D. Garrett, “WFIRST-AFTA coronagraph science yield modeling with EXOSIMS,” J. Astron. Telesc. Instrum. Syst., 2, 011006 (2015). https://doi.org/10.1117/1.JATIS.2.1.011006
D. Keithly et al., “Scheduling and target selection optimization for exoplanet imaging spacecraft,” Proc. SPIE, 10698, 106985I (2018). https://doi.org/10.1117/12.2311717
J. Krist, B. Nemati and B. Mennesson, “Numerical modeling of the proposed WFIRST-AFTA coronagraphs and their predicted performances,” J. Astron. Telesc. Instrum. Syst., 2(1), 011003 (2015). https://doi.org/10.1117/1.JATIS.2.1.011003
F. Fressin et al., “The false positive rate of Kepler and the occurrence of planets,” Astrophys. J., 766, 81 (2013). https://doi.org/10.1088/0004-637X/766/2/81
D. Savransky, “Space mission design for exoplanet imaging,” Proc. SPIE, 8864, 886403 (2013). https://doi.org/10.1117/12.2023413
R. K. Kopparapu et al., “Exoplanet classification and yield estimates for direct imaging missions,” Astrophys. J., 856, 122 (2018). https://doi.org/10.3847/1538-4357/aab205
D. Garrett and D. Savransky, “Building better planet populations for EXOSIMS,” in AAS Meeting #231 (2018).
A. Cumming et al., “The Keck planet search: detectability and the minimum mass and orbital period distribution of extrasolar planets,” Publ. Astron. Soc. Pac., 120, 531–554 (2008). https://doi.org/10.1086/528885
A. V. Moorhead et al., “The distribution of transit durations for Kepler planet candidates and implications for their orbital eccentricities,” Astrophys. J. Suppl. Ser., 197, 1 (2011). https://doi.org/10.1088/0067-0049/197/1/1
E. L. Nielsen et al., “The Gemini planet imager exoplanet survey: giant planet and brown dwarf demographics from 10 to 100 AU,” Astron. J., 158(1), 13 (2019). https://doi.org/10.3847/1538-3881/ab16e9
R. B. Fernandes et al., “Hints for a turnover at the snow line in the giant planet occurrence rate,” Astrophys. J., 874, 81 (2019). https://doi.org/10.3847/1538-4357/ab0300
A. Howard et al., “The occurrence and mass distribution of close-in super-Earths, Neptunes, and Jupiters,” Science, 330(6004), 653–655 (2010). https://doi.org/10.1126/science.1194854
R. Belikov et al., “Exoplanet occurrence rates and distributions,” (2017).
D. Garrett, D. Savransky and R. Belikov, “Planet occurrence rate density models including stellar effective temperature,” Publ. Astron. Soc. Pac., 130(993), 114403 (2018). https://doi.org/10.1088/1538-3873/aadff1
K. L. Cahoy, M. S. Marley and J. J. Fortney, “Exoplanet albedo spectra and colors as a function of planet phase, separation, and metallicity,” Astrophys. J., 724, 189–214 (2010). https://doi.org/10.1088/0004-637X/724/1/189
D. Savransky, E. Cady and N. J. Kasdin, “Parameter distributions of Keplerian orbits,” Astrophys. J., 728(1), 66 (2011). https://doi.org/10.1088/0004-637X/728/1/66
M. C. Turnbull, “ExoCat-1: the nearby stellar systems catalog for exoplanet imaging missions,” (2015).
M. J. Pecaut and E. E. Mamajek, “Intrinsic colors, temperatures, and bolometric corrections of pre-main-sequence stars,” Astrophys. J. Suppl. Ser., 208(1), 9 (2013). https://doi.org/10.1088/0067-0049/208/1/9
G. L. Wycoff, B. D. Mason and S. E. Urban, “Data mining for double stars in astrometric catalogs,” Astron. J., 132(1), 50–60 (2006). https://doi.org/10.1086/504471
B. Nemati, “Detector selection for the WFIRST-AFTA coronagraph integral field spectrograph,” Proc. SPIE, 9143, 91430Q (2014). https://doi.org/10.1117/12.2060321
W. A. Traub et al., “Science yield estimate with the wide-field infrared survey telescope coronagraph,” J. Astron. Telesc. Instrum. Syst., 2, 011020 (2016). https://doi.org/10.1117/1.JATIS.2.1.011020
C. Leinert et al., “The 1997 reference of diffuse night sky brightness,” Astron. Astrophys. Suppl. Ser., 127, 1–99 (1998). https://doi.org/10.1051/aas:1998105
G. Soto et al., “Parameterizing the search space of starshade fuel costs for optimal observation schedules,” J. Guid. Control Dyn., 42(2), 1–6 (2018). https://doi.org/10.2514/1.G003747
G. Soto et al., “Optimal starshade observation scheduling,” Proc. SPIE, 10698, 106984M (2018). https://doi.org/10.1117/12.2311771
C. H. Acton, “Ancillary data services of NASA’s navigation and ancillary information facility,” Planet. Space Sci., 44, 65–70 (1996). https://doi.org/10.1016/0032-0633(95)00107-7
C. Acton et al., “A look towards the future in the handling of space science mission geometry,” Planet. Space Sci., 150, 9–12 (2018). https://doi.org/10.1016/j.pss.2017.02.013
E. Kolemen, N. J. Kasdin and P. Gurfil, “Quasi-periodic orbits of the restricted three-body problem made easy,” AIP Conf. Proc., 886, 68–77 (2007). https://doi.org/10.1063/1.2710044
D. Keithly et al., “WFIRST: exoplanet target selection and scheduling with greedy optimization,” in AAS Meeting #231 (2018).
D. Keithly et al., “Blind search single-visit exoplanet direct imaging yield for space based telescopes,” in AAS Meeting #233 (2019).
D. Savransky, N. J. Kasdin and E. Cady, “Analyzing the designs of planet finding missions,” Publ. Astron. Soc. Pac., 122(890), 401–419 (2010). https://doi.org/10.1086/652181
Lindler (2008).
D. Savransky, “Sequential covariance calculation for exoplanet image processing,” Astrophys. J., 800, 100–119 (2015). https://doi.org/10.1088/0004-637X/800/2/100
N. S. Budden and P. D. Spudis, “Evaluating science return in space exploration initiative architectures,” Houston (1993).
B. Nemati, J. E. Krist and B. Mennesson, “Sensitivity of the WFIRST coronagraph performance to key instrument parameters,” Proc. SPIE, 10400, 1040007 (2017). https://doi.org/10.1117/12.2274396
J. E. Krist, “End-to-end numerical modeling of AFTA coronagraphs,” Proc. SPIE, 9143, 91430V (2014). https://doi.org/10.1117/12.2056759
G. Soto et al., “Optimization of high-inclination orbits using planetary flybys for a zodiacal light-imaging mission,” Proc. SPIE, 10400, 104001X (2017). https://doi.org/10.1117/12.2274069
D. A. Vallado, Fundamentals of Astrodynamics and Applications, 4th ed., Microcosm Press, Hawthorne (2013).
Biography

Dean R. Keithly is currently a PhD candidate in the Sibley School of Mechanical and Aerospace Engineering at Cornell University. He received his bachelor’s degree in mechanical engineering from Michigan Technological University, where he built the Oculus-ASR satellite thermal control subsystem. He received his master’s degree in systems engineering from Cornell University, working on projects funded by the NASA Institute of Advanced Concepts. His primary research focus is on spacecraft modeling, specifically for optimizing exoplanet direct imaging missions.

Dmitry Savransky is an assistant professor in the Sibley School of Mechanical and Aerospace Engineering at Cornell University and PI of the Space Imaging and Optical Systems Laboratory. He received his PhD from Princeton University in 2011, and he was a postdoctoral researcher at Lawrence Livermore National Laboratory, where he worked on integration, testing, and commissioning of the Gemini Planet Imager. His research focuses on the optimization of ground- and space-based exoplanet imaging surveys, control of autonomous optical systems, and advanced image processing.

Christian Delacroix is a postdoctoral research scholar at the University of Liège, Belgium. He earned his PhD in astrophysics in 2013, followed by postdoctoral research appointments at Cornell University and Princeton University. He specializes in coronagraph design and adaptive optics instrumentation and is currently involved in the ELT/METIS instrument as Liège’s local system engineer.

Gabriel Soto is an aerospace engineering PhD candidate at Cornell University (expected 2020), concentrating on dynamics and control. He received his BS degree in aerospace engineering and physics from the University of Miami in 2015 and his MS degree from Cornell University in 2018. He has worked in the Space Imaging and Optical Systems (SIOS) Lab since 2015 on fuel cost heuristics, formation flying, and trajectory design of spacecraft, notably starshades in exoplanet direct imaging missions.