Exoplanet detection in starshade images
Abstract

A starshade suppresses starlight by a factor of 10^11 in the image plane of a telescope, which is crucial for directly imaging Earth-like exoplanets. The state-of-the-art in high-contrast post-processing and signal detection methods was developed specifically for images taken with an internal coronagraph system and focused on the removal of quasi-static speckles. These methods are less useful for starshade images, where such speckles are not present. We investigate signal processing methods tailored to work efficiently on starshade images. We describe a signal detection method, the generalized likelihood ratio test (GLRT), for starshade missions and address three important problems. First, even with the light suppression provided by the starshade, rocky exoplanets are still difficult to detect in reflected light due to their absolute faintness. The GLRT can successfully flag these dim planets. Moreover, the GLRT provides estimates of the planets’ positions and intensities and the theoretical false alarm rate of the detection. Second, small starshade shape errors, such as a truncated petal tip, can cause artifacts that are hard to distinguish from real planet signals; the detection method can help distinguish planet signals from such artifacts. The third direct imaging problem is that exozodiacal dust degrades detection performance. We develop an iterative GLRT to mitigate the effect of dust on the image. In addition, we provide guidance on how to choose the number of photon counting images to combine into one co-added image before detection, which helps use the observation time efficiently. All the methods are demonstrated on realistic simulated images.

1.

Introduction

A Sun-like star is much brighter (typically 10^8 to 10^10 times) than an Earth-like planet in its habitable zone.1 Moreover, at a distance of 10 pc, the star and planets in its habitable zone are separated by around 0.1 arc sec. Thus it is difficult to separate the planet light from that of the star in the image. There are two main solutions to the challenge of imaging objects in close proximity to much brighter ones. First, one can use a coronagraph,2 a device inside the telescope that blocks the starlight from reaching the image plane. Second, one can use a starshade,3,4 a large screen flying on a separate spacecraft positioned between the telescope and the star being observed to suppress the starlight before it enters the telescope. In many ways, coronagraphs and starshades are complementary. Coronagraphs are efficient for high-contrast surveys because it is easy to point the instrument to different targets. However, they have a lower optical throughput than a starshade, are difficult to design for telescopes with “exotic pupils,” and are very sensitive to wavefront perturbations. Even small aberrations introduce bright speckles, which degrade the instrument’s ability to observe exoplanets.5 Designing a coronagraph with 10^10 starlight suppression thus imposes many challenging requirements on the telescope and instruments; two mission concepts under study are the Habitable Exoplanet Observatory (HabEx)6 and the Large UV/Optical/Infrared Surveyor.7 In comparison, starshades are good at deep imaging and spectroscopic characterization. They are not sensitive to wavefront errors and can be designed to operate over a large bandpass. The total throughput is high since the starshade does not require any internal masking of the optical beam, which makes a starshade an excellent option for deep spectroscopy, especially at small inner working angles (IWA). However, one disadvantage of a starshade is the time it takes to slew the starshade and realign it with a new target star. The starshade’s ability to efficiently suppress the on-axis starlight while maintaining high throughput makes it an excellent tool for exploring the habitability of exoplanets. A recently studied potential mission is the Starshade Rendezvous Mission: a starshade that would work with the Nancy Grace Roman Space Telescope (previously called WFIRST).8 In this mission, the starshade is launched separately and rendezvouses with the telescope in orbit. Starshades are also baselined for the HabEx mission concept.6

Starshades are a new technology, still in development. Coronagraphs, however, have been used on ground-based telescopes for decades; even the Hubble Space Telescope has a rudimentary coronagraph. Available research on high-contrast imaging is therefore mostly about image processing and signal detection for coronagraph observations and focuses on alleviating the influence of quasi-static speckles. However, quasi-static speckles are not present in starshade images, so the emphasis shifts to the starshade’s own error sources. The dominant sources of noise in starshade images are sunlight scattering off the starshade edges9 and unsuppressed starlight caused by errors in the starshade shape.10 The scattered sunlight is confined to two extended lobes perpendicular to the direction of the Sun and is constant during observations.11 Its stability means that it could potentially be calibrated out, adding only photon noise to a small region around the starshade. Manufacturing and deployment errors and thermal deformations can distort the starshade shape and will produce bright spots in images that are hard to distinguish from a real planet signal. One example shape error we examine in this study is the truncation of the tips of starshade petals. Additional sources of noise in starshade images come from misalignment of the starshade and telescope, detector noise, and exozodiacal dust.12

Because the noise properties of coronagraph and starshade images differ, previous work on coronagraphs loses much of its utility on starshade images, which motivates the investigation of new techniques tailored to starshade data. In this paper, we focus on the impact of errors in the starshade shape and of exozodiacal dust and present an automatic detection algorithm, the generalized likelihood ratio test (GLRT), to provide robust detection on low-signal images in the presence of shape errors. We described the GLRT model and its preliminary results for simulated images with starshade shape errors, dynamics, and detector noise in Ref. 13. Here we review this detection method and introduce an iterative process to detect a planet in the presence of significant exozodiacal dust. This work focuses on signal detection without the need for post-processing [e.g., point spread function (PSF) subtraction]. Post-processing may improve the detection performance but could also complicate the data analysis and risks introducing artifacts into, or removing part of, the planetary signal. We believe demonstrating our signal detection method on unprocessed images strengthens the argument for its efficiency. A detailed investigation of post-processing is beyond the scope of this paper.

In Sec. 2, we describe the image simulation process used to generate the test set for our planet detection methods. Section 3 presents the GLRT detection method and represents the bulk of this work. An iterative approach of GLRT is presented in Sec. 4 to tackle the problem of exozodiacal dust. We end by summarizing our work.

2.

Image Generation

We briefly summarize the image generation process outlined in Fig. 1. A more detailed treatment can be found in Ref. 14. The input for the image generation process is a model of the solar system viewed face-on from 10 pc away developed as part of the Haystacks Project.15 This model contains multi-wavelength image slices, covering the range from 0.3 to 2.5  μm. Each image centers on the star, extending to 50 AU from it. The star and planets are represented as single-pixel sources and the model also includes interplanetary dust and astrophysical background sources. The pixel values in the image slices are spectral flux densities.

Fig. 1

Diagram of the image simulation process with an illustration of the starshade system (not drawn to scale). The illustration at the bottom (light sources, a starshade, and a telescope) is aligned with the description above it. The ROI is the area defined at the simulation input, i.e., the astronomical scene, beyond which the incoming light is treated as a plane wave that is not diffracted by the starshade.


The optical model to calculate the starshade diffraction uses Fresnel diffraction theory16 to propagate light past the starshade. However, calculating the propagation of each pixel separately in Haystacks is computationally expensive. Starshades have a noticeable influence on light propagation only for a small area in close proximity to the starshade, which we call the region of influence (ROI). The ROI is defined at the input plane of the simulation and is the angular separation of a source, beyond which the incoming light is considered a plane wave and is not diffracted by the starshade. The result for an off-axis light source outside the ROI is close to the result for the same light source as if there were no starshade. Thus we only use Fresnel propagation inside the ROI and simply convolve point sources outside the ROI with the telescope’s PSF. The starshade we use in this paper is 13 m in radius and has 16 petals, shown in Fig. 2.
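To illustrate this routing, the following Python sketch (our own minimal interface, not the actual simulation code) splits an input scene at the ROI boundary, hands the inner part to a user-supplied Fresnel propagator, and simply convolves the outer part with the telescope PSF. The function and parameter names here are assumptions for illustration.

```python
import numpy as np
from scipy.signal import fftconvolve

def propagate_scene(scene, pixscale_mas, roi_mas, psf, fresnel_propagate):
    """Split the input scene at the ROI: pixels within roi_mas of the star are
    diffracted by the (user-supplied) starshade Fresnel model, and pixels
    outside are simply convolved with the telescope PSF.  `fresnel_propagate`
    is a placeholder for the full diffraction calculation."""
    yy, xx = np.indices(scene.shape)
    cy, cx = scene.shape[0] // 2, scene.shape[1] // 2
    sep_mas = np.hypot(yy - cy, xx - cx) * pixscale_mas   # angular separation
    inside = sep_mas <= roi_mas
    image = fresnel_propagate(np.where(inside, scene, 0.0))          # diffracted part
    image = image + fftconvolve(np.where(inside, 0.0, scene), psf, mode="same")
    return image
```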

Fig. 2

The starshade used in this paper, which is 13 m in radius, has 16 petals, and is designed for the Starshade Rendezvous Mission.8


The image simulation incorporates the main factors that influence the image of a realistic system: a matrix represents the starshade outline; defects are added to the shape by adjusting this matrix; the alignment between the starshade and telescope is accounted for by adding time-dependent formation flying dynamics to the diffraction calculation; and a detector model for the Roman Telescope is used, which includes clock-induced charge, dark current, degradation during lifetime, polarization losses, and read noise.17

Planets are extremely faint, so their signals can be weak compared to the readout noise. To tackle this problem, the Roman Telescope uses an electron-multiplying charge-coupled device (EMCCD),17 which amplifies the signal in an electron-multiplication register. This process reduces the effective readout noise to less than one electron.18 EMCCDs introduce an additional noise source, the multiplicative noise associated with the amplification process, which can be eliminated by using the detector in photon counting (PC) mode. PC mode applies a chosen threshold to the number of electrons generated in each pixel and yields a value of one if the number of electrons is larger than the threshold and zero otherwise. It is a binary process that does not measure the exact number of photons but rather the presence of photons. Because PC mode cannot distinguish a one-photon event from a multi-photon event, the exposure time is kept short enough that the expected number of photons in any pixel of the detector is <1, so that few photons are lost to the binary thresholding. In this work, we choose an integration time of 1 s for each PC image; the maximum photon rate on the detector (looking at an Earth-like planet from 10 pc) is around 0.1 photon per second per pixel, so the expected number of photons per pixel per frame stays well below one.
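A minimal sketch of this photon-counting chain is shown below, with parameter values loosely following Table 1. The EM register is approximated here by a gamma distribution conditioned on the number of input electrons, an approximation in the spirit of the stochastic description in Ref. 18 rather than the exact detector model used in our simulations; all function and variable names are ours.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

def pc_frame(photon_rate, t=1.0, gain=2500.0, read_sigma=100.0, bias=200.0,
             dark=2e-4, cic=0.01, thresh_nsigma=5.5):
    """One thresholded photon-counting frame.  photon_rate: 2-D array of
    per-pixel photon rates (photon/s).  Returns a binary image."""
    lam = photon_rate * t + dark * t + cic             # expected electrons/pixel
    n_e = rng.poisson(lam)                             # photo + dark + CIC electrons
    amplified = np.where(n_e > 0,                      # EM register, gamma approximation
                         rng.gamma(np.maximum(n_e, 1), gain), 0.0)
    signal = amplified + bias + rng.normal(0.0, read_sigma, lam.shape)
    return (signal - bias > thresh_nsigma * read_sigma).astype(np.uint8)
```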

Typically, the “binary” PC images are not used directly but are added together to create a final image, which we call a co-added image. In this way, the co-added images have a large enough dynamic range that the different intensities of the sources are well reflected. Most of the images shown in this paper are co-added final images, unless otherwise stated. The number of PC images combined into one co-added image is denoted by Nim. In this work, we use Nim=2000 unless otherwise specified; guidance on how to choose Nim is provided in Sec. 3.6. The imaging field-of-view diameter is 9 arc sec and each pixel is 0.021 arcsec × 0.021 arcsec. To visualize the detector’s performance, we use a Monte Carlo simulation to calculate the probability density functions (PDFs) of the photon counts for different ground-truth photon fluxes in a co-added image built from 2000 PC images, shown in Fig. 3(a). Four of the PDFs are plotted in Fig. 3(b); the parameters of the simulations are listed in Table 1. A photon rate of 0.1 photon/s (our expected maximum rate) is well within the linear response regime.
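The linearity argument can be checked with a toy Monte Carlo that ignores the detector noise terms: each 1 s PC frame registers a count with probability 1 − exp(−rate), so the co-added value over Nim frames is binomial and its mean stays close to Nim × rate only while the rate is well below 1 photon/s. The flux values mirror Fig. 3(b); this sketch is illustrative and is not the simulation used for the figure.

```python
import numpy as np

rng = np.random.default_rng(seed=0)
nim = 2000                                      # PC frames per co-added image
for rate in (0.01, 0.1, 1.0, 5.0):              # photon/s, as in Fig. 3(b)
    p = 1.0 - np.exp(-rate)                     # P(at least one photon in 1 s)
    counts = rng.binomial(nim, p, size=100000)  # Monte Carlo co-added counts
    print(f"rate={rate}: mean count={counts.mean():.1f}, "
          f"linear prediction={nim * rate:.1f}")
```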

Fig. 3

PDFs of the photon counts for different ground-truth photon fluxes. (a) Each row is the PDF of the photon counts observed in one pixel of a co-added image built from 2000 PC images, as a function of the photon flux in that pixel. (b) Example PDFs of the photon counts for four ground-truth photon fluxes (0.01, 0.1, 1, and 5 counts/s), corresponding to rows of (a).


Table 1

Parameters for simulation of solar system as viewed from 10 pc.

Parameter | Value | Unit
Sun flux | 45.66 | Jy
Venus flux | 2.99×10^-8 | Jy
Earth flux | 4.85×10^-9 | Jy
Venus angular separation | 70 | mas
Earth angular separation | 96 | mas
Radius of the ROI | 100 | mas
Wavelength | 0.55 | μm
Bandwidth | 0.12 | μm
Radius of the starshade | 13 | m
Separation between starshade and telescope | 3.72×10^7 | m
Telescope diameter | 2.4 | m
Detector pixel size | 0.021 | arc sec
Quantum efficiency | 1 | ph/e
Integration time | 1 | s
Clock-induced charge | 0.01 | e/pixel/frame
Dark current | 2×10^-4 | e/pixel/s
Electron-multiplying gain | 2500
PC bias | 200 | e/pixel/frame
Standard deviation of readout noise | 100 | e/pixel/frame
Threshold parameter | 5.5
Number of PC images per co-added image | 2000

Figure 4 shows one wavelength slice of a Haystacks model, along with a few results from different stages of our simulation. Figure 4(a) shows the input of our simulation, a discretized astrophysical scene of our solar system as viewed from 10 pc. The pixel size is small enough to ensure that the Haystacks models are spatially high-fidelity. The Haystacks scene includes the star, planets, interplanetary dust, and astrophysical background sources. As described in Fig. 1, the light sources, i.e., Fig. 4(a), are diffracted at the starshade plane and then propagated to the telescope’s pupil plane. A simple (aberration-free) telescope model propagates the pupil plane to the detector plane, the result of which is shown in Fig. 4(b). Then Fig. 4(b) is processed with the detector model to generate one single PC image, which is shown in Fig. 4(c). By combining 2000 PC images, we get the co-added image [Fig. 4(d)]. The dark region in the center is the starlight suppression effect of the starshade. The brighter ring is the exozodiacal dust outside the starshade’s IWA. Mars is too dim to be seen, and Earth is of similar brightness to the exozodiacal dust. Venus is bright enough to be seen. In the following sections, we propose a detection and characterization method for planetary signals in co-added images at λ = 0.55 μm with a bandwidth of Δλ = 0.12 μm.

Fig. 4

The solar system as viewed from 10 pc and results from different stages of our simulation. They are zoomed-in views of the images. (a) The original astronomical scene from the Haystacks project15 at λ = 0.55 μm in log10-scale flux (the central star is made 10^10 times dimmer to reveal fainter features). (b) A simulated result before including the detector noise, with a perfect starshade at λ = 0.55 μm and with a bandwidth of Δλ = 0.12 μm. (c) One PC image. (d) The co-added image from 2000 PC images.


3.

Generalized Likelihood Ratio Test

This section presents the GLRT as an automatic detection method for starshade missions. Our work is motivated by the lack of any previous investigation into signal detection in starshade images. We begin the section by reviewing past work on signal detection for direct imaging, most of which is specialized to coronagraphic images.

The biggest challenge for coronagraphic images is the noise floor set by quasi-static speckles. Different observing techniques and post-processing methods have been developed to attenuate the speckles before attempting detection. They are all based on differential imaging, which consists of estimating the star-only coronagraphic image and subtracting it from the science images (also called speckle subtraction). Differential imaging relies on specific observation strategies, such as angular differential imaging (ADI),19 spectral diversity,20 or multiple reference star images,21 to generate the differential signal. The various speckle subtraction methods developed for coronagraphic images may serve to improve detection in starshade images, but including them is beyond the scope of this work. We focus on planet detection in images that have not been post-processed and leave that work for the future.

In our work, we do not post-process the images (as in speckle subtraction), so we now turn to detection methods. Braems and Kasdin22,23 used hypothesis testing for detection. The unknown parameters, such as the planets’ positions and intensities, can be removed by marginalizing the probability22 or by using worst-case values.23 However, the choice of priors or worst-case values remains an open question. Kasdin and Braems23 assumed a known constant background; in our method, we estimate the background via maximum likelihood. Mawet et al.24 (SNRt map) essentially tested whether the intensity of a single test resolution element differs from the other resolution elements in a 1λ/D-wide annulus at the same radius. They used small-sample statistics to address the statistical significance of a detection when only a few resolution elements are available in an annulus. Cantalloube et al.25 (ANDROMEDA) built a signal template that accounts for the over- and self-subtraction caused by ADI, rather than directly using a theoretical PSF template, when computing the maximum likelihood estimate (MLE). Ruffio et al.26 (FMMF) also used signal-to-noise ratio (SNR)-based methods. They include the Karhunen-Loève image projection (KLIP)-induced distortion in the matched-filter template to estimate the signal intensity. The standard deviation is calculated at each pixel while masking a disk with a 5-pixel radius centered on that pixel from the annulus, to prevent a planet from biasing its own SNR. Flasseur et al.27 (PACO) used a GLRT with the full covariance rather than assuming the frames are independent and identically distributed in time. However, they assumed that the covariance is the same under H0 and H1 and thus calculated only one covariance. Moreover, this covariance is not estimated jointly with the signal intensity from the Gaussian likelihood, so the result is not guaranteed to be the MLE. Pairet et al.28 proposed the STIM map, which models the speckles with a modified Rician distribution, a heavier-tailed distribution than the Gaussian. Dahlqvist et al.29 (RSM map) used a Markov process to model the same pixel throughout the different images, arguing that the residual quasi-static speckles after ADI can be characterized by their mean and variance. Recent studies have also treated the detection problem as a binary classification problem, using random-forest classifiers (SODIRF) and deep neural networks (SODINN).30 However, these machine learning methods need very large training sets, which are difficult to generate for unknown planet signals, and require heuristic tuning of hyperparameters.

In this study, we also treat the detection problem as a hypothesis test and use a GLRT. The null hypothesis (H0) is that there is no planet; the alternative hypothesis (H1) is that there is a planet. We compare the posterior probabilities under the two hypotheses to decide whether to reject the null. Instead of marginalization, we use MLE to first estimate the unknown parameters (intensity, position, and standard deviation of the noise) and then use them to calculate the likelihood ratio. Using this ratio, even when the maximum likelihood under the alternative hypothesis is low, we may still have a strong detection as long as the ratio is high (i.e., the observed pattern is much less likely to arise from pure noise). We first introduced the GLRT model and its preliminary results in Ref. 13. In this section, we briefly describe the method, show recent improvements, and present results.

3.1.

GLRT Model for the Whole Image

The model for an image containing multiple planet signals, background, and noise is given by

Eq. (1)

\[ I = \sum_{i=1}^{N_x} \sum_{j=1}^{N_y} \alpha_{i,j} P_{i,j} + b + \omega, \]
where $I$ is the matrix for an image; $\alpha_{i,j}$ is the intensity of the signal at pixel $(i,j)$ in units of Jy (the value is zero if there is no signal at $(i,j)$); $P_{i,j}$ is the matrix denoting the values of a normalized PSF centered at $(i,j)$; $b$ is the matrix for the background, which contains residual starlight and the detector bias; $\omega$ is the matrix for the noise; and $N_x, N_y$ are the numbers of pixels along the $x$ and $y$ axes.

We assume that the noise in different pixels consists of independent and identically distributed Gaussian random variables with mean zero and unknown constant variance. As each final image is the combination of many PC images, a Gaussian distribution should be a good approximation for the noise due to the central limit theorem. Because a Gaussian distribution is used as an approximation for the true underlying distribution, the estimation is called Gaussian quasi-maximum likelihood estimation (QMLE). As long as the quasi-likelihood function is not oversimplified, the QMLE is consistent and asymptotically normal. It is less efficient than the MLE, but may be only slightly less efficient if the quasi-likelihood is constructed so as to minimize the loss of information relative to the actual likelihood.31 We can then calculate the QMLE of $\alpha_{i,j}$ and $b$, which is equivalent to solving the optimization problem:

Eq. (2)

\[ \min_{\alpha_{i,j},\, b}\; \left\| I - \sum_{i=1}^{N_x} \sum_{j=1}^{N_y} \alpha_{i,j} P_{i,j} - b \right\|^2 . \]

This is an under-determined problem, as we have $2N_xN_y$ unknown parameters (the $\alpha_{i,j}$ and $b$) but only $N_xN_y$ data points (the pixel values). Even assuming a constant background over the whole image, we still have $N_xN_y+1$ unknown parameters. Moreover, the assumption of a constant background is not ideal, as the background may have local features. To approach the problem, we use the idea of divide-and-conquer. If we assume no overlapping signals in the image, we are able to individually analyze smaller search areas that are the size of the PSF core.

3.2.

GLRT Model for a Search Area

In a small search area the size of the PSF core, we can assume that there is only one planet signal. We also assume the background is constant, which should be a reasonable assumption over a small area. Moreover, the noise samples are independent and identically distributed Gaussian random variables. Thus the model for a search area is

Eq. (3)

\[ x_{i,j} = \alpha_{i,j} P_{i,j} + b_{i,j} \mathbf{1} + \omega_{i,j}, \]
where xi,j denotes the search area centered on the global pixel (i,j); bi,j is the background intensity; and ωi,j denotes the Gaussian noise in this area. We only use the central core of the PSF for Pi,j centered at (i,j). The model states that the area we are observing xi,j contains a signal centered at the center of this area along with constant background light and Gaussian noise. If there is no planet signal, αi,j should be zero. We stack pixel values in this target area into a column vector for easier mathematical manipulation. We do the same to the reference PSF and the noise matrix. The local model can be reformulated as a classical linear model:

Eq. (4)

\[ G_{i,j} = \begin{bmatrix} P_{i,j}(1) & 1 \\ P_{i,j}(2) & 1 \\ \vdots & \vdots \\ P_{i,j}(N) & 1 \end{bmatrix}, \qquad \theta_{i,j} = \begin{bmatrix} \alpha_{i,j} \\ b_{i,j} \end{bmatrix}, \]

Eq. (5)

\[ x_{i,j} = G_{i,j}\, \theta_{i,j} + \omega_{i,j}, \]
where xi,j is the vectorized target area centered at (i,j); N is the number of pixels in the area; bi,j is the constant background; Pi,j(m) is the value at the m’th pixel in the known vectorized template PSF centered at (i,j) for a point source signal of normalized intensity; αi,j is the planet’s intensity; and ωi,j is an N×1 noise vector. θi,j is unknown.

The conditional probability of this search area can be written as

Eq. (6)

\[ L(\theta_{i,j}, \sigma_{i,j}^2) = p(x_{i,j} \,|\, \theta_{i,j}, \sigma_{i,j}^2) = \frac{1}{(2\pi \sigma_{i,j}^2)^{N/2}} \exp\!\left( -\frac{1}{2\sigma_{i,j}^2} \| x_{i,j} - G_{i,j}\theta_{i,j} \|^2 \right). \]
As the data xi,j are known and the parameters θi,j,σi,j2 are unknown, this probability function is a likelihood function for the unknown parameters [so it is also denoted as L(θi,j,σi,j2) above]. Taking the natural logarithm of both sides of Eq. (6), the log-likelihood of the search area in the co-added image is

Eq. (7)

\[ l(\theta_{i,j}, \sigma_{i,j}^2) = \ln L(\theta_{i,j}, \sigma_{i,j}^2) = -\frac{N}{2}\ln(2\pi) - \frac{N}{2}\ln(\sigma_{i,j}^2) - \frac{1}{2\sigma_{i,j}^2} \| x_{i,j} - G_{i,j}\theta_{i,j} \|^2 . \]
We maximize the log-likelihood function (equivalently maximizing the likelihood) to find the maximum likelihood estimator (the subscripts i,j for the estimated parameters are left out for simplicity):

Eq. (8)

\[ (\hat{\theta}_1, \hat{\sigma}_1^2) = \arg\max_{\theta_{i,j},\, \sigma_{i,j}^2} \, l(\theta_{i,j}, \sigma_{i,j}^2). \]
As mentioned previously, a Gaussian distribution is used as an approximation for the true underlying distribution; the estimation is also QMLE. The resulting estimation under H1 is

Eq. (9)

\[ \hat{\theta}_1 = (G_{i,j}^T G_{i,j})^{-1} G_{i,j}^T x_{i,j}. \]
And the estimated variance under H1 is

Eq. (10)

\[ \hat{\sigma}_1^2 = \frac{1}{N} (x_{i,j} - G_{i,j}\hat{\theta}_1)^T (x_{i,j} - G_{i,j}\hat{\theta}_1). \]
Meanwhile, the QMLE under H0 is

Eq. (11)

\[ \hat{\theta}_0 = \hat{\theta}_1 - (G_{i,j}^T G_{i,j})^{-1} A^T \left[ A (G_{i,j}^T G_{i,j})^{-1} A^T \right]^{-1} (A \hat{\theta}_1), \]
which is obtained from solving the constrained optimization problem:

Eq. (12)

\[ \min_{\theta} \; \| x_{i,j} - G_{i,j}\theta \|^2 \quad \text{s.t.} \quad A\theta = 0, \]
where A=[1,0]. And the estimated variance under H0 is

Eq. (13)

\[ \hat{\sigma}_0^2 = \frac{1}{N} (x_{i,j} - G_{i,j}\hat{\theta}_0)^T (x_{i,j} - G_{i,j}\hat{\theta}_0). \]
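A minimal numerical sketch of Eqs. (9)-(13) for a single search area is given below; function and variable names are ours. With $A = [1, 0]$, the constrained estimate of Eq. (11) reduces to a constant-background fit.

```python
import numpy as np

def qmle_search_area(x, psf_core):
    """QMLE under H1 and H0 for one vectorized search area x (length N) and a
    vectorized PSF-core template psf_core."""
    N = x.size
    G = np.column_stack([psf_core, np.ones(N)])        # Eq. (4)
    theta1, *_ = np.linalg.lstsq(G, x, rcond=None)      # Eq. (9)
    sigma1_sq = np.sum((x - G @ theta1) ** 2) / N       # Eq. (10)
    # With A = [1, 0], Eq. (11) forces alpha = 0, leaving b = mean(x).
    theta0 = np.array([0.0, x.mean()])
    sigma0_sq = np.sum((x - G @ theta0) ** 2) / N       # Eq. (13)
    return theta1, sigma1_sq, theta0, sigma0_sq
```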

In the problem of parameter estimation, we obtain information about the unknown parameters from the observed data of the random variables from the probability distribution governed by the parameters. The Fisher information matrix is a way to quantify the amount of information that the observable random variables carry about the unknown parameters. The definition of the Fisher information matrix is

Eq. (14)

\[ \mathcal{I}(\phi) = \operatorname{var}_{\phi}\!\left\{ \frac{\partial l(\phi)}{\partial \phi} \right\} = -E_{\phi}\!\left\{ \frac{\partial^2 l(\phi)}{\partial \phi\, \partial \phi^T} \right\}, \]
where the notation “var” means the variance; “E” means expectation; and $\phi$ is the vector of unknown parameters, with $\phi = (\alpha_{i,j}, b_{i,j}, \sigma_1^2)^T$ for the alternative hypothesis $H_1$. In our case (a linear model), the Fisher information matrix reduces to32

Eq. (15)

\[ \mathcal{I}(\phi) = \begin{pmatrix} -E\!\left( \dfrac{\partial^2 l}{\partial \theta_1 \partial \theta_1^T} \right) & 0 \\ 0 & -E\!\left( \dfrac{\partial^2 l}{\partial (\sigma_1^2)^2} \right) \end{pmatrix}. \]
Due to the block structure of $\mathcal{I}(\phi)$, the variance–covariance of $\hat{\theta}_1$ can be estimated by $\mathcal{I}_{\hat{\theta}_1}^{-1}$,32 where

Eq. (16)

\[ \mathcal{I}_{\hat{\theta}_1} = -\frac{\partial^2 l(\hat{\theta}_1, \hat{\sigma}_1^2)}{\partial \theta_1 \partial \theta_1^T} = \frac{G_{i,j}^T G_{i,j}}{\hat{\sigma}_1^2}, \]
and we can also derive the confidence intervals (CIs) for the QMLE:33

Eq. (17)

\[ \hat{\alpha}_{i,j} \pm z \sqrt{ \left( \mathcal{I}_{\hat{\theta}_1}^{-1} \right)_{11} }, \]
and

Eq. (18)

\[ \hat{b}_{i,j} \pm z \sqrt{ \left( \mathcal{I}_{\hat{\theta}_1}^{-1} \right)_{22} }, \]
where $z$ is the appropriate critical value (for example, 1.96 for 95% confidence), and the notation $(\mathcal{I}_{\hat{\theta}_1}^{-1})_{ii}$ means that we invert the matrix $\mathcal{I}_{\hat{\theta}_1}$ first and then take the $ii$ component of the inverted matrix. The variance of $\hat{\sigma}_1^2$ is estimated by $\mathcal{I}^{-1}(\hat{\sigma}_1^2)$,32 where

Eq. (19)

\[ \mathcal{I}(\hat{\sigma}_1^2) = -\frac{\partial^2 l(\hat{\theta}_1, \hat{\sigma}_1^2)}{\partial (\sigma_1^2)^2} = \frac{N}{2\hat{\sigma}_1^4}. \]
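The CIs of Eqs. (16)-(19) follow directly from the H1 estimates; the sketch below is illustrative (our own helper, not the authors' code) and assumes a Gaussian sampling distribution for the estimates.

```python
import numpy as np

def qmle_confidence_intervals(x, psf_core, z=1.96):
    """Approximate 95% CIs for the estimated intensity and background."""
    N = x.size
    G = np.column_stack([psf_core, np.ones(N)])
    theta1, *_ = np.linalg.lstsq(G, x, rcond=None)     # H1 estimate, Eq. (9)
    sigma1_sq = np.sum((x - G @ theta1) ** 2) / N      # Eq. (10)
    fim = G.T @ G / sigma1_sq                          # Eq. (16)
    cov = np.linalg.inv(fim)                           # invert first, then take diagonal
    half = z * np.sqrt(np.diag(cov))
    alpha_ci = (theta1[0] - half[0], theta1[0] + half[0])   # Eq. (17)
    b_ci = (theta1[1] - half[1], theta1[1] + half[1])       # Eq. (18)
    var_sigma1_sq = 2.0 * sigma1_sq ** 2 / N                # inverse of Eq. (19)
    return alpha_ci, b_ci, var_sigma1_sq
```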

3.3.

Detection in a Search Area

The detection problem becomes a hypothesis testing problem:

Eq. (20)

\[ H_0:\; A\theta_{i,j} = \alpha_{i,j} = 0 \quad \text{versus} \quad H_1:\; A\theta_{i,j} = \alpha_{i,j} \neq 0, \]
where A=[1,0].

To decide which hypothesis is true, it is intuitive to compare the posterior probability of H1 and H0 when xi,j occurs. We use Bayes’ rule and define the odds ratio:

Eq. (21)

\[ O(x_{i,j}) = \frac{P(H_1 \,|\, x_{i,j})}{P(H_0 \,|\, x_{i,j})} = \frac{\int f(z)\, P(x_{i,j} \,|\, \alpha_{i,j}=z, b_{i,j}, \sigma_{i,j})\, \mathrm{d}z}{P(\alpha_{i,j}=0)\, P(x_{i,j} \,|\, \alpha_{i,j}=0, b_{i,j}, \sigma_{i,j})}, \]
where f(z) is the PDF of the planet intensity and σi,j is the standard deviation of the noise. We should decide H1 is true if the odds ratio is larger than one.22 However, we do not know f(z), bi,j, or σi,j. It is reasonable to compare

Eq. (22)

\[ R(x_{i,j}) = \frac{\max_{\alpha_{i,j}, b_{i,j}, \sigma_{i,j}} P(x_{i,j} \,|\, \alpha_{i,j}, b_{i,j}, \sigma_{i,j})}{\max_{b_{i,j}, \sigma_{i,j}} P(x_{i,j} \,|\, \alpha_{i,j}=0, b_{i,j}, \sigma_{i,j})} \]
to a threshold. This test is called the GLRT. Moreover, not only is the planet intensity $\alpha_{i,j}$ unknown, but so are the background intensity and the noise statistics. We use the QMLE mentioned above. Then the ratio becomes

Eq. (23)

\[ R(x_{i,j}) = \frac{P(x_{i,j} \,|\, \hat{\theta}_{i,j,1}, \hat{\sigma}_{i,j,1};\, H_1)}{P(x_{i,j} \,|\, \hat{\theta}_{i,j,0}, \hat{\sigma}_{i,j,0};\, H_0)}, \]
where θ^i,j,k,σ^i,j,k are the estimated values under hypothesis Hk, {k=0,1}.

To simplify further, H1 is favored, i.e., a planet is more likely to exist at the test location, if

Eq. (24)

\[ T(x_{i,j}) = (N-2)\left( R(x_{i,j})^{2/N} - 1 \right) = (N-2)\, \frac{\hat{\sigma}_{i,j,0}^2 - \hat{\sigma}_{i,j,1}^2}{\hat{\sigma}_{i,j,1}^2} = (N-2)\, \frac{\hat{\theta}_{i,j,1}^T A^T \left[ A (G_{i,j}^T G_{i,j})^{-1} A^T \right]^{-1} A \hat{\theta}_{i,j,1}}{x_{i,j}^T \left( \mathbb{1}_N - G_{i,j} (G_{i,j}^T G_{i,j})^{-1} G_{i,j}^T \right) x_{i,j}} > \gamma, \]
where the threshold $\gamma$ is chosen based on the desired detection performance.34 The probability of false alarm (FA), also called the false alarm rate or false positive rate, $P_{\mathrm{FA}}$, and the probability of detection, also called the true positive rate, $P_D$, are given by

Eq. (25)

\[ P_{\mathrm{FA}}^{\,i,j} = \int_{T(x_{i,j}) > \gamma} p(x_{i,j} \,|\, H_0)\, \mathrm{d}x_{i,j} = Q_{F_{1, N-2}}(\gamma), \]

Eq. (26)

\[ P_{D}^{\,i,j} = \int_{T(x_{i,j}) > \gamma} p(x_{i,j} \,|\, H_1)\, \mathrm{d}x_{i,j} = Q_{F'_{1, N-2}(\lambda_{i,j})}(\gamma), \]
where $Q$ is the probability of exceeding a given value; $F_{1,N-2}$ is an F distribution with one numerator degree of freedom and $N-2$ denominator degrees of freedom; and $F'_{1,N-2}(\lambda_{i,j})$ is a noncentral F distribution with one numerator degree of freedom, $N-2$ denominator degrees of freedom, and noncentrality parameter $\lambda_{i,j}$.34 $\lambda_{i,j}$ is given by

Eq. (27)

\[ \lambda_{i,j} = \frac{\theta_{i,j,1}^T A^T \left[ A (G_{i,j}^T G_{i,j})^{-1} A^T \right]^{-1} A\, \theta_{i,j,1}}{\sigma^2}, \]
where $\theta_{i,j,1}$ is the true parameter value under $H_1$ and $\sigma^2$ is the true variance of the noise. This tells us that the probability of an FA depends only on the threshold, whereas the probability of detection also depends on the planet intensity: the brighter the planet, the higher the detection probability.
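In practice, the threshold and the corresponding detection probability can be evaluated with standard F-distribution routines. The sketch below uses SciPy under the Gaussian model; the function names are ours, and with $A = [1, 0]$ the noncentrality parameter of Eq. (27) reduces to $\alpha^2 / (\sigma^2 [(G^T G)^{-1}]_{11})$.

```python
import numpy as np
from scipy import stats

def threshold_for_pfa(pfa, N):
    """Invert Eq. (25): the threshold gamma giving false alarm rate pfa."""
    return stats.f.isf(pfa, 1, N - 2)

def detection_probability(gamma, psf_core, alpha, sigma_sq):
    """Eq. (26) for a planet of intensity alpha and noise variance sigma_sq."""
    N = psf_core.size
    G = np.column_stack([psf_core, np.ones(N)])
    lam = alpha ** 2 / (sigma_sq * np.linalg.inv(G.T @ G)[0, 0])  # Eq. (27)
    return stats.ncf.sf(gamma, 1, N - 2, lam)
```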

3.4.

Detection in the Whole Image

We have built a model and a corresponding detection and estimation method for a search area that is the size of the PSF core. However, the potential planets’ locations are unknown, so to perform detection in the whole image, we traverse the whole image using the method outlined in Sec. 3.3. For each pixel, we use the search area centered at that pixel; that is, we test H1 (there is a planet centered at this pixel) against H0 (there is only constant background there). After calculating T for all the pixels in the image, we choose an FA rate and apply its corresponding threshold. Examples are illustrated in Fig. 5. As the PSF changes with the distance from the starshade, our detection algorithm uses a library of reference PSFs.
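The traversal can be written as a brute-force sliding window. The sketch below (our own code, not the authors' implementation) uses a single PSF-core template for simplicity, whereas the actual algorithm draws the template from a radius-dependent PSF library; negative intensity estimates are clipped to T = 0, as discussed for edge pixels below.

```python
import numpy as np

def glrt_t_map(image, psf_core):
    """Brute-force GLRT T map (Eq. 24) with a single PSF-core template."""
    h = psf_core.shape[0] // 2
    P = psf_core.ravel()
    N = P.size
    G = np.column_stack([P, np.ones(N)])
    T = np.zeros_like(image, dtype=float)
    for i in range(h, image.shape[0] - h):
        for j in range(h, image.shape[1] - h):
            x = image[i - h:i + h + 1, j - h:j + h + 1].ravel()
            theta1, *_ = np.linalg.lstsq(G, x, rcond=None)
            if theta1[0] <= 0:                       # clip negative intensity estimates
                continue
            s1 = np.sum((x - G @ theta1) ** 2) / N   # sigma_hat_1^2
            s0 = np.sum((x - x.mean()) ** 2) / N     # sigma_hat_0^2
            T[i, j] = (N - 2) * (s0 - s1) / max(s1, 1e-30)
    return T
```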

Fig. 5

Example of the GLRT detection on an image with a perfect starshade and without exozodiacal dust. (a) Noiseless image: it should be taken as the ground truth for image processing. (b) Image with detector noise: examples of search areas are also shown. The white box is centered on pixel (7,12), which is marked by the white asterisk. The pixel values in the white box form the data $x_{7,12}$ in Eq. (3). The magenta box forms the data $x_{8,14}$ and is a case where the search area is at the edge of the PSF. (c) The T values from Eq. (24) at each pixel. (d) FA rate map. (e) Binary detection image after thresholding. We apply a threshold of 0.7354 to the T map, which corresponds to an FA rate of 0.4. The white circles are the minimal bounding circles used to estimate the planet positions. (f) The relationship between threshold and FA rate from Eq. (25).


When the pixel is at the boundary of a PSF in the image [one example is shown as a magenta asterisk in Fig. 5(b)], the detection area contains only part of the planet, which is not centered at the pixel, so neither H1 nor H0 is true and the MLE of the planet intensity can be negative. We therefore set those negative estimates to zero, and thus $T(x_{i,j}) = 0$.

After thresholding, we get a binary image. Generally speaking, some pixels next to the signal center will also be detected. Thus, to estimate the position of a planet, we first find the convex hull of each detected region in the thresholded image and then find the minimal circle bounding each convex hull. The centers of the circles are the estimates of the planets’ positions. One example is shown in Fig. 5(e). The estimated planet intensity is the MLE of the intensity at the estimated planet position. As we have shown in our previous work,14 the PSF changes with the distance from the starshade center. If we use only one PSF template in the GLRT, normally the one without a starshade, we can have a higher FA rate, worse position estimation, and worse intensity estimation. Thus we need a library of PSFs at different distances from the starshade center for the GLRT model. In this paper, we define intensity error = (estimated intensity − real intensity)/real intensity and report its value for all cases.
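For completeness, a simplified position estimator is sketched below: it labels connected blobs in the thresholded map and returns each blob's center of mass, which is only a stand-in for the minimal-bounding-circle center used in the paper.

```python
import numpy as np
from scipy import ndimage

def estimate_positions(binary_detection_map):
    """Approximate planet positions from the thresholded detection image."""
    labels, n_blobs = ndimage.label(binary_detection_map)
    # Center of mass of each blob; the paper instead uses the center of the
    # minimal circle bounding each blob's convex hull.
    return ndimage.center_of_mass(binary_detection_map, labels,
                                  range(1, n_blobs + 1))
```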

3.5.

Results

Two detection and estimation examples are presented in Figs. 5 and 6. In Fig. 5, the detection process is demonstrated step by step on an image with a perfect starshade. In Fig. 6, we demonstrate the GLRT’s ability to distinguish real signals from fake ones by providing the detection result for an image with a starshade with truncated tips. Compared to the perfect starshade, the tip of one of this starshade’s petals is truncated by 6.5 mm, resulting in a tip width of 48.3035 mm (the radius of the designed starshade is 13 m and the designed petal tip width is 48.3216 mm). We refer to this starshade as the clipped starshade. This defect causes a bright spot in the image, which could be mistaken for a planet. We still use the PSF templates of the perfect starshade for this clipped starshade case. The GLRT successfully distinguishes Venus and Earth from the fake signal. The intensity and position estimation errors for these two examples are presented in Table 2. The fake signal is close to Venus, so the intensity estimation of Venus is degraded in the clipped starshade case. The pixel size of the images is 0.021 arc sec, so the position estimation is accurate to the pixel level.

Fig. 6

Example of the GLRT detection on an image with the clipped starshade. (a) Noiseless image: the defect on the starshade causes a bright spot that resembles a planet; (b) image with detector noise applied; (c) the T values from Eq. (24) at each pixel; (d) FA rate map; and (e) binary detection image after thresholding. We apply a threshold of 0.7354 to the T map, which corresponds to an FA rate of 0.4. The GLRT successfully detects Venus and Earth and ignores the fake signal in the image.


Table 2

Comparison of the intensity and position estimation errors for images simulated with different starshades.

Starshade | Intensity error for Venus (%) | Intensity error for Earth (%) | Position error for Venus (arc sec) | Position error for Earth (arc sec)
Perfect starshade | 1.5 | 1.2 | 3×10^-3 | 9.5×10^-3
Clipped starshade | 20.5 | 4.1 | 3×10^-3 | 9.5×10^-3

We also calculate the receiver operating characteristic (ROC) curves for Venus and Earth, shown in Fig. 7, where we compare the performance of the GLRT for co-added images built from different numbers of PC images, denoted $N_{\mathrm{im}}$. Equations (25) and (26) give the theoretical FA rate and true positive (TP) rate under a Gaussian assumption and are therefore only an approximation. Moreover, as shown in Eq. (27), the calculation of the TP rate needs the value of the true variance, which we must estimate. Thus, to more accurately demonstrate the detection performance, we use a Monte Carlo simulation. To calculate the ROC curves, we apply the GLRT to get the FA rate map for a set of different thresholds and record whether each threshold results in a detection or a missed detection of Earth, Venus, and a background pixel. We run a large number of trials, denoted $n_{\mathrm{trials}}$, record the fraction of trials in which Earth and Venus are detected as the TP rates for Earth and Venus, and record the fraction of trials in which the background pixel is flagged as the false positive rate. The CI of a proportion $\hat{p}$ is35

Eq. (28)

\[ \left( \hat{p} - z\sqrt{\frac{\hat{p}(1-\hat{p})}{n_{\mathrm{trials}}}}, \;\; \hat{p} + z\sqrt{\frac{\hat{p}(1-\hat{p})}{n_{\mathrm{trials}}}} \right). \]
Thus for a point $(\hat{p}_{\mathrm{false}}, \hat{p}_{\mathrm{positive}})$ on an ROC curve, we bound the CI using the two points:

Eq. (29)

\[ \left( \hat{p}_{\mathrm{false}} - z\sqrt{\frac{\hat{p}_{\mathrm{false}}(1-\hat{p}_{\mathrm{false}})}{n_{\mathrm{trials}}}}, \;\; \hat{p}_{\mathrm{positive}} + z\sqrt{\frac{\hat{p}_{\mathrm{positive}}(1-\hat{p}_{\mathrm{positive}})}{n_{\mathrm{trials}}}} \right), \]
and

Eq. (30)

\[ \left( \hat{p}_{\mathrm{false}} + z\sqrt{\frac{\hat{p}_{\mathrm{false}}(1-\hat{p}_{\mathrm{false}})}{n_{\mathrm{trials}}}}, \;\; \hat{p}_{\mathrm{positive}} - z\sqrt{\frac{\hat{p}_{\mathrm{positive}}(1-\hat{p}_{\mathrm{positive}})}{n_{\mathrm{trials}}}} \right). \]
For each ROC curve, we apply these two boundaries to draw the shaded area as the CI. ROC curves for Venus and Earth with different numbers of PC images combined into one co-added image, $N_{\mathrm{im}}$, and with different integration times are shown in Fig. 7. As Venus is brighter than Earth, the performance for Venus is better than that for Earth. Increasing the integration time has a similar effect to increasing the planet intensity because both increase the expected number of photons arriving at a pixel. Moreover, the performance for both Venus and Earth improves with a higher number of PC images in the co-added image.
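The CI band of Eqs. (28)-(30) is straightforward to compute from the Monte Carlo proportions; a short sketch (our own helper) follows.

```python
import numpy as np

def roc_confidence_band(fpr, tpr, n_trials, z=1.96):
    """95% CI band around a Monte Carlo ROC curve, following Eqs. (28)-(30).
    fpr, tpr: arrays of false positive and true positive rates."""
    half_f = z * np.sqrt(fpr * (1.0 - fpr) / n_trials)
    half_t = z * np.sqrt(tpr * (1.0 - tpr) / n_trials)
    upper = (fpr - half_f, tpr + half_t)   # Eq. (29)
    lower = (fpr + half_f, tpr - half_t)   # Eq. (30)
    return lower, upper
```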

Fig. 7

ROC with CI for Venus and Earth using the GLRT with different integration times. (a) ROC for co-added images each built from $N_{\mathrm{im}} = 200$ PC images. The curves for an integration time of 10 s overlap with the plot’s boundary, which indicates perfect detection performance. ROC for co-added images each built from (b) $N_{\mathrm{im}} = 700$ PC images and (c) $N_{\mathrm{im}} = 2000$ PC images. The shaded region behind each ROC curve is its 95% CI.


3.6.

Optimal Number of PC Images for One Co-Added Image

As mentioned before, the number of PC images combined into one co-added image, $N_{\mathrm{im}}$, is an important hyperparameter that must be chosen before performing detection with most methods. For example, the GLRT’s performance varies with the number of PC images per co-added image, as shown in Fig. 7. If $N_{\mathrm{im}}$ is too small, the TP and FA rates of the detection may not be acceptable; if $N_{\mathrm{im}}$ is too large, precious observation time is wasted. To choose the best $N_{\mathrm{im}}$, we can utilize the ROC curves.

First, given the integration time, three parameters need to be specified: the minimum planet intensity to be detected, the maximum FA rate that can be accepted, and the minimum TP rate that is acceptable. Then, for each candidate $N_{\mathrm{im}}$, the ROC is calculated via Monte Carlo simulation. Finally, the minimum $N_{\mathrm{im}}$ that meets the requirements is chosen. For example, suppose the dimmest planet to be detected has the same intensity as Earth, the maximum acceptable FA rate is 0.16, and the minimum acceptable TP rate is 0.85. The acceptable FA rate and TP rate pairs then lie in the shaded green region in Fig. 8. We calculate the ROCs with different $N_{\mathrm{im}}$ for an integration time of 1 s and find that the ROC for $N_{\mathrm{im}} = 700$ is the first one to reach the green region in Fig. 8. Thus we choose $N_{\mathrm{im}} = 700$ as the optimal number of PC images to co-add into the final image.
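This selection rule amounts to picking the smallest $N_{\mathrm{im}}$ whose ROC enters the acceptable region; a sketch of the bookkeeping (assuming the ROCs have already been computed by Monte Carlo, with names of our choosing) is shown below.

```python
import numpy as np

def smallest_acceptable_nim(rocs, max_fa=0.16, min_tp=0.85):
    """rocs: {Nim: (fpr_array, tpr_array)} from Monte Carlo simulations.
    Returns the smallest Nim whose ROC reaches the acceptable region."""
    for nim in sorted(rocs):
        fpr, tpr = rocs[nim]
        if np.any((np.asarray(fpr) <= max_fa) & (np.asarray(tpr) >= min_tp)):
            return nim
    return None   # no candidate meets the requirements
```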

Fig. 8

ROC with CI for Earth using the GLRT with different $N_{\mathrm{im}}$. The horizontal green dotted line marks the minimum acceptable TP rate and the vertical one the maximum acceptable FA rate; the acceptable region is thus the shaded green area. The ROC with $N_{\mathrm{im}} = 700$ is the first ROC to reach the acceptable region.


3.7.

Comparison with Other Methods

We also compared the GLRT with the SNR-map-based detection method implemented in pyKLIP.36 That algorithm computes the standard deviation in concentric annuli, after masking the signal area in question, as the noise level for the SNR. In this paper, the width of the annuli is the diameter of the PSF core. The SNR maps for the example co-added images in Figs. 5(b) and 6(b) are shown in Fig. 9 to help visually compare the performance with the GLRT method. ROC curves are also calculated, shown in Fig. 10. The calculation uses the same set of images as the ones used for Fig. 7. As it is hard to compare the curves visually, we list the area under the ROC curve (AUC) for all the curves in Table 3. The AUC is an aggregate measure of performance across all possible thresholds. It can be interpreted as the probability that the model ranks a random positive example higher than a random negative example; the AUC is 1 if the model’s decisions are all correct and 0 if they are all wrong. The disadvantage of this SNR definition is that the standard deviation calculation is biased by the presence of point sources in the annuli. In our case, the radii of Venus and Earth from the image center are close, so each signal partially falls in the other’s annulus when the noise standard deviation is calculated. Thus the SNR is biased, which is validated by the deteriorating performance for Earth when the signals become stronger due to increased integration time or $N_{\mathrm{im}}$, shown in Figs. 10(b) and 10(c). Overall, the GLRT outperforms the SNR method on images that have not been post-processed; it is beyond the scope of this work to examine the effects of post-processing on each detection method.
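For reference, a simplified annulus-based SNR map in the spirit of this comparison is sketched below; it omits pyKLIP's implementation details and the small-sample correction of Mawet et al.24 and is not the code used for Fig. 9.

```python
import numpy as np

def annulus_snr_map(image, center, ann_width, mask_radius):
    """Simplified SNR map: for each pixel, the noise is the standard deviation
    of pixels in an annulus at the same radius, excluding a small disk around
    the pixel under test."""
    yy, xx = np.indices(image.shape)
    r = np.hypot(yy - center[0], xx - center[1])
    snr = np.zeros_like(image, dtype=float)
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            in_annulus = np.abs(r - r[i, j]) < ann_width / 2.0
            near_test = np.hypot(yy - i, xx - j) < mask_radius
            noise = image[in_annulus & ~near_test]
            if noise.size > 1 and noise.std() > 0:
                snr[i, j] = (image[i, j] - noise.mean()) / noise.std()
    return snr
```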

Fig. 9

The SNR maps calculated with the pyKLIP package:36 results for the images in (a) Fig. 5(b) and (b) Fig. 6(b).


Fig. 10

ROC with CI for Venus and Earth using pyKLIP with different integration times. (a)–(c) The results where Earth and Venus are at their real locations, as shown in Fig. 5(a). (d)–(f) The results where we eliminate the planets’ impact on each other: when calculating the ROCs for Earth, Venus is not included in the images; when calculating the ROCs for Venus, Earth is not included. (a), (d) ROC for co-added images each built from $N_{\mathrm{im}} = 200$ PC images; (b), (e) from $N_{\mathrm{im}} = 700$ PC images; and (c), (f) from $N_{\mathrm{im}} = 2000$ PC images.


Table 3

Comparison of AUC for GLRT and SNR method from pyKLIP.36 Unimpacted means only one planet is included in the image simulation.

Method | Venus 10 s | Earth 10 s | Venus 1 s | Earth 1 s | Venus 0.5 s | Earth 0.5 s
GLRT, 200 PC | 1 | 1 | 0.9883 | 0.7374 | 0.8880 | 0.5797
GLRT, 700 PC | 1 | 1 | 1 | 0.9503 | 0.9963 | 0.7490
GLRT, 2000 PC | 1 | 1 | 1 | 0.9987 | 1 | 0.9275
SNR, 200 PC | 1 | 0.8299 | 0.9714 | 0.6953 | 0.8804 | 0.6219
SNR, 700 PC | 1 | 0.5059 | 0.9993 | 0.7159 | 0.9799 | 0.6863
SNR, 2000 PC | 1 | 0.0325 | 1 | 0.4554 | 0.9985 | 0.6112
Unimpacted SNR, 200 PC | 1 | 0.9999 | 0.9768 | 0.7595 | 0.8870 | 0.6529
Unimpacted SNR, 700 PC | 1 | 1 | 0.9997 | 0.9069 | 0.9848 | 0.7711
Unimpacted SNR, 2000 PC | 1 | 1 | 1 | 0.9806 | 0.9995 | 0.8753

4.

Iterative GLRT for Exozodiacal Dust

Exozodiacal dust is debris in the habitable zones of stars believed to come from extrasolar asteroids and comets.12 Though the true structure of exozodiacal dust is unknown, as a first attempt we simply assume it is axisymmetric, which is believed to be a reasonable approximation for small dust grains.38 When the intensity of the exozodiacal dust is similar to that of a target planet, the methods mentioned above have difficulty detecting the planet’s signal. We develop here an iterative GLRT to tackle this problem; it is essentially an expectation–maximization algorithm.39 The planets’ signals and the exozodiacal dust are both unknown and need to be estimated. However, it is hard to estimate them accurately at the same time, and their estimates can influence each other. The solution is to estimate one of the two first (either the planets’ signals or the exozodiacal dust), use that estimate as known when estimating the other, and iterate until both estimates converge.

When there is exozodiacal dust, the model in Eq. (1), specified at pixel (x,y), becomes

Eq. (31)

\[ I(x,y) = \sum_{i=1}^{N_x} \sum_{j=1}^{N_y} \alpha_{i,j} P_{i,j}(x,y) + b(x,y) + d(x,y) + \omega(x,y), \]
where $d(x,y)$ is the exozodiacal dust at pixel $(x,y)$, which is also unknown. We now have $3N_xN_y$ unknown parameters.

The exozodiacal dust degrades the detection and estimation. For example, if we directly apply the GLRT method introduced in the previous section to Fig. 4(d), we get the confusing T value map in Fig. 11(a); it is hard to distinguish Earth from the exozodiacal dust.

Fig. 11

Example of the iterative GLRT applied to Fig. 4(d): (a) T value map for Fig. 4(d). (b) The median value at each radius of Fig. 4(d), i.e., the exozodiacal dust estimate $d^*$ at the initial step. (c) The residual $I_b$ after subtracting $d^*$ from $I$, i.e., the residual after subtracting (b) from Fig. 4(d). It is the initial estimate of the underlying image with only planets. (d) T map for (c). (e) The new estimate of the planets, $\sum_{i=1}^{N_x}\sum_{j=1}^{N_y} \alpha_{i,j} P_{i,j}(x,y)$. After applying the GLRT to (c) and obtaining detections, we also obtain the intensity and position estimates of the planets. (f) Exozodiacal dust $I_p$ after subtracting the estimated planet signals (e) and the estimated local background from the original image, Fig. 4(d). (g) The dust estimate at the final step. (h) The final residual $I_b$, i.e., the residual after subtracting (g) from Fig. 4(d). It is the final estimate of the underlying image with only planets and the estimated local background. (i) T value map for (h).


To reduce the number of unknowns, we assume that the dust is nearly axisymmetric, which may be a reasonable approximation for small dust grains.38 Thus

Eq. (32)

\[ I(x,y) = \sum_{i=1}^{N_x} \sum_{j=1}^{N_y} \alpha_{i,j} P_{i,j}(x,y) + b(x,y) + d(r) + \omega(x,y), \]
where $r = \sqrt{x^2 + y^2}$. Then Eq. (2) becomes

Eq. (33)

\[ \min_{\alpha,\, b,\, d}\; \left\| I - \sum_{i=1}^{N_x} \sum_{j=1}^{N_y} \alpha_{i,j} P_{i,j} - b - d \right\|^2 . \]
We cannot directly split the whole image into smaller areas and do detection separately as in Eq. (9), because the estimate of the dust in one area also depends on other areas at the same radius from the center. To tackle this, we split the estimation of the signals and the exozodiacal dust into two steps. First, we take the median of the values at each radius to estimate the background. We use the median rather than the mean to avoid the influence of planet signals present at some radii [an example is shown in Fig. 11(b)]. This is equivalent to solving the following optimization problem for each r:

Eq. (34)

\[ d^*(r) = \arg\min_{d(r)} \; \| I(r) - d(r) \|_1 . \]
The * denotes the estimate of the corresponding parameter. Then we subtract the estimated background, which contains bright exozodiacal dust:

Eq. (35)

\[ I_b = I - d^*(r). \]
An example is shown in Fig. 11(c). Applying the GLRT to this image $I_b$ then produces the T value map in Fig. 11(d) and provides estimates of the planets’ positions and intensities, as shown in Fig. 11(e). The estimated planets are subtracted to get a better estimate of the background, as shown in Fig. 11(f), and the process is repeated iteratively. The procedure is summarized in Fig. 12 and the complete example is shown in Fig. 11. In Table 4, we summarize the intensity and position estimation errors for the example. The ROC curves are shown in Fig. 13. The performance is slightly degraded by the dust compared to the dust-free case.
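The iteration can be organized as sketched below; `detect_and_reconstruct` is a hypothetical placeholder for the GLRT detection and planet-model reconstruction step (e.g., placing template PSFs scaled by the estimated intensities at the detected positions), and the radial median implements Eq. (34). Names and the number of iterations are our assumptions.

```python
import numpy as np

def radial_median(image, center):
    """Axisymmetric background estimate: the median at each integer radius (Eq. 34)."""
    yy, xx = np.indices(image.shape)
    r = np.hypot(yy - center[0], xx - center[1]).round().astype(int)
    medians = np.array([np.median(image[r == k]) if np.any(r == k) else 0.0
                        for k in range(r.max() + 1)])
    return medians[r]                      # map the radial profile back onto the grid

def iterative_glrt(image, center, detect_and_reconstruct, n_iter=5):
    """EM-style loop of Fig. 12: alternate dust and planet estimation."""
    planet_model = np.zeros_like(image, dtype=float)
    residual = image.astype(float)
    for _ in range(n_iter):
        dust = radial_median(image - planet_model, center)  # dust from planet-subtracted image
        residual = image - dust                             # Eq. (35)
        planet_model = detect_and_reconstruct(residual)     # GLRT + intensity/position estimates
    return residual, dust, planet_model
```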

Fig. 12

The flowchart describes the process of iterative GLRT. The blue box is the initialization step.


Table 4

Intensity and position estimation errors for Fig. 4(d) via the iterative GLRT method.

Planet | Intensity error (%) | Position error (mas)
Venus | 5.9 | 21
Earth | 38.3 | 30

Fig. 13

The ROCs for the iterative GLRT. The ROCs without dust from Fig. 7 are added here for reference. The integration time here is 1 s. ROC for co-added images each from (a) Nim=200 PC images; (b) Nim=700 PC images; and (c) Nim=2000 PC images.


5.

Conclusion

A starshade is a promising instrument for the direct imaging of Earth-like planets. In this paper, we briefly describe our process for simulating realistic starshade images and present a preliminary study of signal detection in starshade images, which no previous work has addressed. The core detection and estimation are done with the GLRT. We first obtain intensity estimates by QMLE. Then the likelihood ratio with respect to the estimated parameters is calculated. After choosing an FA rate, we can threshold the image and detect the planets. For cases with exozodiacal dust, we split the process into two parts, dust estimation and signal estimation, and use the GLRT iteratively. Examples using these methods are shown in Secs. 3 and 4. The GLRT method successfully and efficiently flags potential planets with a concrete FA rate. It can help distinguish planet signals from artifacts caused by small starshade shape errors, such as a truncated petal tip. In addition, we provide guidance on choosing the number of PC images to combine into one co-added image, $N_{\mathrm{im}}$, utilizing the ROC curves. This will help utilize the observation time efficiently.

Due to the limitations of the Gaussian approximation for the noise distribution in the image, the Gaussian GLRT improves detection performance compared to the SNR method commonly used in high-contrast imaging, but not drastically. We have worked on an improved version of the GLRT method based on the exact model for PC images rather than an approximation, which further advances the detection performance;40 we present the most recent results on this in Ref. 41. The performance can be further improved if we have prior knowledge about the probability distribution of the planets’ intensity in Eq. (21), which may become available after future exoplanet surveys. In this work, we present the iterative GLRT assuming face-on, uniform exozodiacal dust, but the same concept can be applied to more detailed models of the dust structure. Spectral information is not discussed in this work; however, the method introduced in this paper can be extended to it. If images at different wavelengths are taken, the product of the likelihoods at the different wavelengths gives the final likelihood for these images. Then the MLE and GLRT can be calculated, and a detection decision can be made.

Acknowledgments

This work was supported by Caltech-JPL NASA (Grant No. NNN12AA01C). The authors would like to thank the anonymous reviewers for their many helpful comments and suggestions. A. H. is a guest co-editor of this starshade special section.

References

1. W. A. Traub, “Direct imaging of exoplanets,” in Exoplanets, pp. 111–156, University of Arizona Press, Tucson, AZ (2010).

2. G. Chauvin et al., “A companion to AB Pic at the planet/brown dwarf boundary,” Astron. Astrophys. 438(3), L29–L32 (2005). https://doi.org/10.1051/0004-6361:200500111

3. L. Spitzer, “The beginnings and future of space astronomy,” Am. Sci. 50(3), 473–484 (1962).

4. W. Cash, “Detection of Earth-like planets around nearby stars using a petal-shaped occulter,” Nature 442, 51–53 (2006). https://doi.org/10.1038/nature04930

5. H. Sun, N. Kasdin, and R. Vanderbei, “Identification and adaptive control of a high-contrast focal plane wavefront correction system,” J. Astron. Telesc. Instrum. Syst. 4(4), 049006 (2018). https://doi.org/10.1117/1.JATIS.4.4.049006

6. D. Redding et al., “A Habitable Exoplanet Observatory (HabEx) starshade-only architectures,” Proc. SPIE 11115, 111150V (2019). https://doi.org/10.1117/12.2529646

7. L. Pueyo et al., “The LUVOIR Architecture ‘A’ coronagraph instrument,” Proc. SPIE 10398, 103980F (2017). https://doi.org/10.1117/12.2274654

8. S. Seager et al., “Starshade rendezvous probe,” (2019). https://ntrs.nasa.gov/archive/nasa/casi.ntrs.nasa.gov/20190028272.pdf

9. S. Martin et al., “Starshade optical edge modeling, requirements, and laboratory tests,” Proc. SPIE 8864, 88641A (2013). https://doi.org/10.1117/12.2024188

10. S. B. Shaklan et al., “Error budgeting and tolerancing of starshades for exoplanet detection,” Proc. SPIE 7731, 77312G (2010). https://doi.org/10.1117/12.857591

11. E. Hilgemann et al., “Starshade technology development activity, milestone 3,” (2019). https://exoplanets.nasa.gov/internal_resources/1544

12. A. Roberge et al., “The exozodiacal dust problem for direct observations of exo-Earths,” Publ. Astron. Soc. Pac. 124(918), 799 (2012). https://doi.org/10.1086/667218

13. M. Hu, A. Harness, and N. Kasdin, “Image processing methods for exoplanets detection and characterization in starshade observations,” Proc. SPIE 10698, 106985K (2018). https://doi.org/10.1117/12.2312091

14. M. Hu et al., “Simulation of realistic images for starshade missions,” Proc. SPIE 10400, 104001S (2017). https://doi.org/10.1117/12.2273404

15. A. Roberge et al., “Finding the needles in the haystacks: high-fidelity models of the modern and archean solar system for simulating exoplanet observations,” Publ. Astron. Soc. Pac. 129(982), 124401 (2017). https://doi.org/10.1088/1538-3873/aa8fc4

16. D. Sirbu, “Occulter-based high-contrast exoplanet imaging: design, scaling, and performance verification,” Princeton, NJ (2014).

17. M. J. Rizzo et al., “Simulating the WFIRST coronagraph integral field spectrograph,” Proc. SPIE 10400, 104000B (2017). https://doi.org/10.1117/12.2273066

18. M. Hirsch et al., “A stochastic model for electron multiplication charge-coupled devices—from theory to practice,” PLoS One 8(1), e53671 (2013). https://doi.org/10.1371/journal.pone.0053671

19. C. Marois et al., “Angular differential imaging: a powerful high-contrast imaging technique,” Astrophys. J. 641(1), 556 (2006). https://doi.org/10.1086/500401

20. R. Racine et al., “Speckle noise and the detection of faint companions,” Publ. Astron. Soc. Pac. 111(759), 587 (1999). https://doi.org/10.1086/316367

21. D. Lafrenière et al., “HST/NICMOS detection of HR 8799 b in 1998,” Astrophys. J. Lett. 694(2), L148 (2009). https://doi.org/10.1088/0004-637X/694/2/L148

22. I. Braems and N. J. Kasdin, “Bayesian hypothesis testing for planet detection,” (2004).

23. N. Kasdin and I. Braems, “Linear and Bayesian planet detection algorithms for the terrestrial planet finder,” Astrophys. J. 646(2), 1260 (2006). https://doi.org/10.1086/505017

24. D. Mawet et al., “Fundamental limitations of high contrast imaging set by small sample statistics,” Astrophys. J. 792(2), 97 (2014). https://doi.org/10.1088/0004-637X/792/2/97

25. F. Cantalloube et al., “Direct exoplanet detection and characterization using the ANDROMEDA method: performance on VLT/NaCo data,” Astron. Astrophys. 582, A89 (2015). https://doi.org/10.1051/0004-6361/201425571

26. J. Ruffio et al., “Improving and assessing planet sensitivity of the GPI exoplanet survey with a forward model matched filter,” Astrophys. J. 842(1), 14 (2017). https://doi.org/10.3847/1538-4357/aa72dd

27. O. Flasseur et al., “An unsupervised patch-based approach for exoplanet detection by direct imaging,” in 25th IEEE Int. Conf. Image Process., pp. 2735–2739 (2018). https://doi.org/10.1109/ICIP.2018.8451431

28. B. Pairet et al., “STIM map: detection map for exoplanets imaging beyond asymptotic Gaussian residual speckle noise,” Mon. Not. R. Astron. Soc. 487(2), 2262–2277 (2019). https://doi.org/10.1093/mnras/stz1350

29. C. Dahlqvist, F. Cantalloube, and O. Absil, “Regime-switching model detection map for direct exoplanet detection in ADI sequences,” Astron. Astrophys. 633, A95 (2020). https://doi.org/10.1051/0004-6361/201936421

30. C. Gonzalez, O. Absil, and M. V. Droogenbroeck, “Supervised detection of exoplanets in high-contrast imaging sequences,” Astron. Astrophys. 613, A71 (2018). https://doi.org/10.1051/0004-6361/201731961

31. D. R. Cox and N. Reid, “A note on pseudolikelihood constructed from marginal densities,” Biometrika 91, 729–737 (2004). https://doi.org/10.1093/biomet/91.3.729

32. “Maximum likelihood estimation in a Gaussian regression model,” http://sia.webpopix.org/regressionML.html#the-fim-for-a-regression-model

33. T. Hastie, R. Tibshirani, and J. Friedman, The Elements of Statistical Learning, pp. 111–156, Springer Science+Business Media, New York (2009).

34. S. Kay, Fundamentals of Statistical Signal Processing, Vol. 1: Estimation Theory, Prentice Hall, Upper Saddle River, NJ (1993).

35. J. P. M. de Sá, Applied Statistics Using SPSS, STATISTICA, MATLAB and R, pp. 92–93, Springer, Berlin, Heidelberg, New York (2007).

36. J. Wang et al., “pyKLIP: PSF subtraction for exoplanets and disks,” (2015).

37. A. Amara and S. Quanz, “PYNPOINT: an image processing package for finding exoplanets,” Mon. Not. R. Astron. Soc. 427(2), 948–955 (2012). https://doi.org/10.1111/j.1365-2966.2012.21918.x

38. M. Kuchner and C. Stark, “Collisional grooming models of the Kuiper Belt dust cloud,” Astron. J. 140(4), 1007 (2010). https://doi.org/10.1088/0004-6256/140/4/1007

39. A. Dempster, N. Laird, and D. Rubin, “Maximum likelihood from incomplete data via the EM algorithm,” J. R. Stat. Soc. Ser. B 39, 1–22 (1977). https://doi.org/10.1111/j.2517-6161.1977.tb01600.x

40. M. Hu, H. Sun, and N. Kasdin, “Sequential generalized likelihood ratio test for planet detection with photon-counting mode,” Proc. SPIE 11117, 111171K (2019). https://doi.org/10.1117/12.2528838

41. M. Hu et al., “A sequential generalized likelihood ratio test for signal detection from photon counting images,” J. Astron. Telesc. Instrum. Syst.

Biography

Mengya (Mia) Hu is a PhD candidate in the Mechanical and Aerospace Engineering Department at Princeton University. Her research focuses on the image simulation and image processing of space telescope systems with starshades. She graduated from the Department of Thermal Science and Energy Engineering at the University of Science and Technology of China in 2015 and was awarded the highest honor of the university, the “Guo Mo-Ruo Scholarship.”

Anthony Harness received his PhD in astrophysics in 2016 from the University of Colorado Boulder. He is an associate research scholar in the Mechanical and Aerospace Engineering Department at Princeton University. He currently leads the experiments in Princeton validating starshade optical technologies.

He Sun received his BS degree from Peking University in 2014 and his PhD from Princeton University in 2019. He is a postdoctoral researcher in the Department of Computing and Mathematical Sciences (CMS) at California Institute of Technology. His research focuses on adaptive optics and computational imaging, especially their applications in astrophysical and biomedical sciences, such as exoplanet and black hole imaging.

N. Jeremy Kasdin received his PhD in 1991 from Stanford University. He is the assistant dean for engineering at the University of San Francisco and the Eugene Higgins professor of Mechanical and Aerospace Engineering, Emeritus at Princeton University. After being the chief systems engineer for NASA’s Gravity Probe B spacecraft, he joined the Princeton Faculty in 1999, where he researched high-contrast imaging technology for exoplanet imaging. From 2014 to 2016, he was a vice dean of the School of Engineering and Applied Science at Princeton University. He is currently the adjutant scientist for the coronagraph instrument on NASA’s Wide Field Infrared Survey Telescope.

CC BY: © The Authors. Published by SPIE under a Creative Commons Attribution 4.0 Unported License. Distribution or reproduction of this work in whole or in part requires full attribution of the original publication, including its DOI.
Mengya (Mia) Hu, Anthony Harness, He Sun, and N. Jeremy Kasdin "Exoplanet detection in starshade images," Journal of Astronomical Telescopes, Instruments, and Systems 7(2), 021214 (26 March 2021). https://doi.org/10.1117/1.JATIS.7.2.021214
Received: 29 August 2020; Accepted: 26 February 2021; Published: 26 March 2021