The paper presents an economical design of the Optical Matrix-Vector Multiplier (OMVM) for optical education. The OMVM has a three-layer structure: input optical waveguides, a two-dimensional aperture array, and output optical waveguides. It can be manufactured by students in university labs without expensive lithography equipment. The OMVM can be applied for hardware realization of artificial neural networks, integral transform calculators, optical signal processors, etc. The process of assembling, testing, and applying OMVMs helps students improve their knowledge and skills in optics, signal processing, artificial intelligence, and mathematics.
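A minimal numerical sketch of the operating principle follows, assuming that each aperture transmittance encodes one matrix element and each output waveguide sums the light collected from one column; the values and layout are illustrative, not the paper's design.

```python
import numpy as np

rng = np.random.default_rng(0)
T = rng.uniform(0.0, 1.0, size=(4, 3))   # aperture transmittances encode the matrix (assumed values)
x = rng.uniform(0.0, 1.0, size=4)        # input light intensities encode the vector (assumed values)

# Each output waveguide j collects the light from all apertures in its column:
# y[j] = sum_i x[i] * T[i, j], i.e. the optical summation performs x @ T.
y = np.zeros(T.shape[1])
for j in range(T.shape[1]):
    for i in range(T.shape[0]):
        y[j] += x[i] * T[i, j]           # light from input i attenuated by aperture (i, j)

print(np.allclose(y, x @ T))             # True: matches the electronic matrix-vector product
```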
The paper presents a mathematical apparatus for precise calculation of the three-dimensional point spread function (3D PSF) of optical systems. The method is based on the Huygens-Fresnel principle: a spherical wave on the three-dimensional surface of the exit pupil is considered as the result of the superposition of elementary secondary point radiation sources. These point sources emit coherent electromagnetic waves with spherical wave fronts. They form a certain distribution of generalized complex amplitudes in three-dimensional space near the focal point. This distribution is used to calculate the intensity distribution in the focal region of the optical system, which is the PSF. The advantage of the proposed technique is direct calculation of the 3D PSF, taking wave aberrations into account and without using the Fresnel or Fraunhofer approximations. In the case of small-aperture optical systems, the proposed technique coincides with the classical theory that links the pupil function and the PSF via a Fourier transform. The differences between the precise and approximated techniques for 3D PSF calculation are also discussed.
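A minimal scalar sketch of this direct superposition is shown below: secondary sources sampled on the exit-pupil reference sphere emit spherical waves whose complex amplitudes are summed at points near the focus. It assumes an aberration-free, uniformly filled pupil and ignores obliquity and vector effects, so it only illustrates the summation, not the full apparatus of the paper.

```python
import numpy as np

wavelength = 0.5e-6          # m (assumed)
k = 2.0 * np.pi / wavelength
f = 10e-3                    # m, radius of the exit-pupil reference sphere (assumed)
NA = 0.25                    # numerical aperture in air (assumed)
alpha = np.arcsin(NA)        # aperture half-angle

# Sample the spherical cap of the exit pupil with secondary point sources
n_theta, n_phi = 80, 160
theta = np.linspace(0.0, alpha, n_theta)
phi = np.linspace(0.0, 2.0 * np.pi, n_phi, endpoint=False)
th, ph = np.meshgrid(theta, phi, indexing="ij")
src = np.stack([f * np.sin(th) * np.cos(ph),
                f * np.sin(th) * np.sin(ph),
                -f * np.cos(th)], axis=-1)     # sources on the cap, focus at the origin
dS = (f**2) * np.sin(th)                       # area element weight of each source

def field(p):
    """Sum of spherical waves from all secondary sources at point p near the focus."""
    r = np.linalg.norm(src - p, axis=-1)
    return np.sum(dS * np.exp(1j * k * r) / r)

# Axial and lateral intensity profiles of the 3D PSF (unnormalized)
z_axis = np.linspace(-20e-6, 20e-6, 41)
I_axial = np.array([abs(field(np.array([0.0, 0.0, z])))**2 for z in z_axis])
x_axis = np.linspace(-5e-6, 5e-6, 41)
I_lateral = np.array([abs(field(np.array([x, 0.0, 0.0])))**2 for x in x_axis])
print(I_axial.argmax(), I_lateral.argmax())    # both peak at the focus (center index 20)
```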
The paper presents an approximation of the three-dimensional optical transfer function of diffraction-limited incoherent optical systems with axial symmetry. This approximation is based on an analytical expression for identification of the three-dimensional spatial cutoff frequencies that specify the spatial bandwidth in the three-dimensional space of spatial harmonics. It does not require numerical integration. The proposed technique makes calculation of the three-dimensional optical transfer function accurate, easy, and fast. It can be applied for performance evaluation, analysis of spatial resolution, computer simulation of imaging systems, etc.
An economical optical sensor for measurement of drone coordinates in three-dimensional space is presented. Currently, drones use digital cameras to perform accurate takeoff, landing, or delivery of goods. Stabilized digital cameras with electronics for image compression and transmission via a high-speed wireless channel make drones expensive and reduce their payload and battery life. The proposed optical sensor guarantees coordinate measurements with ten-centimeter accuracy in a volume with dimensions of several meters. The illumination part of this sensor is installed around a landing pad or a goods delivery pad. It forms a set of low-energy optical beams of definite shapes. Each beam transmits a digital code that characterizes its location relative to the pad. The receiving part of this sensor is a set of miniature photodetector units fixed under a drone. The proposed technique of beam code comparison makes it possible to calculate the drone coordinates relative to the pad. As a result, this sensor closes the gap between the accuracy of the Global Positioning System and the centimeter-level accuracy necessary for accurate drone takeoff or landing without the use of a digital camera. The paper describes the sensor design and the experimental testing of this optical sensor. The advantages and possible applications of this sensor are also discussed.
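A purely illustrative sketch of the beam-code comparison idea follows. It assumes a hypothetical layout of coded beams whose known offsets from the pad are averaged when several codes are received; the actual beam shapes, codes, and geometry are those defined in the paper, not these.

```python
# Hypothetical layout: two fans of beams around the pad, one distributed along X
# and one along Y, each beam transmitting a code mapped to its known offset (m).
X_BEAMS = {0b0001: -1.0, 0b0010: -0.5, 0b0011: 0.0, 0b0100: 0.5, 0b0101: 1.0}
Y_BEAMS = {0b1001: -1.0, 0b1010: -0.5, 0b1011: 0.0, 0b1100: 0.5, 0b1101: 1.0}

def estimate_position(received_codes):
    """Compare the received beam codes with the known beam layout and
    return an estimated (x, y) drone position relative to the pad."""
    xs = [X_BEAMS[c] for c in received_codes if c in X_BEAMS]
    ys = [Y_BEAMS[c] for c in received_codes if c in Y_BEAMS]
    if not xs or not ys:
        return None                      # not enough beams detected
    return sum(xs) / len(xs), sum(ys) / len(ys)

# A photodetector under the drone registers two X-beams and one Y-beam:
print(estimate_position({0b0011, 0b0100, 0b1010}))   # -> (0.25, -0.5)
```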
The paper describes the design of an unconventional, biologically inspired image sensor. It contains numerous optical channels similar to facets in a natural compound eye. Each channel has several photodetectors with pre-amplifiers and a microcontroller with a multi-channel analog-to-digital converter. The signals coming from the photodetectors in each channel are amplified, converted into digital form, and processed by the microcontroller. All channels independently perform parallel image processing and image analysis. All microcontrollers are attached to a microcontroller network. They send data through this network only if useful signals are registered. These microcontrollers can be reprogrammed to perform various image processing operations, including gradient search, spatial filtration, temporal filtration, signal correlation, neural network simulation, and others. This design differs completely from the traditional image sensor architecture, which includes a mega-pixel focal plane array with sequential signal read-out and a multi-core digital signal processor. The proposed architecture can be considered as a large set of identical channels: the “smart groups” of several pixels with read-out electronics and a digital microcontroller that extract only the useful data and send them out. The working prototype of this image sensor has demonstrated the ability to measure the distribution of speed and direction of optical flow throughout its field of view in a very short time. The advantages and possible applications of this sensor are also discussed.
The paper describes the design of a biologically inspired compound eye. It contains numerous optical channels, like facets in a natural compound eye. Each facet has an optical system, several photodetectors with pre-amplifiers, and a microcontroller with a multi-channel analog-to-digital converter. The signals coming from the photodetectors in each channel are amplified, converted into digital form, and processed by the microcontroller. All microcontrollers are attached to a microcontroller network. These microcontrollers can be reprogrammed to perform various image processing operations, including spatial filtration, temporal filtration, correlation calculation, neural network simulation, and others. This makes possible investigation and performance evaluation of the compound eye when all its facets perform parallel image processing and image analysis. The working prototype of this compound eye has demonstrated the ability to measure the distribution of speed and direction of optical flow throughout its field of view. Possible improvements and applications of this design are also discussed.
KEYWORDS: Spatial resolution, Digital filtering, Imaging systems, Point spread functions, Signal to noise ratio, Staring arrays, Digital imaging, Spatial filters, Surface plasmons, Computing systems
The paper proposes a new criterion of spatial resolution of an imaging system. This criterion considers the shape and dimensions of the central peak and side lobes of the point spread function, as well as the standard deviation of noise. As a result, it helps to reach the optimal balance between the characteristics of the central peak, the side lobes, and the noise. It differs from the widely known full width at half maximum and the Sparrow criterion, which mainly consider only the characteristics of the central peak. The digital filter that is optimal according to the proposed criterion and the limitations in maximizing the spatial resolution of imaging systems are also discussed.
This paper presents the theory for numerical evaluation of the spatial resolution along the optical axis of an optical microscope in the case of oblique illumination. It considers an optical setup with a coherent light source, a microscope condenser, a microscope objective, and a grating located in a plane containing the optical axis with slits perpendicular to this axis. An analytical expression is proposed for calculation of the minimum resolvable period of this grating, or the corresponding spatial cutoff frequency, which characterizes the spatial resolution along the optical axis. It is demonstrated that this spatial cutoff frequency is not proportional to the angle of beam inclination. The proposed theory clearly explains why an optical microscope has limited spatial resolution along the optical axis and how illumination can maximize this resolution.
The paper describes an approach for identification of the spatial bandwidth of an optical system in the lateral and axial directions. This approach applies the Huygens-Fresnel principle to obtain the integral for calculation of the amplitude distribution near a focal point. A replacement of the integration variables leads to identification of the limits of integration in the space of spatial frequencies. These limits are the spatial cutoff frequencies for the amplitude distribution. Doubling these values produces the spatial cutoff frequencies for the intensity distribution, which is the same result as Abbe theory predicts. The proposed approach can be used to explain mathematically why optical systems have limited spatial resolution and why spatial harmonics with high frequencies cannot pass through optical systems. An analog of Abbe theory for the axial direction is proposed. To identify the spatial bandwidth in the axial direction, one has to consider a grating located in a plane containing the optical axis, with slits perpendicular to this axis. It is then possible to follow Abbe theory: to consider diffraction on this grating, the formation of a spatial spectrum, and a grating image with axial orientation in image space.
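For reference, the sketch below evaluates the standard diffraction-limited cutoff frequencies that this picture leads to. The numerical values are standard textbook results consistent with the Abbe limit; the paper's derivation via the change of integration variables is not reproduced here.

```python
import numpy as np

wavelength = 0.55e-6   # m (assumed)
n = 1.0                # refractive index of the image/object space (assumed)
NA = 0.9               # numerical aperture (assumed)
alpha = np.arcsin(NA / n)

f_lateral_amplitude = NA / wavelength                       # coherent (amplitude) lateral cutoff
f_lateral_intensity = 2.0 * NA / wavelength                 # incoherent lateral cutoff, Abbe: d = lambda / (2 NA)
f_axial_intensity = n * (1.0 - np.cos(alpha)) / wavelength  # incoherent axial cutoff

print(f"lateral amplitude cutoff: {f_lateral_amplitude:.3e} cycles/m")
print(f"lateral intensity cutoff: {f_lateral_intensity:.3e} cycles/m")
print(f"axial   intensity cutoff: {f_axial_intensity:.3e} cycles/m")
```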
Abbe theory considers the image formation process in an optical system (OS) in two stages: obtaining a spatial spectrum in the back focal plane and composing an output magnified image in the image plane. The Abbe-Porter experiments are bright illustrations of this theory. This theory can be extended from the two-dimensional (2D) case to the three-dimensional (3D) one. One has to consider a grating inclined relative to the optical axis around the axis parallel to the grating slits. This makes possible calculation of the minimum resolvable period (MRP) of an OS as a function of the angle of grating inclination. The paper presents an optical setup for observation of spatial spectra and magnified images in the case of inclined gratings. It has been demonstrated that this spatial spectrum is in full compliance with the proposed theory. It has also been shown that the MRP exists in the case of inclined gratings, in full compliance with the theory. This experiment may be considered one of the most interesting experiments for demonstration, understanding, and explanation of the physical process of 3D image formation by an OS. The famous Abbe experiment then becomes only a particular case, and of course the most important case, of this experiment with an inclined grating.
The paper describes an approach for identification of the spatial bandwidth of an optical system in the lateral and axial directions. This approach applies the Huygens-Fresnel principle to obtain the integral for calculation of the amplitude distribution near a focal point. A replacement of the integration variables leads to identification of the limits of integration in the space of spatial frequencies. These limits are the spatial cutoff frequencies for the amplitude distribution. Doubling these values produces the spatial cutoff frequencies for the intensity distribution, which is the same result as Abbe theory predicts. The proposed approach can be used to explain mathematically why optical systems have limited spatial resolution and why spatial harmonics with high frequencies cannot pass through an optical system.
Interesting experiments for investigation of image formation in optical microscopes have been done by E. Abbe, A. Porter, and L. Mandelshtam. These experiments have become classics and are widely used to explain Fourier optics. Their principal disadvantage is the difference in the optical schemes for observation of object images and of their spatial spectra. The proposed optical setup makes possible demonstration of the two stages of image formation – obtaining a spatial spectrum and composing a magnified object image – together in one plane. This setup contains two imaging channels separated by a beam splitter after the microscope objective. The first forms a magnified object image; the second, an image of the spatial spectrum. These images may be observed on a screen, via eyepieces, or using image sensors. Any occlusion of spectrum zones becomes visible and leads to the corresponding changes in the object image. This optical setup would be useful for optical education and research.
The paper describes the design procedure that allows identification of the optimal parameters of a light source based on optically connected integrating spheres. This source provides a high dynamic range of output radiance with high uniformity of the radiance distribution throughout the output aperture. The procedure deals with relative parameters of the apertures of the primary and secondary integrating spheres, aperture areas, density of lamps, etc. It makes possible calculation of the set of optimal parameters that guarantees the maximal output radiance with high uniformity of its distribution throughout the output aperture. The paper demonstrates the application of this procedure in the light source design.
The paper presents a mathematical technique for calculation of the three-dimensional intensity distribution near a focal point of an optical system in the case of partially polarized light. The proposed technique considers a high-aperture optical system that focuses a partially polarized parallel beam. The principal idea is based on the Huygens-Fresnel principle: a spherical wave at the exit pupil of an optical system is considered as a numerous set of secondary light point sources. Each source emits a partially polarized spherical wave. The polarization orientation of each wave can be calculated using angular pupil coordinates. Modulation of amplitude, phase, or polarization can be introduced depending on these pupil coordinates. The total intensity is defined as the superposition of complex wave amplitudes, taking into account polarization orientation, degree of polarization, and orientation of the detector aperture. The paper presents intensity distributions calculated for beams with various types and degrees of polarization.
The paper presents a new approach that includes a technique, a scheme, and instruments for precise calibration of space-borne and airborne visible infrared imaging radiometers (VIIRs). The key component of this technique is a precise uniform light source based on optically interconnected integrating spheres. The light source contains several (5…11) primary integrating spheres of small diameter which are installed on a secondary integrating sphere of larger diameter. The initial light sources – halogen lamps or light-emitting diodes – are installed inside the primary integrating spheres. These spheres are mounted on the secondary integrating sphere. The radiation comes from the primary integrating spheres to the secondary one through diaphragms whose diameters can be varied. The secondary integrating sphere has an output aperture that emits uniform radiance. As a result, the output radiance can be varied over an extremely wide range – up to 800 W/(sr·m²) with a dynamic range of 1,000,000 – without any change of spectral characteristics. The non-uniformity of the radiance distribution throughout the output aperture can be smaller than 0.5% because the secondary integrating sphere is illuminated uniformly and does not contain lamps inside. The paper discusses the requirements for the calibration system, the application of this light source in calibration procedures, and the metrological aspects of radiometric calibration.
The paper presents a design procedure that makes possible identification of the optimal parameters of the proposed light sources. These precise uniform light sources have the form of several or multiple optically connected integrating spheres. Due to their high photometric and metrological characteristics, the proposed light sources can be considered among the best candidates for VIIR calibration in the optical range 0.4–2.3 μm. The paper discusses the principal engineering aspects connected with the design of these light sources, such as optimal selection of integrating sphere geometry, halogen lamps, materials, etc.
The paper presents a mathematical technique for precise calculation of the three-dimensional point spread function (3D PSF) of an optical system. The proposed technique is based on the Huygens-Fresnel principle: a spherical wave at the exit pupil is considered as a numerous set of elementary secondary light sources. They emit spherical coherent electromagnetic waves. All these waves form a definite distribution of summed complex amplitudes in three-dimensional space near a focal point. This distribution is used for calculation of the distribution of effective intensity, which takes into account the inclination of optical beams. The possible approximations of the 3D PSF are discussed. The results of calculations of the 3D PSF using the precise and approximated expressions are compared.
The paper presents a mathematical technique for precise calculation of the three-dimensional point spread function (3D PSF) of a high-aperture optical system. The proposed technique is based on the Huygens-Fresnel principle: a spherical wave at the exit pupil is considered as a numerous set of elementary secondary light sources. They emit spherical coherent electromagnetic waves. All these waves form a definite distribution of summed complex amplitudes in three-dimensional space near a focal point. This distribution is used for calculation of the distribution of effective intensity, which takes into account the influence of inclined optical beams. A comparative analysis of this approach and techniques based on multi-dimensional Fourier transforms is presented.
The paper presents a mathematical technique for calculation of the output radiance of precise uniform light sources which have the form of several or multiple optically connected integrating spheres. The light source contains several primary integrating spheres of small diameter which are installed on a secondary integrating sphere of larger diameter. The proposed technique takes into account the fluxes passing multiple times from the secondary integrating sphere to the primary ones and in the opposite direction. The proposed light sources can be considered among the best candidates for calibration of high-level optical instruments working in the optical range 0.4–2.2 μm, including radiometers for remote sensing.
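A simplified two-sphere flux-balance sketch is given below to illustrate the kind of multiple-exchange bookkeeping involved. The balance equations, reflectances, and areas are assumed for illustration and are not the paper's exact model.

```python
import numpy as np

# P1, P2 are the total fluxes incident on the walls of the primary and secondary
# spheres; the exchange through the connecting diaphragm is counted in both directions.
phi_lamp = 10.0                  # W, lamp flux inside the primary sphere (assumed)
rho1, rho2 = 0.97, 0.98          # wall reflectances (assumed)
A1, A2 = 0.03, 0.50              # m^2, internal wall areas: primary, secondary (assumed)
a, b = 0.002, 0.03               # m^2, diaphragm area and output aperture area (assumed)

# Flux balance:
#   P1 = phi_lamp + rho1 * P1 * (1 - a/A1) + rho2 * P2 * (a/A2)
#   P2 =            rho1 * P1 * (a/A1)     + rho2 * P2 * (1 - (a + b)/A2)
M = np.array([[1.0 - rho1 * (1.0 - a / A1), -rho2 * (a / A2)],
              [-rho1 * (a / A1),            1.0 - rho2 * (1.0 - (a + b) / A2)]])
P1, P2 = np.linalg.solve(M, np.array([phi_lamp, 0.0]))

# Radiance of the Lambertian secondary-sphere wall, seen through the output aperture
L_out = rho2 * P2 / (np.pi * A2)
print(f"output radiance: {L_out:.1f} W/(sr*m^2)")
```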
The paper presents a metrological analysis of light sources which have the form of several optically connected integrating spheres. These light sources contain several (3…11) primary integrating spheres of small diameter that are installed on a secondary integrating sphere of larger diameter. The initial light sources – halogen lamps or light-emitting diodes – are installed inside the primary integrating spheres. These spheres are mounted on the secondary integrating sphere. The radiation comes from the primary integrating spheres to the secondary one through diaphragms whose diameters can be varied. This makes it possible to control the total flux coming through the diaphragms and to set the required output radiance. The secondary integrating sphere has an output aperture that emits uniform radiance. The paper discusses a technique for calculation of the precision of the output radiance as a function of variations in the optical and geometrical parameters of the light source. The influences of the power supply, the reflectance properties, the diameters of the internal surfaces of the integrating spheres, and the diameters of the diaphragms between the integrating spheres are investigated. The precision of the output radiance is better than 1.9% for inexpensive light sources. For high-quality light sources it can reach 0.7%. The possible range of the output radiance is from 0 to 1200 W/(sr·m²). These facts confirm the substantial metrological advantages of the proposed light sources for absolute radiometric measurements. The proposed light sources can be considered among the best candidates for calibration of modern remote sensing instruments and high-quality imaging systems working in the optical range 0.4–2.2 μm.
The paper presents a mathematical description of an optical microscope with a digital camera and image processing as an analog-digital-analog imaging system. This description considers the channel of the microscope as a sequence of linear spatial filters of two-dimensional signals. The channel contains an optical system as a low-frequency analog filter, a digital camera as a low-frequency analog filter with spatial and amplitude discretization and noise generation, a digital linear filter that amplifies the high-frequency harmonics, and a restoration unit that plays the role of a two-dimensional interpolator. This mathematical apparatus is useful for proper selection of a digital camera that guarantees the maximal field of view without image distortion. Terms such as optimal, insufficient, and useless (void) linear magnification of a microscope optical system are extended from visual microscopy to digital microscopy. This mathematical description is also applied to the selection of a digital filter for focusing and digital focus extension. The modulation transfer function of this filter should match the spatial spectrum of observed objects in the zone of spatial harmonics that is most sensitive to defocusing. In this case, maximal sensitivity to defocusing with minimal influence of noise can be reached.
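A one-dimensional sketch of this filter-cascade view is shown below: the total transfer function is the product of the optics MTF, the detector-aperture MTF, and a digital boosting filter. The particular functions and parameters (NA, magnification, pixel pitch, boost profile) are illustrative assumptions, not the paper's measured characteristics.

```python
import numpy as np

wavelength = 0.55e-6
NA = 0.65                   # objective numerical aperture (assumed)
M = 40.0                    # linear magnification of the optical system (assumed)
pixel = 3.45e-6             # m, camera pixel pitch (assumed)

f_cut = 2.0 * NA / wavelength             # incoherent cutoff in the object plane
f = np.linspace(0.0, f_cut, 512)          # object-plane spatial frequencies

# Optical system: diffraction-limited incoherent MTF of a circular pupil
s = np.clip(f / f_cut, 0.0, 1.0)
mtf_optics = (2.0 / np.pi) * (np.arccos(s) - s * np.sqrt(1.0 - s**2))

# Digital camera: pixel-aperture MTF evaluated at image-plane frequencies f / M
# (sampling, quantization, and noise are not modeled in this sketch)
mtf_pixel = np.abs(np.sinc((f / M) * pixel))

# Digital linear filter: boosts the high-frequency harmonics attenuated by the optics
boost = 1.0 + 2.0 * (f / f_cut)           # illustrative amplification profile

mtf_total = mtf_optics * mtf_pixel * boost
print(f"cascade MTF at half the optical cutoff: "
      f"{np.interp(0.5 * f_cut, f, mtf_total):.2f}")
```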
The paper proposes uniform light sources which have the form of several or multiple optically connected integrating spheres. The principal advantages of these light sources are their high photometric and metrological characteristics. As a result, they have good prospects in optical radiometry and in calibration of imaging systems and optical instruments. The principal field of their application is calibration of remote sensing instruments and sensitive megapixel cameras. The light source contains several (3…11) primary integrating spheres of small diameter which are installed on a secondary integrating sphere of larger diameter. The initial light sources – halogen lamps or light-emitting diodes – are installed inside the primary integrating spheres. These spheres are mounted on the secondary integrating sphere. The radiation comes from the primary integrating spheres to the secondary one through diaphragms whose diameters can be varied. The secondary integrating sphere has an output aperture that emits uniform radiance. A light source design with an output aperture diameter of 0.2 m and 3 or 5 primary integrating spheres is investigated. It guarantees output radiance in the range from 0.01 to 1000 W/(sr·m²), radiance uniformity greater than 99.5% across the output aperture, and non-linearity of the output radiance control smaller than 0.1%. The paper presents the results of theoretical and experimental research on these light sources, including the techniques for radiance calculation and recommendations for light source design. The proposed light sources can be considered among the best candidates for calibration of remote sensing instruments working in the optical range 0.4–2.2 μm.
Key words: integrating sphere, light source, calibration, uniformity, radiance, remote sensing, optical instrument.
The paper presents a mathematical technique for calculation of the diffraction depth of focus of the optical system of a widefield microscope. The proposed technique applies the Rayleigh criterion based on evaluation of the wave aberration that appears due to defocus in a high-aperture optical system. The maximal value of a linear approximation of the defocus wave aberration is used to define the depth of focus. It is proven that optical systems with a numerical aperture higher than 0.5 have a diffraction depth of focus 25–40% smaller than the widely known formula predicts. This fact is important for implementation of autofocus and digital focus extension algorithms. A simple formula for calculation of the depth of focus is proposed. The results of experimental measurements of the depth of focus are presented and discussed.
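The sketch below reproduces this kind of comparison under standard textbook assumptions: the Rayleigh quarter-wave criterion applied to the high-aperture defocus wave aberration W(dz) = n·dz·(1 − cos α), against the widely known paraxial formula. The paper's exact formula and figures may differ; this only illustrates that the high-aperture depth of focus comes out smaller than the paraxial estimate.

```python
import numpy as np

wavelength = 0.55e-6   # m (assumed)
n = 1.0                # immersion index (assumed)

for NA in (0.3, 0.5, 0.7, 0.9):
    alpha = np.arcsin(NA / n)
    # Rayleigh criterion W <= lambda/4 with W = n * dz * (1 - cos(alpha))
    dof_exact = wavelength / (4.0 * n * (1.0 - np.cos(alpha)))
    # widely known paraxial formula
    dof_paraxial = n * wavelength / (2.0 * NA**2)
    shrink = 100.0 * (1.0 - dof_exact / dof_paraxial)
    print(f"NA={NA:.1f}: exact {dof_exact*1e6:.2f} um, "
          f"paraxial {dof_paraxial*1e6:.2f} um, {shrink:.0f}% smaller")
```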
The paper presents a mathematical technique for calculation of the three-dimensional intensity distribution near a focal point of a high-aperture optical system in the case of quasi-monochromatic partially polarized light. This technique is an extension of the vector diffraction theory for high-aperture optical systems. It is based on the Huygens-Fresnel principle: a spherical wave at the exit pupil is considered as a numerous set of elementary secondary partially polarized light sources. The total intensity is calculated as the superposition of complex wave amplitudes, taking into account the polarization orientation, the degree of polarization defined by the Stokes parameters, the orientation of the detector aperture, and the coherence length of the quasi-monochromatic light.
This paper presents a comparative analysis of schemes for the illumination channel of a light microscope: Köhler illumination, projection illumination, and critical illumination. It is proved that these schemes are particular cases of a proposed common scheme for microscope illumination. The uniformity of the irradiance distribution on the sample surface produced by different particular cases of this common scheme with a halogen lamp or a uniform extended light source is calculated. It is shown that high uniformity can be reached not only with Köhler illumination.
The paper discusses several techniques for performance evaluation of passive digital imaging systems. The principal approach of these techniques is a comparison of the output signals from a real imaging system and from the idealized one. The first technique applies the normalized least-square error called fidelity as an absolute measure of the output signal difference. The second technique uses the correlation coefficient that reflects the difference between linear combinations of the output signals as an estimation of performance. The third technique is based on evaluation of the information rate of the output signals.
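A minimal sketch of the first two measures follows, with the "real" output simulated as a blurred, noisy copy of the idealized one; the test images and the distortion model are assumed for illustration, and fidelity is taken here in the usual sense of one minus the normalized least-square error.

```python
import numpy as np

rng = np.random.default_rng(1)
ideal = rng.uniform(0.0, 1.0, size=(64, 64))               # idealized output signal (assumed test data)
kernel = np.ones((3, 3)) / 9.0                              # simple blur standing in for real-system filtering
blurred = np.real(np.fft.ifft2(np.fft.fft2(ideal) *
                               np.fft.fft2(kernel, s=ideal.shape)))
real = blurred + 0.05 * rng.standard_normal(ideal.shape)    # real (distorted, noisy) output

# 1. Fidelity: one minus the normalized least-square error
fidelity = 1.0 - np.sum((ideal - real) ** 2) / np.sum(ideal ** 2)

# 2. Correlation coefficient between the two output signals
corr = np.corrcoef(ideal.ravel(), real.ravel())[0, 1]

print(f"fidelity: {fidelity:.3f}, correlation: {corr:.3f}")
```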
KEYWORDS: Spatial resolution, Imaging systems, Signal attenuation, Minimum resolvable temperature difference, Interference (communication), Spatial frequencies, Temporal resolution, Analog electronics, Digital signal processing, Thermography
There are several techniques for performance evaluation of an imaging system (IS). The first is the classical one: performance is considered as a characteristic called the minimum resolvable temperature difference (MRTD). The second is fidelity, a parameter based on the least-square error between the output signals of an idealized IS and the investigated one. The least-square error takes into account noise and the distortions introduced by suppression of high spatial frequencies. The third technique is defined via the correlation coefficient between the output signals of the idealized IS and a given one. The paper discusses the application of the mentioned approaches for performance evaluation.
The paper presents an approach for evaluation of the residual non-uniformity after two-point linear non-uniformity correction. The approach takes into consideration the parameters of the imaging system, the reference source, and the non-linearity and noise of the multi-element photodetector.
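A sketch of a two-point linear correction and the residual non-uniformity left by detector non-linearity and noise is given below; the pixel response model and its dispersion parameters are assumed for illustration, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(2)
n_pix = 1000
gain = 1.0 + 0.05 * rng.standard_normal(n_pix)       # pixel gain dispersion (assumed)
offset = 0.02 * rng.standard_normal(n_pix)           # pixel offset dispersion (assumed)
gamma = 1.0 + 0.01 * rng.standard_normal(n_pix)      # weak pixel non-linearity (assumed)

def detector(flux):
    """Assumed pixel response: slightly non-linear, with gain/offset dispersion and temporal noise."""
    return gain * flux**gamma + offset + 0.002 * rng.standard_normal(n_pix)

# Calibration with two uniform reference-source levels
L1, L2 = 0.2, 0.8
S1, S2 = detector(L1), detector(L2)
g = (L2 - L1) / (S2 - S1)            # per-pixel correction gain
o = L1 - g * S1                      # per-pixel correction offset

# Residual non-uniformity at an intermediate radiance level
L_test = 0.5
corrected = g * detector(L_test) + o
residual = np.std(corrected) / np.mean(corrected)
print(f"residual non-uniformity: {100.0 * residual:.2f} %")
```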
The paper presents an approach for performance evaluation and parametric optimization of imaging system design. This approach is based on calculation and minimization of image distortion. It applies a criterion based on minimization of the normalized least-square image error. The proposed mathematical apparatus makes possible evaluation of the performance and calculation of the optimal parameters that reduce the image distortion caused by spatial filtering and noise. The paper illustrates the application of the proposed technique in the performance analysis of a scanning system.
KEYWORDS: Spatial resolution, Imaging systems, Signal to noise ratio, Information technology, Modulation transfer functions, Astatine, Infrared imaging, Interference (communication), Image quality, Minimum resolvable temperature difference
The paper describes an approach to optimization of passive infrared imaging systems based on maximization of the correlation between the output signals of an idealized imaging system and a real one. This approach guarantees the optimal balance between the temperature and spatial resolutions of the imaging system for any given test object. The paper presents a mathematical apparatus that binds the coefficient of correlation between the output signals with parameters of the imaging system such as focal distance, aperture diameter, dimensions of the photosensitive element, etc. This apparatus allows one to evaluate the performance and to obtain a merit function for optimization. Results of optimization and the problem of identifying the best relationship between spatial and temperature resolutions are discussed.
The paper presents an approach for performance evaluation and parametric optimization in IR imaging system design. This approach is based on calculation and minimization of image distortion. It applies a criterion based on minimization of the normalized least-square image error. The proposed mathematical apparatus makes possible evaluation of the performance and calculation of the optimal parameters that reduce the image distortion caused by spatial filtering and noise. The paper illustrates the application of the proposed techniques in scanning system performance analysis and design.
The paper describes an approach to parametric optimization of an IR imaging system. This approach is based on minimization of the image distortion of a multi-bar test object. The quality of the imaging system is defined by the probability of correct pixel classification. This probability characterizes the error of image binarization. The paper presents a mathematical model that binds the probability of correct pixel classification with parameters of the imaging system such as focal length, aperture diameter, dimensions of the photosensitive element, integration time, etc. It allows one to obtain a merit function for parametric optimization and to identify the optimal relationship between spatial resolution and temperature resolution.
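A simple illustration of how such a classification probability can serve as a merit function is sketched below, assuming a midpoint threshold and Gaussian noise; this is an assumed model, not the paper's full mathematical apparatus.

```python
import math

def p_correct(delta_signal, sigma_noise):
    """Probability of classifying a pixel correctly when the 'bar' and 'background'
    levels differ by delta_signal and the threshold is set halfway between them."""
    z = delta_signal / (2.0 * sigma_noise)
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# delta_signal shrinks when spatial resolution drops (the MTF attenuates the bars),
# while sigma_noise grows when integration time drops, so this probability can
# balance spatial resolution against temperature resolution.
for delta in (0.5, 1.0, 2.0, 4.0):
    print(f"delta/sigma = {delta:.1f}: P = {p_correct(delta, 1.0):.3f}")
```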
KEYWORDS: Temperature metrology, Cameras, CRTs, Computer simulations, Digital signal processing, Visualization, Digital cameras, Radiation effects, Quantum electronics, Optical filters
Cathode ray tube cameras can be used for measurement and visualization of 300-2000 degrees Celsius temperature fields encountered in opto-, micro-, and quantum electronics production. Computer simulation of the physical processes in the camera allows us to obtain the information necessary to increase the temperature accuracy by digital compensation of the errors. The sequence of the computer simulation and the practical results of temperature measurement are presented.