To address the slow speed and poor real-time performance of remote sensing image segmentation, this paper studies a segmentation method based on the Hadoop platform. After analyzing the structural characteristics of the Hadoop cloud platform and its MapReduce programming component, an image segmentation method combining OpenCV with the Hadoop cloud platform is proposed. First, the MapReduce image processing model on the Hadoop platform is designed: the image input and output formats are customized and the splitting method for the data file is rewritten. Then the Mean Shift image segmentation algorithm is implemented. Finally, a segmentation experiment is carried out on a remote sensing image, and the same Mean Shift segmentation is implemented in MATLAB for comparison on the same image. The experimental results show that, while maintaining good segmentation quality, remote sensing image segmentation on the Hadoop cloud platform is much faster than standalone MATLAB segmentation, greatly improving the effectiveness of image segmentation.
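Mean Shift segments an image by shifting each pixel value toward the local density maximum (mode) of its neighbourhood. As a minimal illustration of the mode-seeking idea, here is a 1-D flat-kernel sketch in pure Python; the `bandwidth` value and the sample grey levels are illustrative assumptions, not parameters from the paper:

```python
import numpy as np

def mean_shift_modes(points, bandwidth=2.0, n_iter=20):
    """Shift each point toward the mean of its neighbours (flat kernel)."""
    shifted = points.astype(float).copy()
    for _ in range(n_iter):
        for i, p in enumerate(shifted):
            # neighbours of the current estimate within the kernel window
            neighbours = points[np.abs(points - p) <= bandwidth]
            shifted[i] = neighbours.mean()  # move to the local centre of mass
    return shifted

# two clusters of grey levels converge to two modes (segment labels)
grey = np.array([10.0, 11, 12, 50, 51, 52])
modes = np.round(mean_shift_modes(grey))
print(sorted(set(modes)))  # [11.0, 51.0]
```

In the real 2-D algorithm the kernel runs over the joint spatial-range domain of the image; OpenCV's `cv2.pyrMeanShiftFiltering` provides such an implementation.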
A photoelectric detection system for clutch driven plates is proposed in this paper. It uses two CCD cameras to detect the riveting quality and non-parallelism of the clutch driven plate, with a computer system controlling the cameras and processing the image data. The overall structure of the photoelectric detection system is presented, the design parameters of the optical system are given, and the image processing algorithm is described. The diameter of the detected clutch plates ranges from 160 mm to 430 mm, and the detection resolution is 0.1 mm. Experiments show that a detection rate of 100% is achieved for both riveting quality and non-parallelism. The system structure is simple, low in cost, and highly reliable, making it suitable for automated production installations.
KEYWORDS: Stars, Interference (communication), Signal to noise ratio, Signal detection, Signal processing, Optical tracking, Optical design, Sensors, Visualization, Quantum efficiency
Detection sensitivity is an important parameter of a star tracker; it expresses the detection limit, i.e., the visual magnitude of the faintest detectable star. The detection sensitivity of an APS star tracker depends on the APS parameters, the optical system aperture and lens transmittance, the SNR, and so on. In this paper, expressions for the APS star signal and the APS noise are given, from which the signal-to-noise ratio (SNR) of the star tracker is obtained. Based on the theory of detecting signals in noise and the principle of optimal SNR threshold detection, a detection sensitivity model is derived. The corresponding detection limit of the APS star tracker is obtained for the APS IBIS5 and a typical optical system design. With an SNR threshold of 8.1, which yields a 99.9% detection probability, the calculated detection sensitivity is visual magnitude 6.5.
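The link between an SNR threshold and a limiting magnitude rests on the standard stellar flux relation, F ∝ 10^(−0.4 m). A hedged sketch of the conversion follows; the reference magnitude and reference SNR are illustrative assumptions (not values from the paper), and the linear SNR-flux scaling only holds in the noise-limited regime:

```python
import math

def limiting_magnitude(m_ref, snr_ref, snr_threshold):
    """Magnitude at which the star signal drops to the SNR threshold,
    assuming (simplistically) that SNR scales linearly with star flux."""
    return m_ref + 2.5 * math.log10(snr_ref / snr_threshold)

# e.g. if a magnitude-4 star gave SNR 81, an SNR-8.1 threshold would
# correspond to a limiting magnitude of 4 + 2.5*log10(10) = 6.5
print(limiting_magnitude(4.0, 81.0, 8.1))  # 6.5
```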
A transmission-type area-array photogrammetric camera can obtain images of high geometric fidelity and high photogrammetric quality. However, a camera with a single area-array CCD image sensor cannot meet the requirements of measurement precision and photogrammetric coverage. To capture more information and a wider photogrammetric coverage area, the field-of-view angle must be increased, which can be achieved with an exterior-field-of-view assembled photogrammetric camera. Two tasks must be completed before images obtained by such a camera can be used in photogrammetry. First, the focal planes of all assembled cameras must be converted to a common benchmark coordinate focal plane to realize digital camera assembly. Second, the images must be re-sampled and processed. Through this coordinate conversion, a functional relation can be established between two images from different assembled cameras, mapping each pixel of one image to a pixel of the other. However, after the conversion, some pixels may crowd together while others spread apart over an image area. Interpolation direction finding (IDF) is therefore used to determine these pixels and realize image re-sampling. In this paper, the structure of the exterior-field-of-view assembled photogrammetric camera is analyzed, and the coordinate conversion method and the image grey re-sampling method are discussed. All of this work forms the basis of data pre-processing for the exterior-field-of-view assembled photogrammetric camera.
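IDF is this paper's own re-sampling method and is not reproduced here. Purely as a generic illustration of grey-value re-sampling at the non-integer coordinates produced by a coordinate conversion, standard bilinear interpolation looks like this (the image values are arbitrary demo data):

```python
import numpy as np

def bilinear(img, x, y):
    """Grey value at non-integer coordinates (x, y) via bilinear interpolation."""
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    dx, dy = x - x0, y - y0
    # weighted average of the four surrounding pixels
    return ((1 - dx) * (1 - dy) * img[y0, x0] +
            dx * (1 - dy) * img[y0, x0 + 1] +
            (1 - dx) * dy * img[y0 + 1, x0] +
            dx * dy * img[y0 + 1, x0 + 1])

img = np.array([[0.0, 10.0],
                [20.0, 30.0]])
print(bilinear(img, 0.5, 0.5))  # 15.0
```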
The geometrical Modulation Transfer Function (MTF) of a CMOS APS (active pixel sensor) is analyzed in this paper. Because the in-pixel amplifier and other function circuits occupy part of the pixel area, advanced APS devices have been designed and fabricated with active areas of different shapes, such as square, rectangular, and L-shaped. The MTF is an important figure of merit for focal plane array imaging sensors, and research on analyzing the MTF of different pixel shapes for centroid-based target position estimation is currently in progress. The MTF gives a more complete understanding of the trade-offs imposed by different pixel designs and signal processing conditions. Based on the image sensor sampling and reconstruction model, an MTF expression for an arbitrary active pixel shape is derived in this paper. Three pixels with realistic active-area shapes were analyzed: square, rectangular, and L-shaped, with Fill Factors (FF) of 30%, 44%, and 55%, respectively. Simulation results indicate that pixel geometry contributes significantly to the MTF: both the shape of the active sensitive area and its position within the pixel influence the MTF curves. These results are important for designing better APS pixels and, even more so, for analyzing the imaging performance of APS subpixel-precision systems.
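The geometric MTF of an arbitrary active-area shape is the normalised magnitude of the Fourier transform of the aperture function. A minimal numeric sketch of this idea follows; the mask size and the L-shape fill pattern are illustrative assumptions, not the paper's actual pixel designs:

```python
import numpy as np

def aperture_mtf(mask, axis=0):
    """Geometric MTF of a pixel aperture: normalised magnitude of the
    Fourier transform of the active-area mask, projected onto one axis."""
    profile = mask.sum(axis=1 - axis)        # project aperture onto the axis
    spectrum = np.abs(np.fft.rfft(profile, n=256))
    return spectrum / spectrum[0]            # normalise so MTF(0) = 1

# L-shaped active area (1 = photosensitive) inside an 8x8 pixel cell
pixel = np.zeros((8, 8))
pixel[2:7, 2:4] = 1   # vertical bar of the L
pixel[5:7, 2:7] = 1   # horizontal bar of the L
mtf = aperture_mtf(pixel)
print(mtf[0], mtf[1] < 1.0)  # unity at zero frequency, rolls off above
```

Comparing the curves produced by square, rectangular, and L-shaped masks of equal fill factor reproduces qualitatively the shape dependence the abstract describes.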
The active pixel sensor (APS) star tracker has become a research hotspot because of its technical advantages, and the centroid algorithm is a subpixel method well suited to star position calculation because of its high accuracy and simplicity. When the centroid algorithm is applied to an APS star tracker, the pixel geometry of the APS may affect the star image position accuracy. Because the in-pixel amplifier and other function circuits occupy part of the pixel area, the Fill Factor is less than 100%. Moreover, the active sensitive area has a particular geometrical shape, such as square, rectangular, or L-shaped. The Fill Factor influences the subpixel star locating accuracy of the centroid algorithm. In this paper, the influence of pixel geometry on star position accuracy is analyzed. Simulation experiments show that both the Fill Factor and the pixel shape affect star position accuracy: the star locating error increases as the Fill Factor decreases, different shapes of the active sensitive area produce different location errors, and a sensitive area that is symmetric about the x or y axis yields a symmetric location error along that axis.
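The centroid algorithm referenced above is the intensity-weighted centre of mass of the star image window. A minimal sketch (the 3x3 window values are synthetic demo data, not measurements):

```python
import numpy as np

def centroid(window):
    """Intensity-weighted centre of mass of a star image window (subpixel)."""
    total = window.sum()
    ys, xs = np.indices(window.shape)
    return (xs * window).sum() / total, (ys * window).sum() / total

# symmetric 3x3 star spot centred on pixel (1, 1)
spot = np.array([[1.0, 2, 1],
                 [2.0, 8, 2],
                 [1.0, 2, 1]])
print(centroid(spot))  # (1.0, 1.0)
```

An asymmetric spot, or a non-uniform pixel response caused by a reduced fill factor, shifts this estimate away from the true star position, which is the error mechanism the abstract analyzes.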
The gain of a TDI CCD camera is the conversion between the number of electrons recorded by the TDI CCD and the number of digital units (counts) in the CCD image [1]. The TDI CCD camera has become a main technical approach for meeting the high-resolution and light-weight requirements of remote sensing equipment, and knowing this conversion is useful for evaluating camera performance. In general, a lower gain is better; however, this holds only as long as the full well depth (the number of electrons a pixel can hold) can still be represented. High gains result in higher digitization noise, so system gains are designed as a compromise between high digitization noise and loss of well depth. This paper presents the mathematical theory behind the gain calculation of a TDI CCD camera and shows how the mathematics suggests ways to measure the gain accurately, following the Axiom Tech approach. The gains were computed using the mean-variance method, also known as the photon transfer curve method. This method uses the effect of quantization on the variance of the measured counts over a uniformly illuminated patch of the detector, and the derivation uses the concepts of signal and noise. A linear fit of variance versus mean is performed, and the resulting slope is the gain of the TDI CCD. The experiments used an integrating sphere to obtain a flat field, and the gains of four IT-EI-2048 TDI CCDs were calculated. The results and figures for the four TDI CCDs are given.
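The mean-variance method described above can be sketched with simulated flat fields: photon shot noise is Poisson, so the variance of the counts grows linearly with their mean, and the slope of that line is the gain (here in counts per electron). The gain value and illumination levels below are illustrative assumptions, not the paper's measurements:

```python
import numpy as np

rng = np.random.default_rng(0)
true_gain = 0.25  # counts per electron (assumed value for the demo)

# flat-field patches at several illumination levels
means, variances = [], []
for electrons in [500, 1000, 2000, 4000, 8000]:
    counts = true_gain * rng.poisson(electrons, size=100_000)
    means.append(counts.mean())
    variances.append(counts.var())

# photon transfer: variance = gain * mean, so the fitted slope is the gain
slope, _ = np.polyfit(means, variances, 1)
print(round(slope, 2))  # ~0.25
```

With real data, dark frames are subtracted first so that read noise appears only as the fit's intercept rather than biasing the slope.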
Small satellites are capable of performing space exploration missions that require accurate attitude determination and control. However, tight weight, size, power, and cost requirements limit the types of attitude sensors they can carry; CCD-based sensors, for example, are not practical for small satellites. The CMOS APS is a good substitute for the attitude sensors of small spacecraft. Its technical advantages include no blooming, a single power supply, low power consumption, small size with little support circuitry, direct digital output, simple system design, and, in particular, radiation hardness compared with the CCD. This paper discusses the applicability of the CMOS APS to star trackers for small satellites; furthermore, a prototype ground-based star camera based on the STAR250 CMOS image sensor has been built. To extract star position coordinates, a subpixel-accuracy centroiding algorithm was developed and tested on ground-based images. The camera's star sensitivity and noise model are analyzed, and the system accuracy is evaluated. Experimental results indicate that a star camera based on a CMOS APS is a viable and practical attitude sensor for small satellites.
The three-line CCD camera is a multi-channel solid-state mapping camera: there are three signal channels in one camera, and twelve channels across the three cameras of the three-line CCD camera system. Because the channels differ slightly, the grey levels produced by different channels differ slightly for the same region. These differences have two aspects: differences between the cameras and differences between channels of the same camera, and they cause errors when images from different channels are matched. In this paper, the camera response curves are characterized through ground radiometric calibration of the multi-channel camera, and the channel calibration coefficients are obtained after a reference output channel is determined. Radiometrically calibrating each channel reduces the inter-channel differences so that the outputs of all channels are nearly identical, improving the accuracy of image matching.
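The equalization step above can be sketched in its simplest form: given each channel's mean response to a uniform target, a multiplicative coefficient per channel brings every channel to the reference channel's level. This is a simplified sketch of relative radiometric calibration; the channel means are invented demo values, and a real calibration would also fit an offset (dark level) per channel:

```python
import numpy as np

def relative_calibration(channel_means, reference_index=0):
    """Per-channel multiplicative coefficients that equalise channel
    responses to the chosen reference channel."""
    ref = channel_means[reference_index]
    return ref / np.asarray(channel_means, dtype=float)

# uniform-target mean outputs of four channels; channel 0 is the reference
means = np.array([100.0, 95.0, 105.0, 110.0])
coeffs = relative_calibration(means)
print(means * coeffs)  # all channels brought to the reference level 100.0
```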