This paper presents an automatic method for defogging a single hazy image. To recover the scene, an accurate depth map is estimated by a multi-level method that fuses depth maps computed with different patch sizes under the dark channel prior. A Markov random field (MRF) is then applied to label depth levels across adjacent regions, compensating for wrongly estimated regions. The accurate estimation of scene depth provides good restoration in visibility and contrast without oversaturation. The algorithm is verified on a set of foggy and hazy images, and the experimental results demonstrate that the defogging method recovers high-quality images through accurate depth-map estimation.
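For concreteness, the following is a minimal sketch of the dark channel and transmission estimation underlying the prior; the paper's multi-level fusion over several patch sizes and its MRF refinement are not shown, and the patch size, omega, and airlight handling are illustrative assumptions rather than the paper's exact settings.

```python
import cv2
import numpy as np

def dark_channel(image, patch_size=15):
    """Per-pixel minimum over the color channels, then a patch-wise minimum.

    Under the dark channel prior, haze-free regions have a dark channel near
    zero, so large values indicate haze density and hence scene depth.
    """
    min_rgb = image.min(axis=2)  # minimum across the three color channels
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (patch_size, patch_size))
    return cv2.erode(min_rgb, kernel)  # erosion = minimum filter over each patch

def estimate_transmission(image, airlight, patch_size=15, omega=0.95):
    """Coarse transmission map t(x) = 1 - omega * dark_channel(I(x) / A)."""
    normalized = image.astype(np.float64) / airlight  # airlight A per channel
    return 1.0 - omega * dark_channel(normalized, patch_size)
```

Running estimate_transmission with several patch_size values and fusing the results corresponds to the multi-level estimation described above: small patches preserve edges, while large patches give statistically more reliable estimates.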
This paper proposes a smart portable device, named the X-Eye, which provides a gesture interface with a small form factor but a large display for photo capture and management. The wearable vision system is implemented on embedded hardware and achieves real-time performance. The hardware includes an asymmetric dual-core processor with an ARM core and a DSP core; the display device is a pico projector that is physically small yet projects a large screen. A triple-buffering mechanism is designed for efficient memory management, and software functions are partitioned and pipelined for effective parallel execution. Gesture recognition begins with color classification based on a Gaussian mixture model (GMM) trained with the expectation-maximization algorithm. To improve the run-time performance of the GMM, we devise a lookup-table (LUT) technique, as sketched below. Fingertips are then extracted, and geometrical features of the fingertip shapes are matched to recognize the user's gesture commands.
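As a rough sketch of the color classification and LUT idea (not the authors' embedded implementation), one can fit the GMM offline with EM and precompute classification decisions over a quantized color cube, so run-time labeling costs one table lookup per pixel. The training data, component count, quantization, and threshold below are all illustrative assumptions.

```python
import numpy as np
from sklearn.mixture import GaussianMixture  # EM-trained GMM

# Synthetic stand-in for labeled skin-color training pixels (N x 3).
rng = np.random.default_rng(0)
skin_pixels = rng.normal(loc=[150.0, 120.0, 160.0], scale=12.0, size=(2000, 3))

# Fit the skin-color GMM with expectation-maximization.
skin_gmm = GaussianMixture(n_components=4, random_state=0).fit(skin_pixels)

# Precompute a LUT over a quantized color cube: the Gaussians are evaluated
# once per bin here, instead of once per pixel per frame at run time.
BINS = 32
grid = np.stack(np.meshgrid(*[np.arange(BINS)] * 3, indexing="ij"), axis=-1)
centers = (grid.reshape(-1, 3) + 0.5) * (256.0 / BINS)  # bin-center colors
THRESHOLD = -14.0  # illustrative log-likelihood cutoff
lut = (skin_gmm.score_samples(centers) > THRESHOLD).reshape(BINS, BINS, BINS)

def classify_skin(frame):
    """Label each pixel as skin/non-skin with a single LUT lookup."""
    q = (frame.astype(np.int64) * BINS) // 256  # quantize to bin indices
    return lut[q[..., 0], q[..., 1], q[..., 2]]
```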
To verify the accuracy of the gesture recognition module, experiments were conducted in eight scenes with 400 test videos, including the challenges of colorful backgrounds, low illumination, and flickering. The whole system, including gesture recognition, runs at 22.9 frames per second, and the experiments yield a 99% recognition rate. These results demonstrate that this small-size, large-screen wearable system offers an effective gesture interface with real-time performance.
Retinex is an image restoration method that recovers an image's original appearance. The Retinex algorithm computes center/surround information with a large-kernel Gaussian blur convolution, then performs a pixel-wise log-domain operation between the original image and the center/surround information, and finally normalizes the log-domain result to an appropriate dynamic range.
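The three steps map directly onto a few array operations. Below is a minimal single-scale sketch; the surround scale and the min/max normalization are illustrative choices, and multi-scale variants combine several surround scales.

```python
import cv2
import numpy as np

def single_scale_retinex(image, sigma=80.0):
    """Single-scale Retinex following the three steps described above."""
    img = image.astype(np.float64) + 1.0  # offset to avoid log(0)
    # Step 1: center/surround via a large-kernel Gaussian blur.
    surround = cv2.GaussianBlur(img, (0, 0), sigma)  # kernel size derived from sigma
    # Step 2: pixel-wise log-domain difference between image and surround.
    retinex = np.log(img) - np.log(surround)
    # Step 3: normalize into the displayable dynamic range [0, 255].
    lo, hi = retinex.min(), retinex.max()
    return ((retinex - lo) / (hi - lo) * 255.0).astype(np.uint8)
```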
This paper presents GPURetinex, a data-parallel algorithm devised by parallelizing Retinex on GPGPU/CUDA. GPURetinex exploits the GPGPU's massively parallel architecture and hierarchical memory to improve efficiency, distributing data across hierarchical threads, and its implementation is optimized to take full advantage of the GPGPU/CUDA computing model.
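The paper's kernels are hand-written CUDA; purely as an illustration of the data-parallel structure (an assumption, not the authors' code), the same pipeline can be expressed on the GPU with CuPy, where the blur and the log/normalization steps each launch per-pixel parallel kernels:

```python
import cupy as cp
from cupyx.scipy.ndimage import gaussian_filter

def gpu_retinex(image_host, sigma=80.0):
    """The three Retinex steps executed as data-parallel CUDA kernels."""
    img = cp.asarray(image_host, dtype=cp.float64) + 1.0  # host -> device copy
    # Spatial-only Gaussian blur on an H x W x 3 image (channels untouched).
    surround = gaussian_filter(img, sigma=(sigma, sigma, 0.0))
    retinex = cp.log(img) - cp.log(surround)  # elementwise: one thread per pixel
    lo, hi = retinex.min(), retinex.max()
    normalized = (retinex - lo) / (hi - lo) * 255.0
    return cp.asnumpy(normalized).astype("uint8")  # device -> host copy
```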
In our experiments, a GT200 GPU and CUDA 3.0 are employed. The results show that GPURetinex gains a 30× speedup over a CPU-based implementation on images of 2048 × 2048 resolution, indicating that CUDA acceleration can reach real-time performance.
A mobile video surveillance system uses mobile clients to view surveillance videos over mobile networks; however, mobile networks and mobile clients have limited computational and network resources. The proposed system combines moving object detection and video transcoding techniques to help users monitor remote sites through video streaming over 3G communication networks. Moving object detection and tracking skim off the useful video clips, and communication services comprising video transcoding, short text messaging, and mobile video streaming deliver the surveillance information to mobile devices. Moving object detection is achieved by background subtraction with adaptive Gaussian mixture modeling, followed by particle-filter tracking. A spatial-domain cascaded transcoder is developed to convert the filtered image sequence of detected objects into the 3GPP video streaming format. Experimental results show that the system successfully detects all moving-object events in a complex surveillance scene and that the transcoder attains high PSNR.
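As a minimal sketch of the detection stage, the loop below uses OpenCV's MOG2 adaptive Gaussian mixture model as a stand-in for the paper's background model; the video path, blob-area threshold, and parameters are illustrative. It produces the per-frame foreground regions that would seed the particle-filter tracker and clip selection.

```python
import cv2

# Adaptive Gaussian-mixture background subtraction (OpenCV MOG2 stand-in).
subtractor = cv2.createBackgroundSubtractorMOG2(history=500, detectShadows=True)

capture = cv2.VideoCapture("surveillance.avi")  # illustrative input path
while True:
    ok, frame = capture.read()
    if not ok:
        break
    mask = subtractor.apply(frame)       # per-pixel foreground/background mask
    _, mask = cv2.threshold(mask, 200, 255, cv2.THRESH_BINARY)  # drop shadows
    mask = cv2.medianBlur(mask, 5)       # suppress speckle noise
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    # Bounding boxes of sufficiently large moving blobs (area in pixels).
    boxes = [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) > 200]
capture.release()
```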