The neuro-inspired vision approach, based on models from biology, makes it possible to reduce computational complexity. One of these models, the HMAX model, shows that recognizing an object in the visual cortex mobilizes areas V1, V2 and V4. From a computational point of view, V1 corresponds to a stage of directional filters (for example Sobel, Gabor or wavelet filters). This information is then processed in area V2 to obtain local maxima. The resulting information is sent to an artificial neural network; this neural processing module corresponds to area V4 of the visual cortex and is intended to categorize the objects present in the scene. In order to build autonomous vision systems (consuming a few milliwatts) that embed such processing, we studied and fabricated, in 0.35 μm CMOS technology, prototypes of two image sensors implementing the V1 and V2 stages of the HMAX model.
Today, solid state image sensors are used in many applications like in mobile phones, video surveillance
systems, embedded medical imaging and industrial vision systems. These image sensors require the integration
in the focal plane (or near the focal plane) of complex image processing algorithms. Such devices must meet the
constraints related to the quality of acquired images, speed and performance of embedded processing, as well
as low power consumption. To achieve these objectives, low-level analog processing makes it possible to extract the useful information in the scene directly. For example, an edge detection step followed by local maxima extraction facilitates high-level processing such as object pattern recognition in a visual scene. Our goal was to design an
intelligent image sensor prototype achieving high-speed image acquisition and non-linear image processing (like
local minima and maxima calculations). For this purpose, we present in this article the design and test of a 64×64
pixels image sensor built in a standard CMOS Technology 0.35 μm including non-linear image processing. The
architecture of our sensor, named nLiRIC (non-Linear Rapid Image Capture), is based on the implementation of
an analog Minima/Maxima Unit. This MMU calculates the minimum and maximum values (non-linear functions),
in real time, in a 2×2 pixel neighbourhood. Each MMU needs 52 transistors and the pixel pitch is
40×40 μm. The total area of the 64×64 pixel array is 12.5 mm². Our tests have demonstrated the main functions
of our new image sensor: fast image acquisition (10K frames per second) and minima/maxima calculation in
less than one ms.
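As a software reference for the MMU behaviour, the 2×2 minima/maxima computation can be sketched as follows (an illustrative model assuming non-overlapping 2×2 blocks, not the analog implementation itself):

```python
import numpy as np

def mmu_min_max(image):
    """Software model of the Minima/Maxima Unit: for each non-overlapping
    2x2 pixel neighbourhood, return the minimum and maximum values."""
    h, w = image.shape
    assert h % 2 == 0 and w % 2 == 0, "expects even dimensions"
    # Group pixels into 2x2 blocks: shape (h//2, w//2, 2, 2)
    blocks = image.reshape(h // 2, 2, w // 2, 2).swapaxes(1, 2)
    return blocks.min(axis=(2, 3)), blocks.max(axis=(2, 3))

frame = np.array([[1, 2, 3, 4],
                  [5, 6, 7, 8],
                  [9, 8, 7, 6],
                  [5, 4, 3, 2]])
mins, maxs = mmu_min_max(frame)
```

On the chip each such min/max pair is produced in parallel by the analog MMU; the sketch only fixes the expected numerical behaviour.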
Today's digital image sensors are used as passive photon integrators and image processing is essentially performed
by digital processors separated from the image sensing part. This approach forces the processing stage to
deal with already grabbed pictures whose exposure parameters may be poorly adjusted. This paper presents a fast
self-adaptable preprocessing architecture with fast feedback to the sensing level. These feedback loops are controlled
by digital processing in order to modify the sensor parameters during exposure time. Exposure and processing
parameters are tuned in real time to fit application requirements depending on scene parameters. Considering
emerging integration technologies such as 3D stacking, this paper presents an innovative way of designing smart
vision sensors, integrating feedback control and opening new approaches for machine vision architectures.
Today, intelligent image sensors require the integration in the focal plane (or near the focal plane) of complex
algorithms for image processing. Such devices must meet the constraints related to the quality of acquired
images, speed and performance of embedded processing, as well as low power consumption. To achieve these
objectives, analog pre-processing is essential, on the one hand, to improve the quality of the images, making
them usable whatever the lighting conditions, and on the other hand, to detect regions of interest (ROIs) so as to limit
the number of pixels transmitted to a digital processor performing high-level processing such as feature extraction
for pattern recognition. To show that it is possible to implement analog pre-processing in the focal plane, we
have designed and implemented in 130nm CMOS technology, a test circuit with groups of 4, 16 and 144 pixels,
each incorporating analog average calculations.
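The analog average over a pixel group can be modelled in software as follows (an illustrative sketch; group sizes of 4, 16 and 144 pixels correspond to 2×2, 4×4 and 12×12 blocks):

```python
import numpy as np

def group_average(image, n):
    """Average over non-overlapping n x n pixel groups, mimicking the
    in-pixel analog averaging (n = 2, 4 or 12 for groups of 4, 16, 144)."""
    h, w = image.shape
    assert h % n == 0 and w % n == 0, "expects dimensions divisible by n"
    blocks = image.reshape(h // n, n, w // n, n).swapaxes(1, 2)
    return blocks.mean(axis=(2, 3))

img = np.arange(16, dtype=float).reshape(4, 4)
avg = group_average(img, 2)
```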
Imaging systems are progressing in both accuracy and robustness, and their use in precision agriculture is increasing accordingly. One application of imaging systems is to understand and control the centrifugal fertilizer spreading process. Predicting the spread pattern on the ground relies on an estimation of the trajectories and velocities of the ejected granules. The algorithms proposed to date have shown low accuracy, with errors of a few pixels, but a more accurate estimation of the motion of the granules can be achieved. Our new two-step cross-correlation-based algorithm builds on the technique used in particle image velocimetry (PIV), which has yielded highly accurate results in the field of fluid mechanics. In order to characterize and evaluate the new algorithm, we developed a simulator of fertilizer granule images whose output correlates strongly with real fertilizer images. The results of our tests show a deviation of <0.2 pixels for 90% of the estimated velocities. This subpixel accuracy allows the use of a smaller camera sensor, which decreases the acquisition and processing time and also lowers the cost. These advantages make it more feasible to install this system on existing centrifugal spreaders for real-time control and adjustment.
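The cross-correlation idea borrowed from PIV can be sketched as follows (a minimal single-window model with a three-point Gaussian subpixel fit, a common PIV refinement; the paper's actual two-step algorithm and its parameters differ):

```python
import numpy as np

def subpixel_shift(win_a, win_b):
    """Estimate the displacement between two interrogation windows by
    FFT cross-correlation, then refine the integer peak location with a
    three-point Gaussian fit. No border handling: the peak is assumed
    to lie away from the window edges."""
    a = win_a - win_a.mean()
    b = win_b - win_b.mean()
    corr = np.fft.fftshift(
        np.real(np.fft.ifft2(np.fft.fft2(a).conj() * np.fft.fft2(b))))
    py, px = np.unravel_index(np.argmax(corr), corr.shape)
    cy, cx = corr.shape[0] // 2, corr.shape[1] // 2

    def refine(c_m, c_0, c_p):
        # Standard three-point Gaussian peak fit (values must be > 0)
        return (np.log(c_m) - np.log(c_p)) / (
            2 * (np.log(c_m) + np.log(c_p) - 2 * np.log(c_0)))

    dy = py - cy + refine(corr[py - 1, px], corr[py, px], corr[py + 1, px])
    dx = px - cx + refine(corr[py, px - 1], corr[py, px], corr[py, px + 1])
    return dy, dx

# Synthetic check: a Gaussian "granule" shifted by (1, 2) pixels
y, x = np.mgrid[0:32, 0:32]
blob = np.exp(-((y - 16.0) ** 2 + (x - 16.0) ** 2) / 8.0)
dy, dx = subpixel_shift(blob, np.roll(blob, (1, 2), axis=(0, 1)))
```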
We present a detailed experimental study of circle-marker detection algorithms based on color information. Three color spaces are of main concern: HSV (hue, saturation, value), the normalized rgb color space, and the gray-world normalization (NG) color space. We compare the algorithms based on these color spaces and combine them, using color information simultaneously with shape-based marker detection techniques and the Hough transform, to obtain a new, robust, color-based circle-marker detection algorithm with higher accuracy. Experimental results show that the proposed algorithm reliably detects a circle marker against a complex background under various illuminations.
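The two illumination-normalizing color spaces can be computed as follows (a sketch using the standard definitions of normalized rgb and gray-world normalization):

```python
import numpy as np

def normalized_rgb(img):
    """Normalized rgb: divide each channel by the pixel-wise sum R+G+B,
    discounting illumination intensity at each pixel."""
    s = img.sum(axis=2, keepdims=True)
    return img / np.maximum(s, 1e-9)

def gray_world(img):
    """Gray-world normalization (NG): divide each channel by its mean
    over the whole image, discounting the illuminant colour."""
    m = img.mean(axis=(0, 1), keepdims=True)
    return img / np.maximum(m, 1e-9)

pixel = np.array([[[100.0, 50.0, 50.0]]])  # a single reddish RGB pixel
nrgb = normalized_rgb(pixel)
gw = gray_world(pixel)
```

The HSV conversion, being standard, is omitted; in practice any library implementation can be used alongside these two.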
In the context of reducing fertilizer input, understanding the whole centrifugal spreading process has become
essential. For several years we have focused our research on determining, by image processing, the ejection conditions
of the granules, that is, their trajectories and ejection angles, used as input data for a ballistic flight model to predict
the fertilizer distribution on the ground. Owing to the relatively high speed of the fertilizer granules (around 40 m·s⁻¹),
these parameters were previously evaluated using a specific high-speed imaging system and image processing based on
a motion estimation method using Markov Random Fields (MRFs). Although the results were good (90% of correct
trajectories), this method requires luminance invariance between two successive images and a good initialization of the
motion, both difficult to achieve with the previous imaging device. In this paper we describe improvements to the image
acquisition system (illumination management) and we test image processing methods based on Gabor filters and block
matching. The results obtained on synthetic images are satisfactory, but the specific motion and behaviour of the
fertilizer require further improvements of these two methods to give accurate results in terms of granule speed and
direction. A comparison with the MRF method is also under way, in order to propose a final, reliable image
processing technique that can be adapted to 3D estimation.
KEYWORDS: Image processing, Retina, Analog electronics, Image acquisition, Sensors, High speed imaging, Photonics, Current controlled current source, Lithium, Very large scale integration
A high-speed analog VLSI image acquisition and pre-processing system is described in this paper. A 64×64 pixel retina is used to extract the magnitude and direction of spatial gradients from images. The sensor thus implements some low-level image processing in a massively parallel way in each pixel. Spatial gradients and various convolutions such as the Sobel filter or the Laplacian are described and implemented on the circuit. The retina performs, in a massively parallel way at pixel level, various treatments based on a four-quadrant multiplier architecture. Each pixel includes a photodiode, an amplifier, two storage capacitors and an analog arithmetic unit. A maximal output frame rate of about 10 000 frames per second with image acquisition only, and of 2000 to 5000 frames per second with image processing, is achieved in a 0.35 μm standard CMOS process. The retina provides address-event coded output on three asynchronous buses: one is dedicated to the gradient and the other two to the pixel values. A prototype based on this principle has been designed. Simulation results obtained with Mentor Graphics™ software and the AustriaMicrosystems design kit are presented.
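The gradient extraction performed in the pixel array can be modelled in software as follows (an illustrative Sobel sketch; on the retina the products are computed by the per-pixel four-quadrant multipliers, not by this loop):

```python
import numpy as np

SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
SOBEL_Y = SOBEL_X.T

def convolve3x3_valid(img, k):
    """Plain 3x3 'valid' sliding-window product-and-sum, the operation
    each pixel's analog arithmetic unit carries out on its neighbourhood."""
    h, w = img.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            out[i, j] = np.sum(img[i:i + 3, j:j + 3] * k)
    return out

def sobel_gradients(img):
    """Gradient magnitude and direction from the two Sobel responses."""
    gx = convolve3x3_valid(img, SOBEL_X)
    gy = convolve3x3_valid(img, SOBEL_Y)
    return np.hypot(gx, gy), np.arctan2(gy, gx)

img = np.array([[0, 0, 1, 1]] * 3, dtype=float)  # vertical step edge
mag, ang = sobel_gradients(img)
```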
The management of mineral fertilization using centrifugal spreaders calls for the development of spread pattern characterization devices to improve the quality of fertilizer spreading. In order to predict spread pattern deposition using a ballistic flight model, several parameters need to be determined, in particular, the velocity of the granules when they leave the spinning disc. We demonstrate that a motion-blurred image acquired in the vicinity of the disc by a low-cost imaging system can provide the three-dimensional components of the outlet velocity of the particles. A binary image is first obtained using a recursive linear filter. Then an original method based on the Hough transform is developed to identify the particle trajectories and to measure their horizontal outlet angles, not only in the case of horizontal motion but also in the case of three-dimensional motion. The method combines a geometric approach and mechanical knowledge derived from spreading analysis. The outlet velocities are deduced from outlet angle measurements using kinematic relationships. Experimental results provide preliminary validations of the technique.
Nowadays, high-speed imaging offers broad investigation possibilities for a wide variety of applications such as motion
studies and manufacturing process development. Moreover, thanks to advances in electronics, real-time processing can be
implemented in high-speed acquisition systems: important information can be extracted from the images in real time
and then used for on-line control. We have therefore developed a high-speed smart camera with a high-speed CMOS
sensor, typically 500 fps at 1.3 Megapixel resolution. Several specific processing operations have been implemented inside
an embedded FPGA, matched to the high-speed data flow. They are mainly dedicated to feature extraction
such as edge detection, image analysis, and finally marker extraction and profilometry. In every case, the data
processing reduces the large data flow (6.55 Gbps) and allows transfer over a simple serial output link such as
USB 2.0. This paper presents the high-speed smart camera and focuses on two processing implementations: marker
extraction and the related profilometry measurement. In the marker extraction mode, the center of mass of each marker
is determined by a combination of image filters, and only the position of the center is transferred via the USB 2.0 link. For
profilometry measurements, a simplified algorithm has been implemented at low cost in terms of hardware resources. The
positions of the markers or the different object profiles can be determined in real time at 500 fps with full-resolution
images. A higher rate can be reached with a lower resolution (e.g. 500 000 profiles for a single-row image).
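The marker-extraction mode reduces each marker to its center of mass; the quantity computed can be sketched as follows (an illustrative model with a hypothetical threshold parameter, not the FPGA filtering pipeline itself):

```python
import numpy as np

def center_of_mass(img, threshold):
    """Intensity-weighted centre of mass of the pixels above threshold:
    the (y, x) pair that is sent over USB 2.0 instead of the full image."""
    ys, xs = np.nonzero(img > threshold)
    w = img[ys, xs].astype(float)
    return float((ys * w).sum() / w.sum()), float((xs * w).sum() / w.sum())

frame = np.zeros((8, 8))
frame[2:4, 5:7] = 10.0  # one small bright marker
cy, cx = center_of_mass(frame, 1.0)  # (2.5, 5.5)
```

With several markers, the same computation would be run per connected component; only the coordinate pairs need to cross the serial link.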
The advent of camera phones marks a new phase in embedded camera sales. By late 2009, the total number of camera
phones will exceed that of both conventional and digital cameras shipped since the invention of photography. The use in
mobile phones of applications such as video telephony, matrix code readers and biometrics requires a high degree of
component flexibility that image processors (IPs) have not, to date, been able to provide. For all these reasons,
programmable processor solutions have become essential. This paper presents several techniques geared to speeding up
image processors. It demonstrates that a twofold gain is possible for the complete image acquisition chain and the
enhancement pipeline downstream of the
video sensor. Such results confirm the potential of these computing systems for supporting future applications.
The management of mineral fertilisation using centrifugal spreaders requires the development of spread pattern
characterisation devices to improve the quality of fertiliser spreading. In order to predict the spread pattern deposition
using a ballistic flight model, several parameters need to be determined, especially the velocity of the granules when
they leave the spinning disc. This paper demonstrates that a motion blurred image acquired in the vicinity of the disc
with a low cost imaging system can provide the three dimensional components of the outlet velocity of the particles. A
binary image is first obtained using a recursive linear filter. Then an original method based on the Hough transform is
developed to identify the particle trajectories and to measure their horizontal outlet angles, not only in the case of
horizontal motion but also in the case of three dimensional motion. The method combines a geometric approach and
mechanical knowledge derived from spreading analysis. The outlet velocities are deduced from the outlet angle
measurements using kinematic relationships.
KEYWORDS: Digital signal processing, Field programmable gate arrays, Image processing, Algorithm development, Detection and tracking algorithms, Video, Multimedia, Signal processing, Data modeling, Databases
Modern field programmable gate array (FPGA) chips, with their large memory capacity and reconfigurability potential, are opening new frontiers in the rapid prototyping of embedded systems. With the advent of high-density FPGAs, it is now possible to implement a high-performance very long instruction word (VLIW) processor core in an FPGA. This paper describes research results on enabling the DSP TMS320 C6201 model for real-time image processing applications by exploiting FPGA technology. We present a modular DSP C6201 VHDL model with a variable instruction set. We call this new development the minimum mandatory modules (M3) approach. Our goals are to keep the flexibility of the DSP in order to shorten the development cycle, and to use the powerful FPGA resources to the full in order to increase real-time performance. Some common image processing algorithms and a face-tracking application for video sequences were created and validated on an FPGA VirtexII-2000 multimedia board using the proposed development cycle. Our results demonstrate that an algorithm can easily and optimally be specified, then automatically converted to VHDL and implemented on an FPGA device with system-level software. This makes our approach suitable for developing co-design environments. Our approach meets several criteria for co-design tools: flexibility, modularity, performance, and reusability. In this paper, the target VLIW processor is the DSP TMS320C6x; nonetheless, our design cycle can be generalized to other DSP processors.
KEYWORDS: Digital signal processing, Field programmable gate arrays, Image processing, Algorithm development, Detection and tracking algorithms, Multimedia, Embedded systems, Rapid manufacturing, Signal processing, Software development
Recent FPGA chips, with their large memory capacity and reconfigurability potential, have opened new frontiers for
rapid prototyping of embedded systems. With the advent of high-density FPGAs it is now feasible to implement a
high-performance VLIW processor core in an FPGA. We describe research results on enabling the DSP TMS320 C6201
model for real-time image processing applications by exploiting FPGA technology. The goals are, firstly, to keep the
flexibility of DSP in order to shorten the development cycle, and secondly, to use powerful available resources on FPGA
to a maximum in order to increase real-time performance. We present a modular DSP C6201 VHDL model which
contains only the bare minimum number of instruction sets, or modules, necessary for each target application. This
allows an optimal implementation on the FPGA. Some common algorithms of image processing were created and
validated on an FPGA VirtexII-2000 multimedia board using the proposed application development cycle. Our results
demonstrate that an algorithm can easily and optimally be specified, then automatically converted to VHDL and
implemented on an FPGA device with system-level software.
In Europe, centrifugal spreading is a widely used method for agricultural soil fertilization. In this broadcasting method, fertilizer particles fall onto a spinning disk, are accelerated by a vane, and afterward are ejected into the field. To predict and control the spread pattern, a low-cost, embeddable technology adapted to farm implements must be developed. We focus on obtaining the velocity and the direction of fertilizer granules when they begin their flight by means of a simple imaging system. We first show that the outlet angle of the vane is a bounded value and that its measurement provides the outlet velocity of the particle. Consequently, a simple camera unit is used in the vicinity of the spinning disk to acquire digital images on which trajectory streaks are recorded. Information is extracted using the Hough transform, which is specifically optimized to analyze these streaks and to measure the motion of the particles. The optimization takes into account prior mechanical knowledge and tackles the problem of Hough space quantization. The method is assessed on various simulated images and is used on real spreading images to characterize fertilizer particle trajectories.
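The voting scheme of a streak-oriented Hough transform can be sketched as follows (a minimal (ρ, θ) accumulator restricted to a bounded angle interval, reflecting the bounded-outlet-angle prior; the paper's optimized treatment of Hough-space quantization is not reproduced):

```python
import numpy as np

def hough_lines(points, thetas, rho_res=1.0, rho_max=100.0):
    """Accumulate votes in (theta, rho) space over a *bounded* list of
    candidate angles and return the dominant line's parameters."""
    n_rho = int(2 * rho_max / rho_res)
    acc = np.zeros((len(thetas), n_rho), dtype=int)
    for x, y in points:
        for i, t in enumerate(thetas):
            rho = x * np.cos(t) + y * np.sin(t)  # normal form of a line
            j = int(round((rho + rho_max) / rho_res))
            if 0 <= j < n_rho:
                acc[i, j] += 1
    i, j = np.unravel_index(np.argmax(acc), acc.shape)
    return thetas[i], -rho_max + j * rho_res

# Points on the streak y = x (theta = 135 deg, rho = 0), plus an outlier
pts = [(k, k) for k in range(60)] + [(3, 15)]
thetas = np.deg2rad(np.arange(90, 181))  # bounded search interval
theta, rho = hough_lines(pts, thetas)
```

On real streak images, the accumulator votes would come from the thresholded streak pixels, and the bounded θ interval keeps the accumulator small enough for embedded use.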
We present a classification work performed on industrial parts using artificial vision, a support vector machine (SVM), boosting, and a combination of classifiers. The object to be inspected is a coated heater used in television sets. Our project consists of detecting anomalies during production, as well as classifying the anomalies among 20 listed categories. The manufacturer's specifications require a minimum of ten inspections per second without a decrease in the quality of the produced parts. This problem is addressed by a classification system relying on real-time machine vision. To fulfill both the real-time and quality constraints, three classification algorithms and a tree-based classification method are compared. The first, hyperrectangle-based, proves to be well adapted to real-time constraints. The second is based on the AdaBoost algorithm, and the third, based on SVM, has better generalization power. Finally, a decision tree that improves classification performance is presented.
High-speed video cameras are powerful tools for investigating, for instance, the dynamics of fluids or the movements of mechanical parts in manufacturing processes. In the past years, the use of CMOS sensors instead of CCDs has made possible the development of high-speed video cameras offering digital outputs, readout flexibility and lower manufacturing costs. In this field, we designed a new fast CMOS camera with a resolution of 1280×1024 pixels at 500 fps. In order to transmit only the useful information from the fast images, we studied specific algorithms such as edge detection, wavelet analysis, image compression and object tracking. These image processing algorithms have been implemented in an FPGA embedded inside the camera, which allows us to process the fast images in real time.
We present development results on a system that performs mosaicking of panoramic faces. Our objective is to study the feasibility of panoramic face construction in real time. To do so, we built a simple acquisition system composed of five standard cameras, which together simultaneously capture five views of a face at different angles. We then chose an easily hardware-achievable algorithm, consisting of successive linear transformations, to compose a panoramic face from these five views. The method has been tested on a large number of faces. To validate our system, we also conducted a preliminary study on panoramic face recognition, based on the principal-component method. Experimental results show the feasibility and viability of our system, which allows us to envisage a later hardware implementation. We are also considering using our system for other applications, such as human expression categorization and fast 3-D face reconstruction.
High-speed video cameras are powerful tools for investigating, for instance, fluid dynamics or the movements of mechanical parts in manufacturing processes. In the past five years the use of CMOS sensors instead of CCDs has facilitated the development of high-speed video cameras offering digital outputs, readout flexibility, and lower manufacturing costs. Still, the huge data flow provided by the sensor cannot easily be transferred or processed and thus must generally be stored temporarily in fast local RAM. Since this RAM is size limited, the recording time in the camera is only a few seconds long. We sought an alternative solution that would allow continuous recording, and developed real-time image compression to reduce the data flow. We tested three algorithms: run-length encoding, block coding, and compression using wavelets. These compression algorithms have been implemented in an FPGA Virtex II-1000 and allow real-time compression factors between 5 and 10 with a PSNR greater than 35 dB. This compression factor allowed us to link a new high-speed CMOS video camera with a PC using a single USB 2.0 connection: the full flow of 500 fps in 1280×1024 format is transferred to the computer in real time.
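Of the three schemes tested, run-length encoding is the simplest to sketch (an illustrative lossless version for one pixel row; the compression factors of 5 to 10 quoted above involve further quantization choices not shown here):

```python
def rle_encode(row):
    """Run-length encoding of one pixel row: each maximal run of equal
    values is replaced by a (value, run_length) pair."""
    out = []
    run_val, run_len = row[0], 1
    for v in row[1:]:
        if v == run_val:
            run_len += 1
        else:
            out.append((run_val, run_len))
            run_val, run_len = v, 1
    out.append((run_val, run_len))
    return out

def rle_decode(pairs):
    """Exact inverse of rle_encode (the scheme itself is lossless)."""
    return [v for v, n in pairs for _ in range(n)]

row = [0, 0, 0, 255, 255, 0]
print(rle_encode(row))  # [(0, 3), (255, 2), (0, 1)]
```

A per-row encoder like this maps naturally onto an FPGA pipeline, since each row can be compressed independently as it streams out of the sensor.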
Generally, medical gamma cameras are based on the Anger principle. These cameras use a scintillator block coupled to a bulky array of photomultiplier tubes (PMTs). To simplify this, we designed a new integrated CMOS image sensor to replace the bulky PMT photodetectors. We studied several photodiode sensors including current-mirror amplifiers. These photodiodes have been fabricated using a 0.6 μm CMOS process from Austria Mikro Systeme (AMS). The sensor pixels in the array occupy, respectively, 1 mm × 1 mm, 0.5 mm × 0.5 mm and 0.2 mm × 0.2 mm, with a fill factor of 98%, and the total chip area is 2 mm². The sensor pixels show a logarithmic response to illumination and are capable of detecting very low light levels from a green light-emitting diode (less than 0.5 lux). These results make it possible to use our sensor in a new solid-state gamma camera concept.
Computer-assisted vision plays an ever greater role in our society, in various fields such as the safety of people and goods, industrial production, telecommunications and robotics. However, technical developments are still timid, slowed down by various factors: the cost of sensors, the lack of flexibility of the systems, the difficulty of rapidly developing complex and robust applications, and the lack of interaction among these systems themselves or with their environment. This paper describes the ICAM (Intelligent CAMera) project, a smart camera with real-time video processing capabilities. This camera associates a sensor with massively parallel outputs and an SIMD processor network to achieve very high-speed processing. The paper presents the first model of this device and the first results.
This paper describes the main principles of a vision sensor dedicated to detecting and tracking faces in video sequences. For this purpose, a current-mode CMOS active sensor has been designed using an array of pixels amplified by column current-mirror amplifiers. The circuit is simulated using Mentor Graphics software with the parameters of a 0.6 μm CMOS process. The design also includes a sequential control unit whose purpose is to capture subwindows at any location and of any size in the whole image.
This paper presents a classification work performed on industrial parts using artificial vision, SVM and a combination of classifiers. Prior to this study, defect detection was performed by human inspectors; unfortunately, the time involved in the inspection procedure was far too long and the misclassification rate too high. Our project consists in detecting anomalies under the manufacturer's production and cost constraints, as well as in classifying the anomalies among twenty listed categories. The manufacturer's specifications require a minimum of ten inspections per second without a decrease in the quality of the produced parts. This problem can be solved with a classification system relying on real-time machine vision. To fulfill both the real-time and quality constraints, two classification algorithms and a tree-based classification method were compared. The first, hyperrectangle-based, proved to be well adapted to real-time constraints. The second, based on the Support Vector Machine (SVM), is more robust but also more complex and more demanding in computing time. Finally, naïve rules were defined to build a decision tree and combine it with one of the previous classification algorithms.
KEYWORDS: Digital signal processing, Image processing, Signal processing, Embedded systems, Neural networks, Video, Detection and tracking algorithms, Video processing, Facial recognition systems, Distance measurement
In this paper, we present implementations of a pattern recognition algorithm based on an RBF (Radial Basis Function) neural network. Our aim is to build an efficient system that performs real-time face tracking and identity verification in natural video sequences. Hardware implementations have been realized on an embedded system developed in our laboratory, based on a DSP (Digital Signal Processor) TMS320C6x. Optimizing the implementations allows us to reach a processing speed of 4.8 images (240×320 pixels) per second with a correct face tracking and identity verification rate of 95%.
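The decision stage of an RBF network can be sketched as follows (a minimal Gaussian-kernel classifier over prototype vectors; the prototypes, kernel width and voting rule here are illustrative assumptions, not those of the paper):

```python
import numpy as np

def rbf_classify(x, prototypes, labels, sigma=1.0):
    """Each hidden unit fires as a Gaussian of the distance between the
    input and its prototype; the label with the largest summed
    activation wins."""
    d2 = ((prototypes - x) ** 2).sum(axis=1)     # squared distances
    act = np.exp(-d2 / (2 * sigma ** 2))         # RBF activations
    scores = {}
    for a, lab in zip(act, labels):
        scores[lab] = scores.get(lab, 0.0) + a
    return max(scores, key=scores.get)

# Two hypothetical "face" prototypes and one "background" prototype
protos = np.array([[0.0, 0.0], [0.2, 0.1], [5.0, 5.0]])
labels = ["face", "face", "background"]
pred = rbf_classify(np.array([0.1, 0.0]), protos, labels)
```

In the actual system the inputs would be feature vectors extracted from candidate image windows rather than 2-D points.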
In this paper, we propose a method for improving the implementation of the support vector machine decision rule, applied to real-time image segmentation. We obtain very high-speed decisions (approximately 10 ns per pixel), which can be useful for detecting anomalies on manufactured parts. We propose an original combination of classifiers allowing fast and robust classification applied to image segmentation. The SVM is used in a first step to pre-process the training set, rejecting ambiguous samples. The hyperrectangle-based learning algorithm is then applied to the SVM-classified training set. We show that the hyperrectangle method matches the SVM in terms of performance, at a lower implementation cost using reconfigurable computing. We review the principles of the two classifiers, the hyperrectangle-based method and the SVM, and present our combination method applied to the image segmentation of an industrial part.
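The combination can be sketched as follows (a simplified model: the hyperrectangles are plain per-class bounding boxes learned from a training set assumed to have already been cleaned by the SVM step; real hyperrectangle learners split boxes on conflicts):

```python
import numpy as np

def fit_hyperrectangles(X, y):
    """Learn one axis-aligned hyperrectangle (bounding box) per class
    from an SVM-filtered training set."""
    boxes = {}
    for lab in set(y):
        pts = X[np.array(y) == lab]
        boxes[lab] = (pts.min(axis=0), pts.max(axis=0))
    return boxes

def classify(x, boxes, default="reject"):
    # Pure membership tests: cheap comparators, well suited to
    # reconfigurable hardware and very low per-pixel latency
    for lab, (lo, hi) in boxes.items():
        if np.all(lo <= x) and np.all(x <= hi):
            return lab
    return default

# Hypothetical 2-D training set, assumed already cleaned by the SVM
X = np.array([[0, 0], [1, 1], [5, 5], [6, 6]], dtype=float)
y = ["ok", "ok", "defect", "defect"]
boxes = fit_hyperrectangles(X, y)
```

The point of the combination is visible even in this sketch: once the SVM has removed ambiguous samples, the remaining decision reduces to interval comparisons per feature.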
This paper describes a system capable of performing face detection and tracking in video sequences. In developing this system, we used an RBF neural network to locate and categorize faces of different sizes. The face tracker can be applied to a video communication system that allows users to move freely in front of the camera while communicating. The system works in several stages. First, we extract useful parameters by low-pass filtering to compress the data and we build our codebook vectors. Then the RBF neural network performs face detection and tracking on a dedicated board.
This paper describes a real-time vision system that automatically detects the presence of faces, localizes them, and tracks them in video sequences. We also verify the identities of the faces. These processes combine image processing techniques with neural network methods. Tracking is performed with a prediction-verification strategy that uses the dynamic information from the detection stage. The system has been evaluated quantitatively on 8 video sequences. The robustness of the method has been tested on images under various lighting conditions. We also present a complexity analysis of this algorithm with a view to a real-time implementation on an FPGA-based architecture.
KEYWORDS: Field programmable gate arrays, Image processing, Finite impulse response filters, Data processing, Digital filtering, Optical filters, Image acquisition, Signal processing, Image restoration, Analog electronics
FPGA components are widely used today to perform various algorithms (e.g. digital filtering) in real time. The emergence of dynamically reconfigurable (DR) FPGAs has made it possible to reduce the number of resources necessary to carry out an image processing application (a chain of tasks). We present in this article an image processing application (image rotation) that exploits the FPGA's dynamic reconfiguration feature. A comparison is made between dynamic and static reconfiguration using two criteria: cost and performance. To test the validity of our approach in terms of algorithm-architecture matching, we built ARDOISE, an AT40K40-based board.
This paper deals with the restoration of the shape of an object observed through atmospheric turbulence with a high-resolution infrared imaging device. The propagation path is quite long (a few tens of kilometers) and the image is thus disturbed. A sequence of short-exposure images of the object of interest is recorded. The shape of the object fluctuates randomly during the sequence, but its edges remain sharp thanks to the very short exposure time. A Bayesian analysis of the Fourier descriptors associated with the edges shows that the optimal shape is the one corresponding to the mean Fourier descriptors. We thus propose two ways to estimate this shape. The first consists in matching point-to-point each pair of successive edges in the sequence and taking the average position of each point. The second consists in applying an active contour (a snake) to the images. This contour evolves with the object shape during the sequence; from the set of positions of its nodes, the optimal shape can be computed quite easily.
This study was carried out to improve industrial machines that analyze planks, measuring their width and detecting excessive defects, by means of a computer vision system. These machines are currently controlled by software running on PCs. The aim of our work is to design a hardware board to increase the processing speed.
The aim of our work is to implement a motion estimation system for image sequence processing on DSPs: fast motion estimation based on Gabor spatio-temporal filters. Our approach consists in calculating optical flow using an energy-based method, called combined filtering, which associates the energy responses of Gabor spatio-temporal filters organized in triads. For this purpose, we apply a technique developed by the LIS laboratory in France, inspired by Heeger's architecture. To reduce the computation time, we also present a parallel implementation of the algorithm on a multi-DSP architecture using the SynDEx tool, a programming environment that generates optimized distributed real-time executives. We show that a speed-up factor of 1.86 has been obtained.
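The building block of such energy-based methods is the quadrature pair: an even (cosine) and an odd (sine) Gabor filter whose squared responses sum to a phase-independent energy. The sketch below shows that building block in 1-D only (the actual method uses spatio-temporal filters and triads of them); function names and parameters are illustrative, not from the paper.

```python
import math

def gabor_energy(signal, freq, sigma):
    # Energy of a 1-D Gabor quadrature pair: squared even (cosine) response
    # plus squared odd (sine) response, evaluated over a sliding window.
    half = int(3 * sigma)
    taps = range(-half, half + 1)
    env = [math.exp(-t * t / (2 * sigma * sigma)) for t in taps]
    even = [e * math.cos(2 * math.pi * freq * t) for e, t in zip(env, taps)]
    odd = [e * math.sin(2 * math.pi * freq * t) for e, t in zip(env, taps)]
    out = []
    for i in range(half, len(signal) - half):
        win = signal[i - half:i + half + 1]
        re = sum(w * c for w, c in zip(win, even))
        im = sum(w * s for w, s in zip(win, odd))
        out.append(re * re + im * im)  # phase-independent energy
    return out
```

The energy peaks when the filter's tuning frequency matches the signal; in the spatio-temporal case that peak location encodes the local image velocity.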
KEYWORDS: Digital signal processing, Facial recognition systems, Image processing, Detection and tracking algorithms, Computer architecture, Edge detection, Fuzzy logic, Image filtering, Data modeling, Data communications
The aim of our work is to implement a system of automatic face image processing on DSPs: face detection in an image, face recognition and face identification. The first step is to localize the face in the image. Our approach consists in approximating the oval shape of the face with an ellipse and computing the coordinates of the center of the ellipse. For this purpose, we explore a new version of the Hough transform: the Fuzzy Generalized Hough Transform. To reduce the computation time, we also present several parallel implementations of the algorithm on a multi-DSP architecture using the SynDEx tool, a programming environment that generates optimized distributed real-time executives. We show that a speed-up factor of 1.7 has been obtained.
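The Hough principle behind center localization can be sketched very simply: for a centrally symmetric contour such as an ellipse, the midpoint of each pair of symmetric edge points is the center, so voting over pair midpoints peaks there. This is a crisp, non-fuzzy stand-in for illustration only; the paper's Fuzzy Generalized Hough Transform additionally spreads each vote over neighboring accumulator cells.

```python
from collections import Counter
from itertools import combinations

def hough_center(edge_points):
    # Vote the midpoint of every pair of edge points into an accumulator.
    # By central symmetry of the ellipse, the true center collects the
    # most votes (naive O(n^2) sketch, integer accumulator cells).
    acc = Counter()
    for (x1, y1), (x2, y2) in combinations(edge_points, 2):
        acc[((x1 + x2) // 2, (y1 + y2) // 2)] += 1
    return acc.most_common(1)[0][0]
```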
This paper describes a complete fast imaging system applied to human movement analysis. The main goal of this system, composed of a fast CCD camera associated with a real-time processing board, is to track and analyze the movement of a human operator. The system runs at 300 frames per second, which was made possible by the use of FPGA circuits.
The aim of our work is to implement real-time, high-quality image rotation on an FPGA board. The method we use is based on M. Unser's work and consists in applying a B-spline interpolator. The difficulty of this problem lies in the relatively limited integration capacity of FPGAs. To solve it, we sought to determine the minimum number of bits needed to code the filter while keeping good accuracy at the filter output. In this article, we recall a few definitions about B-spline functions and present how we use B-spline interpolation for the image rotation problem. Then, we describe how we calculate the probability density function of the output error in order to determine the filter data coding.
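The trade-off studied above can be illustrated with the standard cubic B-spline kernel: tabulating its coefficients in fixed point with b fractional bits bounds each coefficient error by half an LSB (2^-(b+1)). This sketch (function names ours) shows the kernel and a crude worst-case coefficient-error check, a much simpler proxy than the paper's probability-density analysis of the output error.

```python
def bspline3(x):
    # Cubic B-spline kernel, the interpolation basis used in Unser's method.
    x = abs(x)
    if x < 1:
        return 2 / 3 - x * x + x ** 3 / 2
    if x < 2:
        return (2 - x) ** 3 / 6
    return 0.0

def quantize(v, bits):
    # Fixed-point rounding to `bits` fractional bits, as in an FPGA table.
    scale = 1 << bits
    return round(v * scale) / scale

def max_kernel_error(bits, steps=256):
    # Worst-case coefficient error over the kernel support [0, 2] when the
    # coefficients are coded with `bits` fractional bits.
    return max(abs(bspline3(i * 2 / steps) - quantize(bspline3(i * 2 / steps), bits))
               for i in range(steps + 1))
```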
This article describes the ongoing development of a complete fast imaging microsystem, whose main aim is to provide, for biological applications and at low cost, a high-speed camera (2400 frames per second) and its associated data storage system. This microsystem uses a CCD image sensor with multiple parallel outputs and will be connected to a PC-compatible computer.
KEYWORDS: Image processing, Image filtering, Digital filtering, Radiography, Smoothing, Signal to noise ratio, Defect detection, Absorbance, Image segmentation, Linear filtering
We propose in this article a complete acquisition and processing chain for weld radiographs, from which cavity-shaped defects must be highlighted. The main features of the films are a high optical density, a strongly inhomogeneous background in both directions and a weak contrast. We present a study of three trend removal methods (least-squares fitting, Fourier smoothing and adjustment splines) that improve the signal-to-noise ratio of the edge images. The segmentation is derived from the edge image by means of a constrained watershed algorithm. The constraints are deduced automatically from the shape of the histogram.
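The simplest of the three trend-removal methods, least-squares fitting, can be sketched on a 1-D intensity profile: fit a line through the background and subtract it, so the weak defect contrast stands out against a flat residual. Function names are ours and a first-order fit is used for brevity; a radiograph background would typically need a higher-order surface in two directions.

```python
def remove_linear_trend(profile):
    # Closed-form least-squares line fit y = a + b*x over the profile,
    # subtracted so that only deviations from the background remain.
    n = len(profile)
    xs = range(n)
    sx = sum(xs)
    sy = sum(profile)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, profile))
    b = (n * sxy - sx * sy) / (n * sxx - sx * sx)  # slope
    a = (sy - b * sx) / n                          # intercept
    return [y - (a + b * x) for x, y in zip(xs, profile)]
```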
This paper describes two complete fast imaging systems using a commercial Charge Coupled Device (CCD). It covers their two different storage systems (analog and digital) and describes a new high-speed sensor built as an Application Specific Integrated Circuit (ASIC) in 1.2 micrometer Complementary Metal Oxide Semiconductor (CMOS) technology. The first system has been applied to biological research.
We present in this paper an image processing system to analyze and detect defects on metallic caps moving at a speed of ten caps per second. We explain in detail the image acquisition method, the choice of the process parameters (in particular an edge detection parameter), the software development for a microcomputer (486, 33 MHz), and how the system works. We also propose another solution which performs better in terms of processing time and image resolution. It uses an ASIC device that we have developed, which performs edge detection in real time (25 images per second). This work was carried out under an industrial contract.
In the field of the human sciences, many applications need to know the kinematic characteristics of human movements. Psychology associates these characteristics with control mechanisms; sport and biomechanics associate them with the performance of the sportsman or of the patient. Knowing the motion properties, trainers or doctors can correct the subject's gesture to obtain a better performance. Roberton's studies show the evolution of children's motion [2]. Several investigation methods are able to measure human movement, but today most studies are based on image processing. These systems often work at the TV standard (50 frames per second), which only permits the study of very slow gestures. A human operator then analyzes the digitized film sequence manually, a very expensive, especially long and imprecise operation.
On these different grounds, many human movement analysis systems have been implemented. They consist of:
- markers, which are fixed to the anatomically interesting points on the subject in motion,
- image compression, which is the art of coding picture data; generally the compression is limited to the calculation of the centroid coordinates of each marker.
These systems differ from one another in image acquisition and marker detection.
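The centroid "compression" step described above reduces each frame to a handful of coordinates. A minimal single-marker sketch (function name and threshold parameter are ours, assuming one bright marker per frame; real systems first segment each marker's pixel blob):

```python
def marker_centroid(image, threshold):
    # Collect the coordinates of pixels brighter than the threshold and
    # return their mean position: the marker centroid for this frame.
    pts = [(x, y) for y, row in enumerate(image)
                  for x, v in enumerate(row) if v > threshold]
    n = len(pts)
    return (sum(x for x, _ in pts) / n, sum(y for _, y in pts) / n)
```

Storing only these centroids per frame, instead of the frames themselves, is what makes high frame rates tractable for movement analysis.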