Using an infrared image sequence, how can one make the inner structure of a sample more visible without human supervision or knowledge of the context? This is well known to be a challenging task, partly because of the great number of external events and factors that can influence the acquisition. This paper introduces a solution to this question. The sequence of infrared images is processed using the monogenic signal theory in order to extract the phase congruency. In the 1D case, the analytic signal built with the Hilbert transform preserves the Hermitian symmetry of the Fourier transform; in 2D, however, this property is lost by the approximations made in 2D extensions of the analytic signal. The monogenic signal theory replaces the Hilbert transform with the Riesz transform, which maintains the Hermitian symmetry in 2D. Phase congruency can then be described as a feature detection approach: assuming that the symmetry or asymmetry of the local phase represents the features present at one scale, phase congruency measures how similar the phase values are across scales. The proposed approach is invariant to image contrast, which makes it suitable for practical applications, and it gives valuable results even with very noisy sequences. It has been evaluated using a referenced Carbon Fiber Reinforced Plastic sample.
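The multiscale phase-congruency computation described above can be sketched as follows. This is a minimal, illustrative implementation, not the authors' actual method: a crude difference-of-Gaussians band-pass stands in for the log-Gabor filters typically used, and the scales are arbitrary.

```python
import numpy as np

def phase_congruency(img, scales=(4, 8, 16), eps=1e-6):
    """Toy phase congruency from the monogenic signal.

    For each scale, a band-pass version of the image and its two Riesz
    components are computed in the frequency domain.  Phase congruency is
    the magnitude of the summed responses divided by the sum of their
    magnitudes: it approaches 1 where all scales agree in phase and, being
    a ratio, is invariant to image contrast.
    """
    rows, cols = img.shape
    u = np.fft.fftfreq(rows)[:, None]
    v = np.fft.fftfreq(cols)[None, :]
    w = np.sqrt(u ** 2 + v ** 2)
    w[0, 0] = 1.0                                  # avoid division by zero at DC
    riesz1, riesz2 = 1j * u / w, 1j * v / w        # Riesz kernels in frequency
    F = np.fft.fft2(img)
    sf = sr1 = sr2 = samp = 0.0
    for s in scales:
        # crude difference-of-Gaussians band-pass (stand-in for log-Gabor)
        band = np.exp(-(w * s) ** 2) - np.exp(-(w * 2 * s) ** 2)
        f = np.real(np.fft.ifft2(F * band))
        r1 = np.real(np.fft.ifft2(F * band * riesz1))
        r2 = np.real(np.fft.ifft2(F * band * riesz2))
        sf, sr1, sr2 = sf + f, sr1 + r1, sr2 + r2
        samp = samp + np.sqrt(f ** 2 + r1 ** 2 + r2 ** 2)
    return np.sqrt(sf ** 2 + sr1 ** 2 + sr2 ** 2) / (samp + eps)
```

By the triangle inequality the result always lies in [0, 1], independent of the image contrast.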
Real-time image and video processing applications require skilled architects, and recent trends in the hardware platform make the design and implementation of these applications increasingly complex. Many frameworks and libraries have been proposed or commercialized to simplify the design and tuning of real-time image processing applications. However, they tend to lack flexibility, because they are normally oriented toward particular types of applications, or they impose specific data processing models such as the pipeline. Other issues include large memory footprints, difficulty of reuse, and inefficient execution on multicore processors. We present a novel software architecture for time-constrained machine vision applications that addresses these issues. The architecture is divided into three layers. The platform abstraction layer provides a high-level application programming interface for the rest of the architecture. The messaging layer provides a message-passing interface based on a dynamic publish/subscribe pattern. Topic-based filtering, in which messages are published to topics, is used to route messages from publishers to the subscribers interested in a particular type of message. The application layer provides a repository for reusable application modules designed for machine vision applications. These modules, which include acquisition, visualization, communication, user interface, and data processing, take advantage of the power of well-known libraries such as OpenCV, Intel IPP, or CUDA. Finally, the proposed architecture is applied to a real machine vision application: a jam detector for steel pickling lines.
KEYWORDS: Image processing, Computer architecture, Video processing, Data processing, Data modeling, Human-machine interfaces, Image acquisition, Computer programming, Sensors, Real time image processing
Real-time image and video processing applications require skilled architects, and recent trends in the hardware
platform make the design and implementation of these applications increasingly complex. Many frameworks and
libraries have been proposed or commercialized to simplify the design and tuning of real-time image processing
applications. However, they tend to lack flexibility because they are normally oriented towards particular types
of applications, or they impose specific data processing models such as the pipeline. Other issues include large
memory footprints, difficulty for reuse and inefficient execution on multicore processors. This paper presents a
novel software architecture for real-time image and video processing applications which addresses these issues.
The architecture is divided into three layers: the platform abstraction layer, the messaging layer, and the
application layer. The platform abstraction layer provides a high level application programming interface for
the rest of the architecture. The messaging layer provides a message passing interface based on a dynamic
publish/subscribe pattern. Topic-based filtering, in which messages are published to topics, is used to route
messages from publishers to the subscribers interested in a particular type of message. The application
layer provides a repository for reusable application modules designed for real-time image and video processing
applications. These modules, which include acquisition, visualization, communication, user interface and data
processing modules, take advantage of the power of other well-known libraries such as OpenCV, Intel IPP,
or CUDA. Finally, we present different prototypes and applications to show the possibilities of the proposed
architecture.
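The dynamic publish/subscribe messaging described above can be illustrated with a minimal topic-based broker; the class and topic names below are hypothetical and not taken from the paper:

```python
from collections import defaultdict
from typing import Any, Callable

class MessageBroker:
    """Minimal dynamic topic-based publish/subscribe broker."""

    def __init__(self) -> None:
        self._subscribers = defaultdict(list)

    def subscribe(self, topic: str, callback: Callable[[Any], None]) -> None:
        """Register a callback for a topic; subscriptions are dynamic."""
        self._subscribers[topic].append(callback)

    def publish(self, topic: str, message: Any) -> None:
        """Route a message only to the subscribers of its topic."""
        for callback in self._subscribers[topic]:
            callback(message)

broker = MessageBroker()
frames = []
broker.subscribe("camera/frames", frames.append)   # hypothetical topic name
broker.publish("camera/frames", {"id": 1})         # delivered to `frames`
broker.publish("camera/stats", {"fps": 25})        # no subscriber: dropped
```

Because publishers and subscribers only share topic names, modules such as acquisition, visualization, and data processing stay decoupled and reusable.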
Nowadays machine vision applications require skilled users to configure, tune, and maintain them. Because such
users are scarce, the robustness and reliability of applications usually suffer significantly. Autonomic
computing offers a set of principles such as self-monitoring, self-regulation, and self-repair which can be used
to partially overcome those problems. Systems which include self-monitoring observe their internal states, and
extract features about them. Systems with self-regulation are capable of regulating their internal parameters
to provide the best quality of service depending on the operational conditions and environment. Finally, self-repairing
systems are able to detect anomalous working behavior and to provide strategies to deal with such
conditions. Machine vision applications are the perfect field to apply autonomic computing techniques. This
type of application has strong constraints on reliability and robustness, especially when working in industrial
environments, and must provide accurate results even under changing conditions such as luminance, or noise.
In order to exploit the autonomic approach of a machine vision application, we believe the architecture of
the system must be designed using a set of orthogonal modules. In this paper, we describe how autonomic
computing techniques can be applied to machine vision systems, using as an example a real application: 3D
reconstruction in harsh industrial environments based on laser range finding. The application is based on
modules with different responsibilities at three layers: image acquisition and processing (low level), monitoring
(middle level) and supervision (high level). High level modules supervise the execution of low-level modules.
Based on the information gathered by mid-level modules, they regulate low-level modules in order to optimize
the global quality of service, and tune the module parameters based on operational conditions and on the
environment. Regulation actions involve modifying the laser extraction method to adapt to changing conditions
in the environment.
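The three-layer regulation loop described above can be sketched as follows; the module names, the confidence measure, and the threshold are all illustrative assumptions, and the extraction methods are stubs:

```python
class LaserExtractor:
    """Low-level module: extracts the laser stripe (stub)."""
    def __init__(self):
        self.method = "peak"                 # fast default extraction method

class Monitor:
    """Mid-level module: self-monitoring, extracts features of internal state."""
    def confidence(self, stripe_coverage):
        # e.g. fraction of image columns where a stripe point was found
        return stripe_coverage

class Supervisor:
    """High-level module: self-regulation of low-level parameters."""
    def __init__(self, extractor, threshold=0.6):
        self.extractor = extractor
        self.threshold = threshold

    def regulate(self, confidence):
        # switch to a more robust (but slower) extraction method whenever
        # the monitored confidence drops below the threshold
        if confidence < self.threshold:
            self.extractor.method = "correlation"
        else:
            self.extractor.method = "peak"
```

The point of the sketch is the orthogonality: the low-level module knows nothing about regulation, and the supervisor acts only on monitored features.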
Advances in the image processing field have brought new methods which are able to perform complex tasks robustly.
However, in order to meet constraints on functionality and reliability, imaging application developers often
design complex algorithms with many parameters which must be finely tuned for each particular environment.
The best approach for tuning these algorithms is to use an automatic training method, but the computational
cost of this kind of training is prohibitive, making it unfeasible even on powerful machines. The same
problem arises when designing testing procedures. This work presents methods to train and test complex image
processing algorithms in parallel execution environments. The approach proposed in this work is to use existing
resources in offices or laboratories, rather than expensive clusters. These resources are typically non-dedicated,
heterogeneous, and unreliable. The proposed methods have been designed to deal with all of these issues. Two methods are proposed: intelligent training based on genetic algorithms and PVM, and a full factorial design based on grid computing, which can be used for training or testing. These methods are capable of harnessing the available computational power, giving more work to more powerful machines while taking their unreliable nature into account. Both methods have been tested using real applications.
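One plausible way to give more work to more powerful machines, as described above, is to split the task list in proportion to each machine's relative power. The following sketch is an assumption about the load-balancing scheme, not the paper's implementation; fault tolerance (e.g. re-queuing tasks from a failed machine) is omitted:

```python
def distribute(tasks, machines):
    """Split a list of tasks among machines in proportion to their
    relative computational power (each machine is a dict with a
    hypothetical "name" and "power" entry)."""
    total = sum(m["power"] for m in machines)
    shares, assigned = {}, 0
    for m in machines[:-1]:
        n = round(len(tasks) * m["power"] / total)
        shares[m["name"]] = tasks[assigned:assigned + n]
        assigned += n
    # the last machine absorbs the rounding remainder
    shares[machines[-1]["name"]] = tasks[assigned:]
    return shares
```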
Infrared imaging is based on the measurement of radiation of an object and its conversion to temperature. A vital parameter of the conversion procedure is emissivity, which defines the capability of a material to absorb and radiate energy. For most applications, emissivity is assumed to be constant. In applications measuring the temperature of objects with high emissivity, this is not problematic, as slight variations in the chosen emissivity value cause only minor changes in the resulting surface temperatures. However, when emissivities are low, as in steel strips, considering emissivity as a constant can lead to significant errors in temperature measurement. To overcome problems generated by variations in emissivity, one solution is to measure temperature where the steel strip forms a wedge, acting as a cavity. In the deepest part of the wedge, emissivity is sufficiently close to one. This work presents a real-time image processing system to acquire infrared line scans for steel strips using the wedge method. The proposed system confronts two challenges: extracting infrared line scans in real time from the deepest part of the wedge in rectangular infrared images, and translating pixels from the line scan into real-world units.
Flatness is a major geometrical feature of rolled products specified by both production and quality needs. Real-time inspection of flatness is the basis of automatic flatness control. Industrial facilities where rolled products are manufactured have adverse environments that affect artificial vision systems. We present a low-cost flatness inspection system based on optical triangulation by means of a laser stripe emitter and a CMOS matrix camera, designed to be part of an online flatness control system. An accurate and robust method to extract a laser stripe in adverse conditions over rough surfaces is proposed and designed to be applied in real time. Laser extraction relies on a local and a global search. The global search is based on an adjustment of curve segments based on a split-and-merge technique. A real-time recording method of the input data of the flatness inspection system is proposed. It stores information about manufacturing conditions for an offline tuning of the laser stripe extraction method using real data. Flatness measurements carried out over steel strips are evaluated quantitatively and qualitatively. Moreover, the real-time performance of the proposed system is analyzed.
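The split step of a split-and-merge adjustment of curve segments, as used in the laser stripe extraction above, can be sketched in the style of Ramer-Douglas-Peucker. This is an illustrative reconstruction, not the paper's exact method; the merge step, which joins adjacent segments that still fit within tolerance, is omitted:

```python
def split(points, tol):
    """Split step of a split-and-merge polyline fit: approximate the
    stripe points by the segment joining the endpoints and recursively
    split at the point of maximum perpendicular deviation."""
    (x0, y0), (x1, y1) = points[0], points[-1]
    dx, dy = x1 - x0, y1 - y0
    norm = (dx * dx + dy * dy) ** 0.5 + 1e-12
    dist = [abs(dx * (y - y0) - dy * (x - x0)) / norm for x, y in points]
    i = max(range(len(points)), key=dist.__getitem__)
    if dist[i] <= tol or len(points) < 3:
        return [(points[0], points[-1])]
    return split(points[:i + 1], tol) + split(points[i:], tol)
```

Working on whole segments rather than isolated maxima is what makes this global search robust to rough surfaces, where a purely local peak search would fail.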
For many image processing applications, edge detection is a very important task that needs to be assessed, since the success or failure of these applications depends on the performance of this task. Assessment of edge detection is largely subjective; however, current trends in the image processing community are moving toward objective assessment. In recent years, many different methods have been proposed to assess edge detection, although no agreement has been reached as to the proper method, since previous comparisons have produced contrasting results. A comparison of assessment methods using an objective approach is presented. Methods are compared by analyzing the results of an optimization procedure using genetic algorithms with the assessment methods as fitness functions. The comparison is based on the premise that better assessment methods will lead the optimization procedure to produce better results. A cross-validation is carried out to compare the results obtained using one assessment method with others. Conclusions provide recommendations for authors interested in assessing edge detection algorithms.
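The comparison protocol above can be sketched as follows, with a toy random search standing in for the genetic algorithm and hypothetical metric functions used as fitness:

```python
import random

def optimize(fitness, n_iters=200, seed=0):
    """Toy random-search optimizer (standing in for the paper's genetic
    algorithm): tunes a single edge-detector parameter in [0, 1] to
    maximize the assessment method passed in as the fitness function."""
    rng = random.Random(seed)
    best_x, best_f = 0.0, float("-inf")
    for _ in range(n_iters):
        x = rng.uniform(0.0, 1.0)
        f = fitness(x)
        if f > best_f:
            best_x, best_f = x, f
    return best_x

def cross_validate(x, metrics):
    """Cross-validation step: score a result tuned with one assessment
    method using every other assessment method."""
    return {name: metric(x) for name, metric in metrics.items()}
```

Under the stated premise, the assessment method whose tuned result also scores well under the other methods is the better one.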
Image segmentation is one of the most important tasks of image processing, as it provides information used to interpret and analyze image contents. The tuning of the parameters of the segmentation method can be considered an optimization problem by defining an objective function based on the similarity of the segmented image and the ground truth. The problem becomes harder to solve when the ground truth is known only under uncertainty. A solution is proposed for the design and the automatic tuning of a real-time segmentation method for infrared images where the ground truth is uncertain. The proposed solution consists of three steps: the proposal of a segmentation method adapted for the considered images, the definition of an objective function that takes the uncertainty of the ground truth into account, and the automatic tuning of the segmentation method by means of genetic algorithms.
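One plausible way to define an objective function under an uncertain ground truth, as described above, is to score a pixel as correct whenever its label falls inside the ground-truth uncertainty interval. This formalization is an assumption for illustration, not the paper's exact definition:

```python
def uncertain_objective(segmented, gt_low, gt_high):
    """Fraction of pixels whose segmented label lies within the
    uncertainty interval [gt_low, gt_high] of the ground truth, so labels
    consistent with any plausible truth are not penalized."""
    hits = sum(lo <= s <= hi for s, lo, hi in zip(segmented, gt_low, gt_high))
    return hits / len(segmented)
```

An objective of this shape can be handed directly to a genetic algorithm as the fitness for tuning the segmentation parameters.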
Image filtering is a very important task in any image-processing system, since the output of the filtering constitutes the primary input to high-level vision, which then utilizes domain-specific knowledge to interpret and analyze the image contents. We propose a new approach to filtering the noise of a stream of thermographic line scans. The filter is designed to be applied in real time and is divided into two components: an intrascan filter and an interscan filter. The intrascan filter is based on spatial overlapping which occurs when using high acquisition rates. The interscan filter is based on edge detection and is able to adapt the way it produces the filtered output based on the detected edges. A procedure to automatically tune the parameters of the interscan filter is described and applied. The results of the proposed filters are compared to those obtained using the average and the median filter. In addition, the real-time performance of the proposed filters is analyzed by measuring the time taken by the tasks involved in their application.
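The edge-adaptive inter-scan filter described above might look like the following sketch: each pixel is smoothed across successive scans unless a jump larger than a threshold is detected, in which case the new value passes through. The constants are illustrative, not the tuned values from the paper:

```python
def interscan_filter(scans, alpha=0.25, edge_thresh=5.0):
    """Edge-adaptive temporal (inter-scan) filter sketch: each pixel is
    exponentially smoothed across successive line scans, but the filter
    resets wherever the new value jumps by more than edge_thresh, so real
    temperature edges are preserved instead of being smeared."""
    filtered = list(scans[0])
    out = [list(filtered)]
    for scan in scans[1:]:
        for i, v in enumerate(scan):
            if abs(v - filtered[i]) > edge_thresh:
                filtered[i] = v                      # edge: pass through
            else:
                filtered[i] = (1 - alpha) * filtered[i] + alpha * v
        out.append(list(filtered))
    return out
```

Unlike a plain temporal average or median, this scheme does not blur genuine edges between temperature zones, which is exactly the behavior the abstract attributes to the interscan filter.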
Infrared imaging is based on the measurement of the radiation of an object and its conversion to temperature. A very
important parameter of the conversion procedure is emissivity, which defines the capability of a material to absorb and
radiate energy. For most applications, emissivity is assumed to be constant. However, when infrared images are taken
from steel strips in an industrial environment, where the measurement is influenced by thermal reflections of surrounding
objects, the consideration of emissivity as a constant can lead to large errors in temperature measurement. To overcome
problems generated by variations in emissivity, one solution is to measure temperature where the steel strip forms a
wedge, acting as a cavity. In the deepest part of the wedge, emissivity is practically one, making the emissivity problems disappear.
This work presents a real-time image processing system to acquire infrared line scans for steel strips using the wedge
method. The proposed system deals with two main problems: infrared line scans must be extracted in real time from the
deepest part of the wedge in the rectangular infrared images, and pixels belonging to the line scan must be translated to
real-world units.
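The two problems above can be illustrated with a toy sketch: a stand-in heuristic that picks the image row of maximum mean radiance as the wedge-apex line scan (near the apex the cavity effect drives effective emissivity toward one, so apparent radiance peaks there), and a linear pixel-to-millimetre mapping with placeholder calibration constants. Neither function is the paper's actual method:

```python
def extract_line_scan(image):
    """Stand-in heuristic: return the row with the highest mean radiance
    as the wedge-apex line scan (the cavity effect makes the apex appear
    brightest)."""
    best = max(range(len(image)), key=lambda r: sum(image[r]) / len(image[r]))
    return image[best]

def pixel_to_world(col, origin_mm=0.0, mm_per_pixel=1.25):
    """Linear pixel-to-millimetre mapping; both calibration constants are
    placeholders for values obtained from a real calibration."""
    return origin_mm + col * mm_per_pixel
```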
Flatness inspection is a fundamental issue in the quality control performed during the manufacturing of steel strips. The quality requirements for such products have been increasing over the last years, and nowadays new flatness measurement techniques are needed to fulfill these requirements. This paper introduces the concept of flatness and the most common flatness metrics used in the industry. Our work focuses on the application of well-known laser ranging techniques to the design, construction, and testing of a flatness measurement and inspection system capable of acquiring the true shape of a steel strip in real time and calculating its flatness indexes so they can be used in the quality inspection of steel products.
For testing the system, a steel strip simulator has been constructed that allows us to generate any possible flatness defect with a known magnitude and measure it with our flatness measurement system. The agreement between the real magnitude of the defects and the measured ones is better than 4.5 I-Units in the range of 0 to 100 I-Units. The system prototype has been installed in the conditioning line of Aceralia (steel manufacturer) to test the proposed solution under real industrial conditions.
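The I-Unit flatness metric mentioned above can be computed from measured fibre lengths, or approximated for an ideal sinusoidal buckle with the standard formula I ≈ (π²/4)(H/P)² × 10⁵; the following sketch is illustrative:

```python
import math

def i_units_from_lengths(fibre_len, ref_len):
    """I-Units from measured fibre lengths: relative elongation x 1e5."""
    return (fibre_len - ref_len) / ref_len * 1e5

def i_units_sinusoid(height, period):
    """I-Units for an ideal sinusoidal buckle of peak-to-peak height H and
    period P, using the standard approximation (pi**2 / 4) * (H/P)**2 * 1e5."""
    return (math.pi ** 2 / 4) * (height / period) ** 2 * 1e5
```

A 2 mm wave over a 100 mm period thus corresponds to roughly 99 I-Units, near the top of the 0 to 100 I-Unit range quoted above.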
This paper presents a real-time image acquisition and segmentation system. The system involves three main processes: acquisition, filtering, and segmentation. Image acquisition is performed in the steel industry, where thermographic linear images are captured from strips (10 km long and 1 m wide) at a temperature between 100°C and 200°C while they are moving forward along a track. During the acquisition process, a relationship between each pixel in the linear image and real-world units is established using a theoretical model whose parameters have been adjusted after a calibration process. After the acquisition, linear images are spatially filtered to reduce noise, and online with these processes (acquisition and filtering), segmentation is applied to the linear images to divide them into homogeneous temperature zones. Two different segmentation methods are evaluated: region merging and edge detection. To compare the segmentation algorithms, an empirical segmentation evaluation method is defined. The evaluation method consists of comparing the results obtained from the algorithm with the theoretical segmentation defined by a group of experts. The evaluation method determines the best segmentation algorithm, the optimal parameters, and the effectiveness obtained using a test set.
This work presents an automated shape inspection system for 2D objects with variable luminance. The system, installed in the steel industry, captures linear images of plates at high temperature (700 to 1200°C) while they are moving on a roll path. The main objective of the system is to capture the shape of the head and tail of the plates. These shapes are used to optimize the rolling parameters of the plate mill in order to minimize waste. The radiation generated by the plates in the visible and infrared zones of the spectrum (largely dependent on their temperature) is directly captured by the linear camera of the system with no additional artificial illumination. While most research work has focused on obtaining the optimal illumination for the objects inspected, this work deals with the particular case of objects which irradiate their own light. The system automatically adapts itself to acquire images of plates with different levels of luminance using a mechanism that calculates the proper exposure time for each image. The mechanism integrates two basic actions: a feedforward control and an adaptive feedback control loop.
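The exposure-time mechanism described above (feedforward plus adaptive feedback) might be sketched as follows; the feedforward radiation model and all constants are hypothetical placeholders, not the paper's tuned controller:

```python
def next_exposure(exposure, mean_level, target=128.0, temp_C=None, gain=0.5):
    """One update of the exposure time.

    Feedforward: when the plate temperature is known, start from an
    exposure predicted by a (hypothetical) radiation model, since hotter
    plates radiate more light.  Feedback: scale the exposure toward the
    target mean grey level of the acquired image.
    """
    if temp_C is not None:
        exposure = 1000.0 / max(temp_C - 600.0, 1.0)   # placeholder model
    return exposure * (target / max(mean_level, 1.0)) ** gain
```

The multiplicative feedback leaves the exposure unchanged when the image already sits at the target grey level, shortens it for over-exposed images, and lengthens it for under-exposed ones.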