Open Access Paper
11 February 2020

Optoelectronic sensor system for recognition of objects and incidents

Andrzej Ligienza, Tomasz Sosnowski, Grzegorz Bieszczad, Jarosław Bareła
Proceedings Volume 11442, Radioelectronic Systems Conference 2019; 114420O (2020) https://doi.org/10.1117/12.2565165
Event: Radioelectronic Systems Conference 2019, 2019, Jachranka, Poland
Abstract
A crucial aspect of assuring safety in public areas is the ability to supervise them effectively. A variety of monitoring systems are employed for that purpose, but many of them must be supported by local infrastructure. This is not always possible, and the need to monitor an area often arises only for a limited period of time. A solution is a system that can be deployed in the field and operate in various weather conditions. Thermal cameras can be used at night and in low-visibility conditions such as mist, without the need for additional illumination. The designed system, consisting of such cameras, will use Internet of Things technology for internal communication, eliminating the need for extensive cable networks and speeding up configuration and installation through automated protocols. The primary role of the system is to monitor an area of interest for movement, emergencies, inflow and outflow of people, and sabotage attempts. The system will use a network of miniature thermal cameras with 80x80 microbolometer sensors. The cameras will be equipped with geolocation sensors and will be able to operate independently for at least 2 days without an external power source. The system will monitor the area and detect the presence and movement of people and vehicles at certain distances. It will be supplemented by other sensors such as PIR, acoustic or seismic sensors. The system will be deployed in a dispersed mesh configuration, which enables scaling and fast deployment via airplane drops.

1.

INTRODUCTION

Surveillance of public areas is becoming more crucial than ever for assuring the security of contemporary society, and due to the extensive increase in population, automating the detection of objects and certain incidents even more so. For this reason many areas are secured by custom-designed sensor systems for surveillance and threat detection. Such systems are usually powered and interconnected via local infrastructure, so it is problematic to implement them in places such as parks, farm fields, woods, beaches and other areas remote from city infrastructure. Furthermore, the need to monitor such areas is often periodic, tied to events such as concerts, festivals or sports competitions. Therefore a system that is easily deployed, scalable and running on an independent power source would be most desirable.

2.

OPTOELECTRONIC SENSOR NETWORK CONCEPT

One of the most critical features of a surveillance system is the ability to work both during day and night, as well as in low-visibility conditions such as mist or smoke, without any additional illumination. All these requirements are satisfied by sensors operating in the infrared spectrum, thermal cameras being especially useful. Systems comprising infrared sensors are capable of localizing, identifying and tracking objects (primarily people) completely passively, ensuring security by monitoring areas for certain incidents.

It has been decided that the system will use thermal imaging cameras with detectors of varying resolution: 80x80, 320x240, 640x480 and 1024x768 pixels. These cameras will transmit data to the surveillance center either via cable or wirelessly. Additionally, such a system could be complemented by passive infrared (PIR) sensors (as a passive barrier or motion detector) and low-light-level cameras.

Basic functions realized by the system:

  • Surveillance of secured areas,

  • Detection of motion in the secured area,

  • Detection of certain incidents, such as smoke, fire, movement of people, or overheating of equipment and devices,

  • Detection of suspicious behavior or lack of activity (e.g. fainting and loss of consciousness),

  • Detection of the sensor's own motion (sabotage indication).

By employing Internet of Things (IoT) technology, the system will be easily scalable to the area being secured, varying in size, terrain and application, e.g. monitoring equipment or traffic. Furthermore, IoT enables quick deployment of the system in a given area, even when the configuration consists of an extensive number of sensors. The sensors will automatically configure themselves into a dispersed, mesh-type network, for which wireless communication is especially desirable to minimize the system's physical complexity. It is planned to apply machine learning algorithms to machine vision [1] data processing and, subsequently, to sensor fusion. This augments the information obtained about the secured area, increasing the system's efficiency, and reduces power consumption by employing sensors such as PIR or acoustic sensors as triggers for the system when an incident occurs. Using thermal imaging cameras eliminates the need for additional illumination, further reducing energy consumption, and their ability to operate in smoke extends the range of applications, e.g. to firefighting operations. Moreover, the characteristic infrared absorption spectra of various gases may enable detection of dangerous chemicals and gas leaks in the monitored area [2]. Another feature of this technology is remote measurement of the radiative properties of observed objects, from which their absolute temperature can be derived; this, however, requires additional radiometric calibration [3].

It is assumed that the number of thermal cameras used in a particular dispersed system configuration depends directly on the area to be covered, the number of monitored objects and the types of incidents to be detected. During the system's development, a preliminary theory of sensor placement and distribution has been worked out, together with requirements for the sensors' construction regarding data processing, fusion and efficient communication.

The system for automatic recognition of objects and incidents has to be designed with support for the system's user in mind: the ability to analyze the image in such a way as to maximize the amount of useful data relayed to the operator and minimize the time required to understand the information coming from a given area of interest. This further reduces the number of people required to operate the system and decreases the time needed to properly analyze the situation and react accordingly. It is expected that the automatic recognition algorithms will recognize defined objects and events and differentiate them from those outside the defined set. A diagram of a typical automatic recognition system is presented in Figure 1.

Figure 1.

Diagram of tasks performed by automatic object identification system.


Objects of interest may be in motion, partially or fully obscured, or camouflaged. Despite these difficulties, the automatic recognition system should be able to properly interpret and identify differently defined objects. One of the most difficult problems is the classification stage, which relies on the system's ability to reliably differentiate the signature of one specific object from other, similar signatures. The process of automatic classification, recognition and identification usually consists of the following operations:

  • 1. Preprocessing of the sensor's raw image (normalization, nonuniformity correction [4, 5], etc.).

  • 2. Image segmentation, where pixels of the image are either assigned, with a certain probability, to the group of pixels comprising the target, or eliminated from that group and classified as background.

  • 3. Shape analysis based on the continuity of pixels comprising the target, in contrast to the chaotic nature of pixels that are not part of the target.

  • 4. Properties measurement: measurement and analysis of the target's basic shape parameters, such as the average intensity value for a given area or texture parameters.

  • 5. Deriving other features, such as Karhunen–Loève transform coefficients or singular value decomposition coefficients.
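As an illustration, steps 1, 2 and 4 of the pipeline above can be sketched in a few lines on a synthetic frame (this is only a toy sketch of the general scheme; the paper does not specify the actual algorithms used):

```python
import numpy as np

def preprocess(frame):
    # Step 1: normalize raw sensor counts to the 0..1 range.
    lo, hi = float(frame.min()), float(frame.max())
    return (frame - lo) / (hi - lo)

def segment(frame, threshold=0.5):
    # Step 2: pixels above the threshold are assigned to the target
    # group; the rest are classified as background.
    return frame > threshold

def measure(frame, mask):
    # Step 4: basic target properties - pixel count and mean intensity.
    return int(mask.sum()), float(frame[mask].mean())

# Toy 6x6 "thermal frame" with a warm 2x2 target on a cool background.
frame = np.zeros((6, 6))
frame[2:4, 2:4] = 10.0

norm = preprocess(frame)
mask = segment(norm)
area, mean_val = measure(norm, mask)
print(area, mean_val)  # 4 1.0
```

A real system would add the shape-analysis and feature-derivation stages (steps 3 and 5) on top of the segmented mask.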

3.

THERMAL IMAGE PROCESSING

The primary data obtained from infrared sensors organized in a dispersed surveillance network take the form of a thermal image. During recognition, classification and identification of objects, image enhancement techniques are often used to improve the quality of the information and facilitate correct inference about incidents. Additionally, external factors such as the temperature of the environment or its humidity [6] may influence the detector's heat exchange rate, changing its working point and deteriorating the image; such influence needs to be actively compensated. Furthermore, thermal images are not natural for humans, which often causes problems with their correct interpretation. Their characteristic properties are:

  • Low spatial resolution in comparison to images in the visible spectrum,

  • Image-deteriorating effects that are a direct result of the radiative properties of the observed objects,

  • Image intensity disruptions caused by objects with large temperature differences, i.e. very cold and/or very hot objects present simultaneously in the observed scene.

Detection, recognition and identification of objects and incidents based on image analysis is currently one of the most researched topics. Its aim is to grant machines human-like vision together with the resulting abilities. The idea of machine vision involves processing data from the sensors to infer useful information about the observed scene. Systems of this type can improve both the reaction time and the probability of the operators making the most appropriate decision. Machine vision can be applied to a wide variety of fields, such as military and defense, biomedical engineering, medicine and surgery, public health, autonomous transport, production, robotics, entertainment and security systems.

The whole process of detection, recognition and identification using machine vision requires profound knowledge of various fields of science, especially when thermal imaging is involved. Furthermore, the analysis of thermal images is made more difficult by the fact that standard image processing methods are ineffective. The basic task of digital image processing systems is image enhancement, improving the quality of information both for human perception and for automatic recognition systems [7].

Image enhancement techniques are primarily required to process images so that they are most useful for the intended application, compared to the original image. Hence every image correction must be preceded by a decision about which feature needs to be emphasized. The extensive variety of problems that can arise during image correction prevents the use of a single universal method. Each type of image requires an individual approach to the processing method, supported by the experience of the system's user. The tasks of image enhancement algorithms are:

  • Image contrast increase,

  • Edge enhancement,

  • Random noise reduction,

  • Shape smoothing,

  • Deterministic noise compensation.

It often happens that the aforementioned tasks counteract each other. Designing such a system requires extensive experience coupled with a lot of trial and error on the part of its creator. Image quality assessment is subjective: the determination of "good quality" depends heavily on human perception of visual information and on the type of image analysis algorithms applied further down the processing pipeline. Because image quality enhancement is based more on the designer's intuition than on formalized rules, it is very difficult or even impossible to design an optimal algorithm. Nevertheless, there are general assumptions that allow image quality enhancement methods to be separated into three classes:

  • Histogram modification methods,

  • Spatial methods,

  • Frequency methods.

Basic digital image quality improvement operations modify the intensity and contrast transfer functions. The initial, neutral transform is presented in Figure 2a, and the transform function changed by contrast modification in Figure 2b. A contrast increase steepens the function's slope, which directly translates to emphasizing differences between the intensity levels of individual pixels. The number of intensity levels is finite; therefore, if the level count is N, a contrast increase cuts off at some level A, and all pixels of the original image that reach level A are equalized at level N after the transformation.

Figure 2.

Image contrast and intensity modification methods.


Increasing the intensity of the image corresponds to translating the slope of the function vertically (Figure 2c). This can be interpreted as adding to the function a constant value equal to the magnitude of the translation. In consequence, some intensity levels, designated with the letter B, will not occur in the resulting image. The character of the transform function should be selected adequately for the intended application; for example, it may be desirable to increase contrast only in the middle range of intensity levels. An exemplary application of thermal image normalization is shown in Figure 3; this operation significantly increased the contrast of the image.

Figure 3.

Example of applying a LUT operation for image normalization: original image (a), image after normalization (b).

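The contrast and intensity transforms described above are commonly implemented as a look-up table (LUT). A minimal sketch, assuming 8-bit intensity levels (N = 255) and a simple linear transform, shows the cut-off behaviour at level N:

```python
import numpy as np

N = 255  # highest intensity level

def make_lut(gain, offset):
    # Linear transform y = gain*x + offset; values driven above level N
    # are cut off (equalized at N), values below 0 are clipped to 0.
    x = np.arange(N + 1)
    return np.clip(gain * x + offset, 0, N).astype(np.uint8)

lut = make_lut(gain=2.0, offset=0.0)           # contrast increase: slope doubled
image = np.array([[10, 100],
                  [150, 200]], dtype=np.uint8)
out = lut[image]                                # apply the LUT pixel by pixel
print(out)  # 10->20, 100->200; 150 and 200 both saturate at 255
```

With `gain=1.0` and a nonzero `offset`, the same function reproduces the intensity translation of Figure 2c, including the levels (B) that disappear from the output.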

Image processing involving a nonlinear transformation of the image's amplitude is also commonly used. It is especially significant for thermal image processing, where the signal amplitude is nonlinearly related to the temperature of the observed object as a consequence of the band-limited Planck law. Another nonlinear correction method for thermal images is histogram equalization, especially popular in cameras used purely for observation. Histogram equalization fits a transform function such that the resulting histogram is as level as possible, which emphasizes details in the image that are obscured by low contrast. However, this method is not universal, and for some images it fails to provide satisfying results. An example of the application of histogram equalization is shown in Figure 4.

Figure 4.

Histogram equalization example: original image (a), image after equalization (b).

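The standard cumulative-histogram construction of the equalizing transform can be sketched as follows (assuming 8-bit images; the paper does not specify the exact variant used):

```python
import numpy as np

def equalize(image, levels=256):
    # Build a LUT from the normalized cumulative histogram so that the
    # output histogram is spread as evenly as possible over all levels.
    hist = np.bincount(image.ravel(), minlength=levels)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0].min()
    lut = np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * (levels - 1))
    return lut.astype(np.uint8)[image]

# Low-contrast image: all values crowded into the 100..103 band.
img = np.array([[100, 100, 101],
                [101, 102, 103]], dtype=np.uint8)
out = equalize(img)
print(out.min(), out.max())  # 0 255 - full dynamic range after equalization
```

Note how the narrow 100..103 band is stretched over the full 0..255 range, which is exactly the effect visible in Figure 4.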

Another class of commonly used methods are spatial filters, which modify a pixel's value based on the values of the surrounding pixels (its context). Assuming that the image is represented as a matrix of pixel values, the image-modifying function can be described by the formula:

L′(m,n) = Σᵢ Σⱼ w(i,j) · L(m+i, n+j)

where: L(m,n) – input image function, L′(m,n) – output image function, w(i,j) – spatial filter coefficient matrix.

Therefore, the new intensity value for a given pixel is calculated from the values of the surrounding pixels. First a mask is defined; then the mask is moved across the pixels of the image with a certain step, usually equal to the distance between pixels, and a new value is calculated for each pixel currently within the mask. This operation is repeated for every pixel in the matrix. Using a mask with proper coefficients, it is possible to perform various functions, such as low-pass filtering, high-pass filtering, edge detection or edge enhancement (Figure 5), and many others.

Figure 5.

Spatial filter application performing edge enhancement: original image (a), output image (b).

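A direct implementation of the formula above, with an illustrative edge-enhancement (sharpening) mask, might look like this (a naive sketch that skips the image borders; the mask coefficients are an assumption, not the ones used in Figure 5):

```python
import numpy as np

def spatial_filter(L, w):
    # New pixel value: L'(m,n) = sum over (i,j) of w(i,j) * L(m+i, n+j),
    # computed wherever the full mask fits inside the image (borders left 0).
    k = w.shape[0] // 2
    out = np.zeros_like(L, dtype=float)
    for m in range(k, L.shape[0] - k):
        for n in range(k, L.shape[1] - k):
            out[m, n] = np.sum(w * L[m - k:m + k + 1, n - k:n + k + 1])
    return out

# Edge-enhancement (sharpening) mask: centre weight 5, cross neighbours -1.
w = np.array([[ 0, -1,  0],
              [-1,  5, -1],
              [ 0, -1,  0]], dtype=float)

L = np.full((5, 5), 7.0)  # uniform image: weights sum to 1, so it passes unchanged
out = spatial_filter(L, w)
print(out[2, 2])  # 7.0
```

Swapping in an averaging mask (all coefficients 1/9) turns the same routine into a low-pass filter, and a Sobel mask into an edge detector.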

4.

INFRARED SENSOR SYSTEM

The designed system will be able to passively localize, identify and track objects in order to monitor and secure the designated area. A concept system based on miniature thermal cameras with 80x80-pixel detectors has been developed. The cameras will communicate and transmit data wirelessly to the surveillance center. Additionally, it is intended to equip the cameras with embedded GPS and electronic compass modules to enable automatic localization and orientation. It is expected that each sensor will be able to operate without an external power source for no less than two days. The basic tasks required of the system were formulated as follows:

  • Surveillance of the specified area,

  • Detection of a human at a distance of no less than 75 m,

  • Detection of a car at a distance of no less than 150 m,

  • Detection of an HGV at a distance of no less than 350 m,

  • Motion detection of people, animals and vehicles,

  • Motion detection of the sensor itself (sabotage attempts).

It is also considered to include other sensors in the system, such as passive infrared (PIR) sensors as a barrier, low-light-level television (LLL TV) cameras and thermal cameras with higher-resolution detectors, i.e. 640x480 or 320x240. Other, non-optical sensors could also be employed, such as seismic or acoustic ones. Operating the system as a dispersed, mesh-like network of sensors working as an internet of things would make scaling and deployment significantly easier and more efficient: the system's configuration could be automated, and fast installation – for example via airdrop – would be enabled. Embedded navigation sensors would localize and determine the orientation of each sensor, both in absolute terms and relative to the other sensors. This would better convey information about the covered area and generate accurate position data for detected objects in range. The sensors will have exchangeable camouflage enclosures resembling rocks, cans, bricks, tree logs etc., depending on the terrain of operation (e.g. woods, desert, mountains).

The number and kind of sensors used in a particular system configuration will depend heavily on the area to be secured, up to a radius of several kilometers. For example, a region of 90000 m2 can be covered by at least four 80x80 thermal cameras with a 20° field of view (FOV). Exemplary distributions of four 80x80 cameras covering this area with different geometries are presented in Figure 6.

Figure 6.

Examples of the distribution of four 80x80 cameras with 20° FOV, with their fields of view (marked red), for surveillance areas of 300 x 300 m (a), 450 x 200 m (b) and 600 x 150 m (c).

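The paper does not give the coverage model behind Figure 6, but a rough lower-bound check is consistent with the "at least four cameras" figure if each camera's ground footprint is modelled as a circular sector out to the 380 m HGV detection range of the 80x80 camera (Table 3). This is only an illustrative back-of-the-envelope model, not the authors' placement method:

```python
import math

def sector_area(fov_deg, range_m):
    # Ground footprint of one camera modelled as a circular sector:
    # area = (FOV in radians / 2) * range^2 (flat terrain assumed).
    return math.radians(fov_deg) / 2.0 * range_m ** 2

def cameras_needed(area_m2, fov_deg, range_m):
    # Lower bound on the camera count, ignoring overlap and placement geometry.
    return math.ceil(area_m2 / sector_area(fov_deg, range_m))

# 80x80 camera, 20 degree FOV, used out to the 380 m HGV detection range.
print(round(sector_area(20, 380)))       # footprint of a single camera, in m^2
print(cameras_needed(90_000, 20, 380))   # lower bound for the 90000 m2 region
```

The bound ignores the rectangular geometries of Figure 6, so real placements may need more cameras or careful orientation to avoid gaps.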

One of the most important parameters determining the effectiveness of the infrared sensor system are the ranges of detection, recognition and identification of the observed objects. Three types of targets were selected for the simulation: a human (size 1.8 m x 0.6 m), a car (size 2.3 m x 2.3 m) and an HGV (size 4.0 m x 2.5 m); these parameters are standardized for the evaluation of thermal imaging devices. The results of the simulation are presented in Tables 1, 2 and 3. Additionally, images of various objects positioned at the derived detection, recognition and identification distances were acquired by cameras designed with the calculated parameters. Analysis of the recorded thermal images, conducted by a group of designated observers, confirmed the validity of the assumed range parameters. A demonstrative result of the acquisition is designated with a (*) symbol in Table 2 and shown in Figure 7.

Table 1.

Detection, recognition and identification ranges for human target, using different thermal cameras.

Parameter | Camera 80×80, pixel 17 μm, FOV 20° | Camera 320×240, pixel 12 μm, FOV 90° | Camera 640×480, pixel 12 μm, FOV 90°
Detection range | 125 m | 120 m | 240 m
Recognition range | 45 m | 40 m | 80 m
Identification range | 20 m | 20 m | 40 m

Table 2.

Detection, recognition and identification ranges for car target, using different thermal cameras.

Parameter | Camera 80×80, pixel 17 μm, FOV 20° | Camera 320×240, pixel 12 μm, FOV 90° | Camera 640×480, pixel 12 μm, FOV 90°
Detection range | 275 m | 270 m | 550 m
Recognition range | 95 m | 90 m | 180 m
Identification range | 50 m * | 45 m | 90 m

Table 3.

Detection, recognition and identification ranges for HGV target, using different thermal cameras.

Parameter | Camera 80×80, pixel 17 μm, FOV 20° | Camera 320×240, pixel 12 μm, FOV 90° | Camera 640×480, pixel 12 μm, FOV 90°
Detection range | 380 m | 370 m | 750 m
Recognition range | 130 m | 120 m | 250 m
Identification range | 65 m | 60 m | 125 m
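The paper does not detail its range model, but the tabulated values can be sanity-checked from camera geometry alone: the number of detector pixels subtended by a target follows from the instantaneous field of view, IFOV = FOV / pixel count. This is only an illustrative small-angle estimate, not the authors' simulation:

```python
import math

def pixels_on_target(size_m, range_m, fov_deg, n_pixels):
    # Pixels subtended by one target dimension at a given range, using the
    # per-pixel angle IFOV = FOV / pixel count (small-angle approximation).
    ifov_rad = math.radians(fov_deg) / n_pixels
    return (size_m / range_m) / ifov_rad

# Human target, 1.8 m tall, at the 125 m detection range of the
# 80x80-pixel, 20 degree FOV camera (Table 1).
px = pixels_on_target(1.8, 125.0, 20.0, 80)
print(round(px, 1))  # about 3.3 pixels across the target's height
```

A few pixels on target at the detection range is the order of magnitude expected for detection tasks, consistent with the tabulated 125 m figure.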

Figure 7.

Demonstrative image of car-type targets at 50 m, acquired with a visible spectrum camera (a), and the respective thermal image acquired with an 80x80 resolution, 20° FOV thermal camera (b).


5.

CONCLUSION

A conceptual optoelectronic system for automatic identification of objects and incidents has been presented in this article. It is assumed that sets of thermal cameras will be the main components of the system, which will be able to detect threats and objects in low-visibility conditions, especially at night. The methods and algorithms that need to be implemented to achieve the desired results have been determined and pointed out. Camera parameters (such as FOV) were determined by analyzing the possibilities of securing an area of no less than 90000 m2. Detection, recognition and identification ranges – the most crucial parameters – were determined for targets such as a human, a car and an HGV. Based on the calculations and simulations, cameras designed according to these parameters will satisfy the range requirements. Furthermore, it was found that just four 80x80-resolution cameras and one 640x480-resolution camera are enough to fully cover an area of 90000 m2. The performance of a 320x240-resolution camera is very similar to that of an 80x80-resolution camera; hence the use of a 320x240-resolution camera is both technically and economically unwarranted.

REFERENCES

[1] Strąkowski, R., Pacholski, K., Więcek, B., Olbrycht, R., Wittchen, W., Borecki, M., "Estimation of FeO content in the steel slag using infrared imaging and artificial neural network," Measurement, 117, 380–389 (2018).

[2] Olbrycht, R., Kałuża, M., Wittchen, W., Borecki, M., Więcek, B., De Mey, G., Kopeć, M., "Gas identification and estimation of its concentration in a tube using thermographic camera with diffraction grating," Quantitative InfraRed Thermography Journal, 15(1), 106–120 (2018).

[3] Hots, N., "Investigation of temperature measurement uncertainty components for infrared radiation thermometry," Advances in Intelligent Systems and Computing, 543, 556–566 (2017).

[4] Krupiński, M., Bieszczad, G., Sosnowski, T., Madura, H., Gogler, S., "Nonuniformity correction in microbolometer array with temperature influence compensation," Metrol. Meas. Syst., XXI(4), 709–718 (2014). https://doi.org/10.2478/mms-2014-0050

[5] Olbrycht, R., Więcek, B., Świątczak, T., "Shutterless method for gain nonuniformity correction of microbolometer detectors," Proceedings of the 16th International Conference Mixed Design of Integrated Circuits and Systems (MIXDES 2009), 378–380 (2009).

[6] Kopeć, M., Olbrycht, R., Gamorski, P., Kałuża, M., "The influence of air humidity on convective cooling conditions of electronic devices," IEEE Transactions on Industrial Electronics, 65(12), 9717–9727 (2018).

[7] Sosnowski, T., Bieszczad, G., Madura, H., Kastek, M., "Digital image processing in high resolution infrared camera with use of programmable logic device," Proc. SPIE, 78380U (2010).
© (2020) Society of Photo-Optical Instrumentation Engineers (SPIE).
KEYWORDS: Cameras, Sensors, Thermography, Image enhancement, Image processing, Infrared sensors, Image quality
