1. INTRODUCTION

Surveillance of public areas is becoming more crucial than ever for ensuring the security of contemporary society. Due to the extensive increase in population, automating the detection of objects and certain incidents is even more important. For this reason many areas are secured by custom-designed systems of sensors for surveillance and threat detection. Such systems are usually powered and interconnected via local infrastructure, so it is problematic to deploy them in places like parks, farm fields, woods, beaches and other areas remote from city infrastructure. Furthermore, the need to monitor such areas is often periodic, tied to events such as concerts, festivals or sport competitions. Therefore a system that is easily deployed, scalable and running on an independent power source would be most desirable.

2. OPTOELECTRONIC SENSOR NETWORK CONCEPT

One of the most critical features of a surveillance system is the ability to work both during day and night, in low visibility conditions like mist or smoke, and without any additional illumination. All these requirements are satisfied by sensors operating in the infrared spectrum, thermal cameras being especially useful. Systems comprising infrared sensors are capable of localizing, identifying and tracking objects (primarily people) completely passively, ensuring security by monitoring areas for certain incidents. It has been decided that the system will use thermal imaging cameras with detectors of varying resolution: 80x80, 320x240, 640x480 and 1024x768. These cameras will transmit data to the surveillance center either via cable or wirelessly. Additionally, such a system could be complemented by PIR infrared sensors (as a passive barrier or motion detector) and low light level cameras. Basic functions realized by the system:
By employing Internet of Things (IoT) technology, the system will be easily scalable with respect to the secured area, which may vary in size, terrain and application, e.g. monitoring equipment or traffic. Using IoT also enables quick deployment of the system in a given area, with a configuration that could comprise an extensive number of sensors. Such sensors will automatically configure themselves into a dispersed, mesh-type network, for which wireless communication is especially desirable to minimize the system's physical complexity. It is planned to apply machine learning algorithms to machine vision [1] data processing and subsequently to sensor fusion. This augments the information obtained about the secured area, increasing the system's efficiency, and reduces power consumption by employing sensors like PIR or acoustic sensors as triggers for the system when an incident occurs. Using thermal imaging cameras eliminates the need for additional illumination, further reducing energy consumption, and their capability to operate in smoke extends the range of applications, e.g. to firefighting operations. What is more, the characteristic infrared absorption spectra of various gases might enable detection of dangerous chemicals and gas leaks in the monitored area [2]. Another feature of this technology is remote measurement of the radiative properties of observed objects, from which information about their absolute temperature can be derived; this requires additional radiometric calibration, however [3]. It is assumed that the number of thermal cameras used in a particular dispersed system configuration depends directly on the area to be covered, the number of monitored objects and the type of incidents to be detected. During the system's development, a preliminary theory of sensor placement and distribution has been worked out, together with requirements on the sensors' construction for data processing, fusion and efficient communication.
The system for automatic recognition of objects and incidents has to be designed with support for the system's user in mind: it should analyze the image so as to maximize the amount of useful data relayed to the operator and minimize the time required to understand the information coming from the given area of interest. This reduces the number of people required to operate the system and shortens the time needed to properly analyze the situation and react accordingly. It is expected that automatic recognition algorithms will be able to recognize defined objects and events and differentiate them from those outside of the defined set. The diagram of a typical automatic recognition system is presented in Figure 1. Objects of interest may be in motion, partially or fully covered, or camouflaged. Despite these difficulties, the automatic recognition system should be able to properly interpret and identify differently defined objects. One of the most difficult problems is the classification stage. It is based on the system's ability to reliably differentiate the signature of one specific object from other similar signatures. Usually the process of automatic classification, recognition and identification consists of the following operations:
3. THERMAL IMAGE PROCESSING

The primary data obtained from infrared sensors organized in a dispersed surveillance network is in the form of a thermal image. During the process of recognition, classification and identification of objects, image enhancement techniques are often used to improve the quality of information and facilitate proper incident inference. Additionally, external factors like the temperature of the environment or its humidity [6] might influence the detector's heat exchange rate, changing its working point and deteriorating the image. Such influence needs to be actively compensated. Furthermore, thermal images are not natural for humans, which often causes problems with their correct interpretation. Their characteristic properties are:
Image analysis based detection, recognition and identification of objects and incidents is currently one of the most researched topics. Its aim is to grant machines human-like vision together with the resulting abilities. The idea of machine vision involves processing data from the sensors to infer useful information about the observed scene. Such systems can improve both the reaction time and the probability of the system operators making the most appropriate decision. Machine vision can be applied to a wide variety of fields, like military and defense, biomedical engineering, medicine and surgery, public health, autonomous transport, production, robotics, entertainment and security systems. The whole process of detection, recognition and identification with the use of machine vision requires profound knowledge in various fields of science, especially when thermal imaging is involved. Furthermore, the analysis of thermal images is made more difficult by the fact that standard image processing methods are ineffective. The basic task of digital image processing systems is image enhancement, improving information quality both for human perception and for automatic recognition systems [7]. Image enhancement techniques are required primarily to process images to be most useful in the intended application, in comparison to the original image. Hence every image correction must be preceded by a decision about which feature needs to be emphasized. The extensive variety of problems that can arise during image correction prevents the use of one universal method. Each type of image requires an individual approach to the image processing method, supported by the experience of the system's user. The tasks of image enhancement algorithms are:
It often happens that the aforementioned tasks work against each other. The design process of such a system requires extensive experience coupled with a lot of trial and error on the part of its creator. Image quality assessment is subjective: the determination of "good quality" depends heavily on human perception of visual information and on the type of image analysis algorithms applied further down the image processing pipeline. As image quality enhancement is based more on the constructor's intuition than on formalized rules, it is very difficult or even impossible to design an optimal algorithm. Nevertheless, there are general assumptions that allow image quality enhancement methods to be separated into three classes.

Basic digital image quality improvement operations modify the intensity and contrast transfer functions. The initial, neutral transform is presented in Figure 2a. The change of the transform function caused by contrast modification is presented in Figure 2b. A contrast increase steepens the function's slope, which directly translates to emphasizing differences in the intensity levels of individual pixels. The count of intensity levels is finite; therefore, if the level count is N, the contrast increase cuts off at level A, and all pixels of the original image that reach level A will be equalized at level N after the transformation. Increasing the intensity of the image corresponds to translating the slope of the function in the vertical direction (Figure 2c). It can be interpreted as adding a constant value, equal to the magnitude of the slope's translation, to the function. In consequence, some intensity levels will not occur in the resulting image; these levels have been designated with the letter B. The character of the transform function should be selected adequately for the intended application. For example, it might be desirable to increase contrast only in the middle range of intensity levels. An exemplary application of thermal image normalization is shown in Figure 3.
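The linear intensity and contrast modification described above can be sketched as a look-up-table (LUT) operation. The following minimal Python/NumPy illustration is not the article's implementation; the function and parameter names (gain, offset) are hypothetical, chosen to mirror the slope and vertical-translation interpretation of the transform function:

```python
import numpy as np

def linear_lut(image, gain=1.5, offset=20, levels=256):
    """Apply a linear intensity transform via a look-up table.

    gain > 1 steepens the slope (contrast increase); input levels
    pushed past the top level are clipped there, as with level A
    being mapped to level N in the text. offset translates the
    slope vertically (brightness), so some low output levels
    (the 'B' levels) no longer occur in the result.
    """
    lut = np.clip(np.arange(levels) * gain + offset,
                  0, levels - 1).astype(np.uint8)
    return lut[image]  # fancy indexing applies the LUT per pixel

# Example on a small 8-bit patch
img = np.array([[10, 50], [100, 200]], dtype=np.uint8)
out = linear_lut(img, gain=1.5, offset=20)
```

Note that 200 maps to the clipped maximum 255, illustrating the cut-off at the top level described above.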
The normalization significantly increased the contrast of the image.

Figure 3. Example of applying a LUT operation for image normalization: original image (a), image after normalization (b).

Image processing involving a nonlinear relation of the image's amplitude is also commonly used. It is especially significant for thermal image processing, where the signal amplitude is nonlinearly related to the temperature of the observed object; this is a consequence of the band-limited Planck's law. Another method of nonlinear correction of a thermal image is histogram equalization, especially popular for cameras used purely for observation. Histogram equalization fits a transform function such that the resulting histogram is as level as possible. This emphasizes details in the image that are obscured by low contrast. However, this method is not universal and for some images it fails to provide satisfying results. An example of the application of histogram equalization is shown in Figure 4.

Another class of commonly used methods are spatial filters, which modify a pixel value based on the values of the surrounding pixels – the context. Assuming that the image is represented as a pixel value matrix, an image modifying function can be described by the formula:

L'(m,n) = Σ_i Σ_j w(i,j) · L(m+i, n+j)

where: L(m,n) – function of the input image, L'(m,n) – function of the output image, w(i,j) – spatial filter coefficient matrix. Therefore, the new intensity value for a given pixel is calculated from the values of the surrounding pixels. The operation works by first defining a mask; the mask then moves across the pixels of the image with a certain step size, usually equal to the distance between pixels, and a new value is calculated for each of the pixels currently within the mask. This is repeated for every pixel in the matrix. Using a mask with the proper coefficients, it is possible to perform various functions, like low-pass filtering, high-pass filtering, edge detection or edge enhancement (Figure 5), and many others.
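The mask-based spatial filtering just described can be sketched directly in Python. This is an illustrative, naive implementation (the border handling via edge replication is an assumption, as the article does not specify it), with two classic example masks:

```python
import numpy as np

def spatial_filter(image, mask):
    """Contextual filtering: each output pixel is the weighted sum
    of its neighbourhood, L'(m,n) = sum_ij w(i,j) * L(m+i, n+j).
    Borders are handled by replicating edge pixels (an assumption).
    """
    k = mask.shape[0] // 2
    padded = np.pad(image.astype(float), k, mode='edge')
    out = np.zeros(image.shape, dtype=float)
    for m in range(image.shape[0]):
        for n in range(image.shape[1]):
            # Region currently under the mask, centred on (m, n)
            region = padded[m:m + mask.shape[0], n:n + mask.shape[1]]
            out[m, n] = np.sum(region * mask)
    return out

# 3x3 averaging mask (low-pass) and Laplacian mask (edge detection)
low_pass = np.ones((3, 3)) / 9.0
laplacian = np.array([[0, -1, 0],
                      [-1, 4, -1],
                      [0, -1, 0]], dtype=float)
```

On a uniform region the averaging mask leaves values unchanged and the Laplacian responds with zero, which is exactly the behaviour expected of low-pass and edge-detection filters.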
4. INFRARED SENSOR SYSTEM

The designed system will be able to passively localize, identify and track objects in order to monitor and secure the designated area. A concept system based on miniature thermal cameras with 80x80 pixel detectors has been developed. The cameras will communicate and transmit the data wirelessly to the surveillance center. Additionally, it is intended to equip the cameras with embedded GPS and electronic compass modules to enable automatic localization and orientation. It is expected that each sensor will be able to operate without an external power source for no less than two days. The basic tasks required to be performed by the system were formulated as follows:
It is also considered to include other sensors in the system, like passive infrared sensors (PIR) as a barrier, low light level television (LLL TV) and thermal cameras with high resolution detectors, i.e. 640x480 or 320x240. Other non-optical sensors, like seismic or acoustic ones, could also be employed. Using the system as a dispersed, mesh-like network of sensors working as an Internet of Things would make scaling and deployment of the system significantly easier and more efficient. The advantage is that the system's configuration could be automated, enabling fast installation, for example via airdrop. Embedded navigation sensors would localize and determine the orientation of each sensor, both in absolute terms and relative to the other sensors. This would better convey information about the covered area and generate accurate position data for detected objects in range. Sensors used in the system will have exchangeable, camouflaging enclosures resembling rocks, cans, bricks, tree logs etc., depending on the terrain of operation (e.g. woods, desert, mountains). The number and kind of sensors used in a particular system configuration will depend heavily on the area to be secured, up to a radius of several kilometers. For example, a region of 90000 m2 can be covered with at least four 80x80 thermal cameras with a 20° field of view (FOV). Exemplary distributions of four 80x80 cameras covering this area with different geometries are presented in Figure 6.

Figure 6. Examples of distributing four 80x80 cameras with FOV 20°, with their fields of view (marked red), for areas of surveillance: 300 x 300 m (a), 450 x 200 m (b) and 600 x 150 m (c).

One of the most important parameters determining the effectiveness of the infrared sensor system are the ranges of detection, recognition and identification of the observed objects. Three types of targets have been selected for the research simulation.
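The coverage figures above can be cross-checked with simple sector geometry. This back-of-the-envelope sketch assumes each camera footprint is a non-overlapping circular sector of angle equal to the FOV; the function names are illustrative, not from the article:

```python
import math

def sector_area(fov_deg, range_m):
    """Area of the circular sector covered by one camera with the
    given field of view (degrees) out to the given range (metres):
    A = (theta/2) * R^2, theta in radians."""
    return math.radians(fov_deg) / 2.0 * range_m ** 2

def min_range_for_area(fov_deg, area_m2, n_cameras):
    """Range each of n identical cameras must reach so that their
    combined non-overlapping sectors cover the given area."""
    per_camera = area_m2 / n_cameras
    return math.sqrt(2.0 * per_camera / math.radians(fov_deg))

# Four FOV-20° cameras jointly covering 90000 m^2
r = min_range_for_area(20, 90000, 4)
```

Under these assumptions each camera needs a useful range on the order of 350–360 m, consistent with the several-hundred-metre observation distances implied by the 300 x 300 m layout.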
A human (size 1.8 m x 0.6 m), a car (size 2.3 m x 2.3 m) and an HGV (size 4.0 m x 2.5 m). These parameters are standardized for the evaluation of thermal imaging devices. Results of the simulation have been presented in Tables 1, 2 and 3. Additionally, images of various objects positioned at the derived detection, recognition and identification distances have been acquired by cameras designed with the calculated parameters. Analysis of the recorded thermal images, conducted by a group of designated observers, has confirmed the validity of the assumed range parameters. A demonstrative result of the acquisition has been designated with a (*) symbol in Table 2 and shown in Figure 7.

Table 1. Detection, recognition and identification ranges for a human target, using different thermal cameras.
Table 2. Detection, recognition and identification ranges for a car target, using different thermal cameras.
Table 3. Detection, recognition and identification ranges for an HGV target, using different thermal cameras.
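The article does not state the exact model behind the tabulated ranges; the classic Johnson criteria are the standard starting point for such estimates, so a purely illustrative sketch along those lines (cycle counts are the commonly cited 50%-probability values, and taking one cycle as two detector pixels) might look as follows:

```python
import math

# Johnson criteria: spatial cycles required across a target's
# critical dimension for each observer task (50% probability levels)
JOHNSON_CYCLES = {"detection": 1.0, "recognition": 4.0, "identification": 6.4}

def johnson_range(critical_dim_m, fov_deg, pixels, task):
    """Estimate the range at which the sensor resolves enough
    cycles across the target for the given task. IFOV is the
    angular size of one pixel; one cycle spans two pixels."""
    ifov = math.radians(fov_deg) / pixels            # rad per pixel
    cycles = JOHNSON_CYCLES[task]
    return critical_dim_m / (2.0 * cycles * ifov)    # metres

# Human target, critical dimension 0.6 m, 80x80 detector, FOV 20°
r_det = johnson_range(0.6, 20, 80, "detection")
```

As expected, the estimated range shrinks from detection through recognition to identification, matching the ordering of the ranges in Tables 1–3.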
5. CONCLUSION

A conceptual optoelectronic system for automatic identification of objects and incidents has been presented in this article. It is assumed that sets of thermal cameras will be the main components of the system, which will be able to detect threats and objects in low visibility conditions, especially at night. It has been determined which methods and algorithms need to be implemented to achieve the desired results. Camera parameters (like FOV) have been determined by analyzing the possibilities of securing an area of no less than 90000 m2. Detection, recognition and identification ranges – the most crucial parameters – have been determined for targets such as a human, a car and an HGV. Based on calculations and simulations, cameras designed according to these parameters will satisfy the range requirements. Furthermore, it has been found that just four 80x80 resolution cameras and one 640x480 resolution camera are enough to fully cover an area of 90000 m2. The performance of a 320x240 resolution camera is very similar to that of an 80x80 resolution camera; hence the use of a 320x240 resolution camera is both technically and economically unwarranted.

REFERENCES

[1] Strąkowski, R., Pacholski, K., Więcek, B., Olbrycht, R., Wittchen, W., Borecki, M., "Estimation of FeO content in the steel slag using infrared imaging and artificial neural network," Measurement: Journal of the International Measurement Confederation, 117, 380–389 (2018).
[2] Olbrycht, R., Kałuża, M., Wittchen, W., Borecki, M., Więcek, B., De Mey, G., Kopeć, M., "Gas identification and estimation of its concentration in a tube using thermographic camera with diffraction grating," Quantitative InfraRed Thermography Journal, 15(1), 106–120 (2018).
[3] Hots, N., "Investigation of temperature measurement uncertainty components for infrared radiation thermometry," Advances in Intelligent Systems and Computing, 543, 556–566 (2017).
[4] Krupiński, M., Bieszczad, G., Sosnowski, T., Madura, H., Gogler, S., "Nonuniformity correction in microbolometer array with temperature influence compensation," Metrol. Meas. Syst., XXI(4), 709–718 (2014). https://doi.org/10.2478/mms-2014-0050
[5] Olbrycht, R., Więcek, B., Świątczak, T., "Shutterless method for gain nonuniformity correction of microbolometer detectors," in Proceedings of the 16th International Conference Mixed Design of Integrated Circuits and Systems, MIXDES 2009, 378–380 (2009).
[6] Kopeć, M., Olbrycht, R., Gamorski, P., Kałuża, M., "The influence of air humidity on convective cooling conditions of electronic devices," IEEE Transactions on Industrial Electronics, 65(12), 9717–9727 (2018).
[7] Sosnowski, T., Bieszczad, G., Madura, H., Kastek, M., "Digital image processing in high resolution infrared camera with use of programmable logic device," Proceedings of SPIE, 78380U (2010).