Laser Powder Bed Fusion (LPBF) is at the forefront of manufacturing metallic objects, particularly those with complex geometries or those produced in limited quantities. However, this 3D printing method is susceptible to several printing defects due to the complexities of using a high-power laser with ultra-fast actuation. Accurate online print defect detection is therefore in high demand, and this detection must maintain a low computational profile to enable low-latency process intervention. In this work, we propose a low-latency LPBF defect detection algorithm based on fusing images from high-speed cameras operating in the visible and Short-Wave Infrared (SWIR) spectral ranges. First, we design an experiment in which an object is printed while porosity defects are deliberately imposed on the print and the laser’s melt pool is recorded with both high-speed cameras. We then train variational autoencoders on images from each camera to extract and fuse two sets of corresponding features. The melt pool recordings are annotated with pore densities extracted from a CT scan of the printed object, and these annotations are used to train and evaluate a fast neural network model that predicts the occurrence of porosity from the fused features. We compare the prediction performance of our sensor-fused model with models trained on image features from each camera separately. We observe that SWIR imaging is sensitive to keyhole porosity, while the visible-range optical camera is sensitive to lack-of-fusion porosity. By fusing features from both cameras, we are able to accurately predict both pore types, thereby outperforming both single-camera systems.
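A minimal sketch of the feature-level fusion step described above, assuming latent features have already been extracted by the two pretrained VAE encoders: the two latent vectors are concatenated and fed to a small, low-latency classification head. Class names, latent dimensions, and the single-logit output are illustrative assumptions, not details taken from the paper.

```python
# Hypothetical late-fusion head: concatenate VAE latent features from the
# visible and SWIR cameras and predict a porosity logit per melt-pool frame.
import torch
import torch.nn as nn

class FusionPorosityClassifier(nn.Module):
    def __init__(self, latent_visible: int = 32, latent_swir: int = 32, hidden: int = 64):
        super().__init__()
        # Lightweight head to keep inference latency low.
        self.head = nn.Sequential(
            nn.Linear(latent_visible + latent_swir, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),  # porosity logit
        )

    def forward(self, z_visible: torch.Tensor, z_swir: torch.Tensor) -> torch.Tensor:
        z = torch.cat([z_visible, z_swir], dim=-1)  # feature-level (sensor) fusion
        return self.head(z)

# Example with dummy latent vectors for a batch of 8 melt-pool frames.
model = FusionPorosityClassifier()
z_vis = torch.randn(8, 32)   # stand-in for visible-camera VAE features
z_swir = torch.randn(8, 32)  # stand-in for SWIR-camera VAE features
porosity_logits = model(z_vis, z_swir)
```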
High Throughput JPEG 2000 (HTJ2K, JPEG 2000 Part 15) is a new part of the JPEG 2000 family of standards. HTJ2K provides an alternative block coding algorithm that can be used in place of the EBCOT algorithm used in JPEG 2000 Part 1. Similar to JPEG and HEVC intra mode, a desired option for HTJ2K (not available in previous versions of JPEG 2000) is a simplified Quality Factor (QF) to steer the compression process. This way, the user can modify the compression level by simply increasing or decreasing the QF value without knowing the exact underlying compression ratio or the utilized bit depth. In this paper, we report the results of a subjective experiment designed and implemented to test a proposed QF function for the new HTJ2K encoder, modeled on the Q factors of the JPEG encoder. A new test methodology is utilized for this purpose, and a new formulation of the Q factor is derived for HTJ2K based on the experimental results. A statistical analysis of the experimental results is provided alongside the procedure for deriving the QF range. The QF range derived in this experiment is planned to be integrated into implementations of High Throughput JPEG 2000.
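For context on what such a quality factor controls, the sketch below shows the classic IJG libjpeg mapping from a Q factor to quantization-table scaling, which is the JPEG behaviour the proposed HTJ2K QF is modeled on. The HTJ2K QF formulation derived in the paper is not reproduced here.

```python
# Classic IJG libjpeg quality-factor convention, shown only as background:
# Q in [1, 100] is mapped to a percentage scaling of the base quantization table.
def jpeg_quality_scaling(quality: int) -> int:
    """Return the percentage scaling applied to the base quantization table."""
    quality = max(1, min(100, quality))
    if quality < 50:
        return 5000 // quality
    return 200 - 2 * quality

def scale_quant_table(base_table, quality: int):
    """Scale a base quantization table for the requested quality, clamped to [1, 255]."""
    s = jpeg_quality_scaling(quality)
    return [min(255, max(1, (q * s + 50) // 100)) for q in base_table]
```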
Holographic imaging modalities are gaining increasing interest in various application domains ranging from microscopy to high-end autostereoscopic displays. While much effort has been spent on the development of the optics, photonics and micro/nano-electronics that enable the design of holographic capturing and visualization devices, relatively little research effort has been targeted towards the underlying signal processing. One significant challenge relates to the fact that the data volumes needed to support these holographic applications are rapidly increasing: for visualization devices, and in particular holographic displays, unprecedented resolutions are desired, resulting in huge bandwidth requirements on both the communication channels and the internal computing and data channels. An additional challenge is that we are handling an interference-based modality that is complex-amplitude in nature. Both challenges mean that, for example, classic data representations and coding solutions fail to handle holographic data in an effective way. This paper attempts to provide some insights that help alleviate, or at least reduce, these bottlenecks and sketches an avenue for the development of efficient source coding solutions. Moreover, it also outlines the efforts the JPEG committee is undertaking in the context of the JPEG Pleno standardization programme to roll out a path for data interoperability of holographic solutions.
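As a minimal illustration of the representation problem mentioned above (and not a technique from the paper): a complex-valued wavefield must first be mapped onto real-valued planes, e.g. real/imaginary components quantized to 8 bit, before a conventional 2D image codec designed for real-valued pictures can be applied at all.

```python
# Sketch: split a complex hologram into two normalized 8-bit planes so that a
# conventional real-valued 2D codec could process them. Illustrative only.
import numpy as np

def complex_to_planes(hologram: np.ndarray):
    """Split a complex wavefield into two normalized 8-bit planes (real, imaginary)."""
    planes = []
    for comp in (hologram.real, hologram.imag):
        lo, hi = comp.min(), comp.max()
        norm = (comp - lo) / (hi - lo + 1e-12)          # map to [0, 1]
        planes.append(np.round(norm * 255).astype(np.uint8))
    return planes

# Example: a random 512x512 complex wavefield stands in for a hologram.
holo = np.random.randn(512, 512) + 1j * np.random.randn(512, 512)
real_plane, imag_plane = complex_to_planes(holo)
```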
In this research, we have adapted our recently proposed Versatile Similarity Measure (VSM) for holographic data analysis. This measure benefits from attractive mathematical properties: boundedness to [0, 1], relative error weighting based on the magnitudes of the signals, steerable similarity between the original and negative phase, symmetry with respect to the ordering of the arguments, and regularity of at least a continuous function. Exploiting its versatile design, we present a set of VSM constructions specifically tailored to the characteristics of the complex wavefields of holograms. Performance analysis results are also provided by comparing the proposed constructions, as fast, stand-alone perceptual quality predictors, to the few available competitors in the field, namely MSE and the average SSIM of the real and imaginary parts of holograms. Comparing their visual quality prediction scores with the mean opinion scores (MOS) of the hologram reconstructions shows a significant gain for all of the VSM constructions proposed in this paper, paving the way towards designing highly efficient perceptual quality predictors for holographic data and demonstrating the potential of utilizing the VSM for other applications working with complex-valued data.
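The VSM constructions themselves are not reproduced here, but the two baseline competitors named in the abstract are simple to state. The sketch below computes them on complex wavefields, assuming scikit-image is available; array shapes and data ranges are illustrative.

```python
# Baseline quality predictors referenced in the abstract: MSE on the complex
# field and the average SSIM of the real and imaginary parts.
import numpy as np
from skimage.metrics import structural_similarity

def complex_mse(ref: np.ndarray, test: np.ndarray) -> float:
    """Mean squared error between two complex wavefields."""
    return float(np.mean(np.abs(ref - test) ** 2))

def avg_ssim_real_imag(ref: np.ndarray, test: np.ndarray) -> float:
    """Average of SSIM computed separately on the real and imaginary parts."""
    ssim_re = structural_similarity(
        ref.real, test.real, data_range=ref.real.max() - ref.real.min())
    ssim_im = structural_similarity(
        ref.imag, test.imag, data_range=ref.imag.max() - ref.imag.min())
    return 0.5 * (ssim_re + ssim_im)
```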
Digital holography is mainly used today for metrology and microscopic imaging and is emerging as an important potential technology for future holographic television. To generate the holographic content, computer-generated holography (CGH) techniques convert geometric descriptions of 3D scene content into holograms. To model different surface types, an accurate model of light propagation has to be considered, including, for example, specular and diffuse reflection. In previous work, we proposed a fast CGH method for point cloud data using multiple wavefront recording planes, look-up tables (LUTs) and occlusion processing. This work extends our method to account for diffuse reflections, enabling rendering of deep 3D scenes in high resolution with wide viewing-angle support. This is achieved by modifying the spectral response of the light propagation kernels contained in the look-up tables. However, holograms encoding diffuse reflective surfaces exhibit significant amounts of speckle noise, a problem inherent to holography. Hence, techniques to reduce speckle noise are evaluated in this paper. Moreover, we also propose a technique to suppress aperture diffraction during numerical, view-dependent rendering by apodizing the hologram. Results are compared visually and in terms of their respective computational efficiency. The experiments show that by modelling diffuse reflection in the LUTs, a more realistic yet computationally efficient framework for generating high-resolution CGH is achieved.
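A hedged sketch of the apodization idea mentioned above: a hard rectangular aperture cut out of the hologram for view-dependent rendering produces strong diffraction ringing, so the aperture edges are tapered with a smooth window before numerical propagation. The specific window (a Tukey taper here) and its parameters are illustrative assumptions, not the paper's exact choice.

```python
# Taper the borders of an extracted hologram sub-aperture to suppress
# aperture diffraction in numerical, view-dependent reconstruction.
import numpy as np
from scipy.signal.windows import tukey

def apodize_aperture(aperture: np.ndarray, taper: float = 0.25) -> np.ndarray:
    """Apply a separable 2D Tukey window to a complex hologram aperture."""
    rows, cols = aperture.shape
    window = np.outer(tukey(rows, taper), tukey(cols, taper))
    return aperture * window

# Example: apodize a 1024x1024 sub-aperture before reconstructing one view.
aperture = np.random.randn(1024, 1024) + 1j * np.random.randn(1024, 1024)
apodized = apodize_aperture(aperture)
```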
Recently, several papers have reported efficient techniques to compress digital holograms. Typically, the rate-distortion performance of these solutions was evaluated by means of objective metrics such as the Peak Signal-to-Noise Ratio (PSNR) or the Structural Similarity Index Measure (SSIM), applied either to the decoded hologram or to the reconstruction of the compressed hologram. Given the specific nature of holograms, it is relevant to question to what extent these metrics provide information on the effective visual quality of the reconstructed hologram. Since no holographic display technology is available today that would allow for a proper subjective evaluation experiment, we propose in this paper a methodology based on assessing the quality of a reconstructed compressed hologram on a regular 2D display. In parallel, we also evaluate several coding engines, namely JPEG configured with the default perceptual quantization tables and with uniform quantization tables, JPEG 2000, JPEG 2000 extended with arbitrary packet decompositions and direction-adaptive filters, and H.265/HEVC configured in intra-frame mode. The experimental results indicate that the perceived visual quality and the objective measures are well correlated. Moreover, the superiority of the HEVC and the extended JPEG 2000 coding engines was also confirmed, particularly at lower bitrates.
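The sketch below illustrates the general shape of such a rate-distortion evaluation: a reconstructed hologram view (here a dummy 8-bit grayscale array) is encoded at several quality settings and the PSNR of the decoded result is recorded against the bitrate. Plain baseline JPEG via Pillow is used as a stand-in; the HEVC and extended JPEG 2000 configurations evaluated in the paper are not reproduced here.

```python
# Rate-distortion sweep on a reconstructed view using baseline JPEG (Pillow).
import io
import numpy as np
from PIL import Image

def psnr(ref: np.ndarray, test: np.ndarray, peak: float = 255.0) -> float:
    """PSNR in dB between two 8-bit images."""
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    return float(10.0 * np.log10(peak ** 2 / mse))

def jpeg_rd_point(view: np.ndarray, quality: int):
    """Encode one reconstructed view with JPEG; return (bits per pixel, PSNR)."""
    buf = io.BytesIO()
    Image.fromarray(view).save(buf, format="JPEG", quality=quality)
    decoded = np.asarray(Image.open(io.BytesIO(buf.getvalue())))
    bpp = 8.0 * buf.getbuffer().nbytes / view.size
    return bpp, psnr(view, decoded)

# Example sweep over a dummy 8-bit reconstruction.
view = (np.random.rand(512, 512) * 255).astype(np.uint8)
rd_curve = [jpeg_rd_point(view, q) for q in (10, 30, 50, 70, 90)]
```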