3D data visualization is a non-trivial task; however, high-quality data processing and visualization are crucial across computer vision applications, especially those that operate in real environments and require precise results. Many industries can benefit from automated object detection and analysis. Effective retrieval and digitization of environment information open up great prospects in robotics and in the design of systems that rely on scene reconstruction into point clouds. Such a solution also offers new possibilities for mixed reality systems: for example, with reconstructed scene data a virtual light source can be added to illuminate the room, and reflections of virtual objects can be cast in mirrors. A breakthrough in training neural networks on point clouds occurred recently with the introduction of the "PointNet" architecture, and the trend toward working with 3D data continues to grow. The present research aims to implement an approach to interior object recognition and 3D reconstruction that works with indoor scenes and low-quality, incomplete lidar data. The method enables the selection of interior objects from the scene as well as the determination of their locations and dimensions. The PointNet neural network architecture, trained on the ScanNet dataset, was used to annotate and segment the point cloud, and the "Total3D Understanding" neural network was employed to generate a triangle mesh. As a result, an interior environment reconstruction method was built that uses RGB images and point clouds as input data. A simple room interior reconstruction example is provided, along with an assessment of the result quality.
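The segmentation-plus-measurement step described above can be illustrated with a minimal sketch: a PointNet-style segmenter is abstracted as a callable returning one semantic label per point, and each labeled cluster is summarized by an axis-aligned bounding box giving its location and dimensions. The function names, label conventions, and the toy segmenter below are illustrative assumptions, not the paper's actual implementation; in the real pipeline the segmenter would be a PointNet model trained on ScanNet, and the detected object regions would then feed the Total3D Understanding stage for mesh generation.

```python
# Minimal sketch, assuming a per-point semantic segmenter and axis-aligned
# bounding boxes as the object location/dimension estimate. Names here are
# illustrative placeholders, not the paper's code.
from typing import Callable, Dict, List

import numpy as np


def extract_objects(
    points: np.ndarray,                           # (N, 3) xyz coordinates from lidar
    segment: Callable[[np.ndarray], np.ndarray],  # returns per-point labels, shape (N,)
    ignore_labels: tuple = (0,),                  # e.g. floor / unannotated points
) -> List[Dict]:
    """Segment the cloud and return a location and size estimate per object label."""
    labels = segment(points)
    objects = []
    for label in np.unique(labels):
        if label in ignore_labels:
            continue
        cluster = points[labels == label]
        lo, hi = cluster.min(axis=0), cluster.max(axis=0)
        objects.append({
            "label": int(label),
            "center": (lo + hi) / 2.0,   # object location
            "size": hi - lo,             # object dimensions (bounding-box extents)
        })
    return objects


if __name__ == "__main__":
    # Stand-in for a trained PointNet segmenter: a toy rule that marks points
    # higher than 0.5 m as label 1 ("object") and the rest as label 0 ("floor").
    toy_segmenter = lambda pts: (pts[:, 2] > 0.5).astype(np.int64)

    cloud = np.random.rand(1000, 3) * np.array([4.0, 3.0, 2.5])  # synthetic room-sized cloud
    for obj in extract_objects(cloud, toy_segmenter):
        print(obj["label"], obj["center"].round(2), obj["size"].round(2))
```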