The current pace of information-system development demonstrates the scale of demand for the digitization of many aspects of our lives, such as medicine, industry, and the documentation of cultural heritage. Digitization is the process of converting real-world objects into digital representations. To acquire complete and detailed information about the whole surface of an object, several 3D scans have to be taken from different perspectives. The resulting 3D object is acquired as a large number of overlapping 3D point clouds. Depending on the quality of the 3D scanner and on surface properties, the point clouds may represent a noisy geometric surface and incorrect colour. Moreover, the directional point clouds are not perfectly aligned, so registration between them must be applied. Point cloud registration is a complex task that cannot always be automated; usually, the entire process has to be supervised by a skilled operator. Registration is typically divided into two stages: initial and final matching. Initial matching is the more complex stage; in this scenario, it is supported by the known system calibration, which covers, e.g., the robotic arm, the scanner head, and the light sources. Applying ICP-based algorithms afterwards is usually sufficient to obtain an appropriate final match. The difficulty of point cloud registration increases with the number of directional point clouds to integrate. The aim of this paper is to propose a methodology that reduces or even fully eliminates some of the registration issues encountered during the reconstruction of the Museum of King Jan III's Palace at Wilanów.
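The ICP-based final matching mentioned above can be illustrated with a minimal point-to-point ICP sketch. This is an illustration only, not the paper's implementation: it uses plain NumPy, brute-force nearest-neighbour correspondences, and the Kabsch/SVD estimate of the rigid transform, and it assumes (as the abstract describes) that initial matching has already brought the clouds roughly into alignment.

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping src onto dst (Kabsch/SVD)."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cd - R @ cs
    return R, t

def icp(src, dst, iters=50):
    """Point-to-point ICP: iteratively align `src` to `dst`.

    Correspondences are found by brute-force nearest neighbour, which is
    fine for demo-sized clouds; real pipelines use k-d trees and outlier
    rejection. Returns the aligned cloud and the accumulated R, t.
    """
    cur = src.copy()
    R_total, t_total = np.eye(3), np.zeros(3)
    for _ in range(iters):
        d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        matches = dst[d2.argmin(axis=1)]       # closest dst point per cur point
        R, t = best_rigid_transform(cur, matches)
        cur = cur @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
    return cur, R_total, t_total
```

Because ICP only converges to the nearest local minimum, the quality of the initial matching (here supported by the known system calibration) is what makes this final stage reliable.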
The rapid development of machine learning techniques opens new application fields for Unmanned Aerial Vehicle technology, including the detection and classification of objects. It is possible to detect buildings, vehicles, or various objects present near pipelines and industrial buildings. In some cases, such as monitoring of critical infrastructure, detection accuracy is crucial. Classification of 2D data makes it possible to detect an object and determine its basic parameters. 3D data, which can also be obtained from drones, supplements 2D data and can significantly increase the accuracy of object detection and classification. It also carries additional information and can simplify the measurement of dimensions of already classified objects. Furthermore, some objects that are difficult to classify in 2D images can be classified easily with 3D data: for example, excavations in the ground, or objects partially overshadowed by trees or fully covered by dried leaves. 3D data collected by drones is typically obtained with SfM (Structure from Motion) and Lidar (Light Detection and Ranging) methods. SfM provides three-dimensional data from the photographs collected for 2D analysis; its advantage is high-quality texture, but it is not usable for night flights owing to the lack of feature points in the images. Lidar is a laser measurement method based on the time of flight of a laser beam reflected from an obstacle (object); it yields 3D data in all lighting conditions, but the collected data carries no colour information. Combining both methods provides dense, accurate point clouds with texture, which can then be used for the detection and classification of objects. In this paper, a pipeline for the acquisition, merging, and processing of 3D data gathered by drones is presented. The first step is to assemble the Lidar point clouds into one coordinate system using GPS data.
The Lidar point cloud is then integrated with the SfM point clouds. 3D data generated in this way also includes the camera coordinates at the moments when the SfM photographs were taken. The full 3D model of the monitored area, containing GPS coordinates and camera positions, may be used to simplify the planning of a supplementary flight covering places where no measurement data was obtained or where the point cloud density was too low. With a point cloud of the reconstructed object prepared in this way, it is possible to compare point clouds, features extracted from them, and the geometry of already classified objects over time.
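Two steps of the pipeline described above — placing scans into one GPS-referenced frame, and comparing clouds across flights — can be sketched as follows. This is a minimal illustration, not the paper's implementation: it uses a small-area equirectangular approximation for the GPS-to-local conversion (a real pipeline would use a proper ENU/UTM library and full sensor orientation), and all function names are hypothetical.

```python
import numpy as np

EARTH_R = 6_371_000.0  # mean Earth radius in metres (spherical approximation)

def geodetic_to_local(lat, lon, alt, origin):
    """Convert (lat, lon) in degrees and alt in metres to a local
    East-North-Up offset from `origin`, using an equirectangular
    approximation that is adequate over a survey area of a few km."""
    lat0, lon0, alt0 = origin
    east = np.deg2rad(lon - lon0) * np.cos(np.deg2rad(lat0)) * EARTH_R
    north = np.deg2rad(lat - lat0) * EARTH_R
    up = alt - alt0
    return np.array([east, north, up])

def assemble_scans(scans, poses, origin):
    """Place each scan (points in the sensor frame) at its GPS pose and
    merge into one cloud. `poses` are (lat, lon, alt) per scan; sensor
    orientation is assumed to have been applied already."""
    parts = [pts + geodetic_to_local(*pose, origin)
             for pts, pose in zip(scans, poses)]
    return np.concatenate(parts, axis=0)

def change_mask(reference, current, threshold):
    """Flag points of `current` farther than `threshold` from every point
    of `reference` (brute-force nearest neighbour, fine for demo clouds).
    Such points indicate changes over time or gaps needing a re-flight."""
    d2 = ((current[:, None, :] - reference[None, :, :]) ** 2).sum(-1)
    return np.sqrt(d2.min(axis=1)) > threshold
```

The same nearest-neighbour distances can also be thresholded the other way around to locate regions of the reference model where the new flight produced too few points, i.e. candidates for the supplementary flight mentioned above.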