Research project
Design of a deep neural network adapted to geospatial point clouds for 3D mapping of large urban environments
A common representation for sensory data and for 3D objects and scenes is the "point cloud", which can be obtained from terrestrial or airborne LiDAR. Advances in these sensing technologies are making point clouds increasingly reliable and accurate. They are already used in many applications, such as autonomous vehicle navigation, augmented/virtual reality, and urban management.
LiDAR point clouds provide reliable depth information which, in contrast to images, can be used to accurately localize objects and characterize their shapes. On the other hand, they are unstructured and sparse, with highly variable point density, which makes them more difficult to process. Yet identifying the semantic meaning of the observed 3D structure is necessary for many of the applications mentioned above. Consequently, these problems have attracted considerable research attention.
Some approaches rely on manually crafted feature representations of the point cloud. In others, point clouds are projected into a perspective view so that image-based feature extraction techniques can be applied. Still others rasterize the point cloud into a 3D voxel grid and encode each voxel with handcrafted features. A major breakthrough in image recognition and detection came from moving from hand-crafted features to machine-learned features. Following this trend, point clouds can be represented on a regular volumetric grid (voxels) so that 3D Convolutional Neural Networks (CNNs) can be applied in an end-to-end fashion.
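The voxelization step mentioned above can be sketched in a few lines: each point is assigned to the cell of a regular 3D grid, and occupied cells (with their point counts) form the input to a volumetric network. The NumPy sketch below is purely illustrative; the 0.2 m voxel size and the min-corner grid origin are arbitrary assumptions, not a method from this project.

```python
import numpy as np

def voxelize(points, voxel_size=0.2):
    """Rasterize an (N, 3) point cloud into a sparse occupancy grid.

    Illustrative sketch: voxel_size (in the cloud's units) is an
    arbitrary choice, and the grid is anchored at the cloud's minimum
    corner. Returns each point's integer voxel index, the set of
    occupied voxels, and the number of points falling in each.
    """
    origin = points.min(axis=0)                                   # grid anchor
    idx = np.floor((points - origin) / voxel_size).astype(np.int64)
    occupied, counts = np.unique(idx, axis=0, return_counts=True)  # sparse occupancy
    return idx, occupied, counts

# Toy cloud: 4 points, two of which fall in the same voxel
pts = np.array([[0.05, 0.05, 0.05],
                [0.10, 0.15, 0.02],
                [0.45, 0.05, 0.05],
                [0.05, 0.45, 0.05]])
idx, occupied, counts = voxelize(pts)
```

This sparse (occupied-cells-only) representation also hints at the cost issue raised below: at a fine resolution over a large urban scene, the dense grid grows cubically while most cells stay empty.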
Still, these approaches are computationally expensive and time-consuming, and they do not scale to outdoor terrestrial LiDAR point clouds. Approaches that operate directly on point cloud data are therefore highly desirable, since they avoid costly preprocessing and format-conversion steps. However, the question of the best network architecture for processing unstructured 3D point clouds remains largely open.
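One key idea behind networks that consume raw points directly, popularized by PointNet, is to apply a shared per-point transform and then aggregate with a symmetric function (e.g. max pooling), so the output is invariant to the ordering of the points. The NumPy sketch below illustrates only this invariance property; the layer size and random weights are hypothetical stand-ins for a trained network, not an architecture proposed here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions for illustration: 3-D input points, 8-D feature.
W = rng.standard_normal((3, 8))   # weights of a single shared per-point layer
b = rng.standard_normal(8)

def global_descriptor(points):
    """PointNet-style global feature: a shared per-point transform
    followed by max pooling, a symmetric aggregation that makes the
    result independent of the order of the input points."""
    per_point = np.maximum(points @ W + b, 0.0)   # shared linear layer + ReLU
    return per_point.max(axis=0)                  # order-independent pooling

pts = rng.standard_normal((100, 3))
f1 = global_descriptor(pts)
f2 = global_descriptor(pts[::-1])   # same cloud, points in reversed order
# f1 and f2 are identical: the descriptor ignores point ordering
```

Because no voxel grid or projection is built, such architectures consume the unstructured cloud as-is, which is what makes them attractive for the large-scale outdoor data targeted in this project.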
In this work, we aim to tackle these problems and determine the best approach to design and develop a deep learning architecture adapted to outdoor point clouds acquired by terrestrial mobile mapping LiDAR in large-scale urban environments.
Supervisor: Sylvie Daniel
Co-supervisor: Denis Laurendeau