With the increasing availability of, and need for, point clouds and 3D urban models, the inclusion of semantic information is becoming ever more important to facilitate the use and exploitation of such data. Traditional deep learning methods applied to 3D geospatial data suffer from poor generalisation, limited adaptability and a lack of explainability. Data annotation is also a major bottleneck, being time-consuming and error-prone.
The research will work with photogrammetric, RGB-D and LiDAR 3D data in order to:
(i) investigate self-supervised and unsupervised 3D classification methods, including few-shot and zero-shot learning;
(ii) design models that adapt and generalise better across scenarios;
(iii) make 3D semantic segmentation results more explainable.
This research position calls for a highly motivated and skilled researcher with a strong combination of computer science, AI and geomatics knowledge.