Image matching and image super-resolution via deep learning
The advancement of 3D depth sensors, such as LIDAR (Light Detection And Ranging) scanners, has provided an effective alternative to traditional CAD-based and image-based approaches for 3D modeling. The output of these sensors generally consists of 3D color point clouds and LIDAR images collected at the time of the LIDAR survey. However, modeling 3D scenes under different conditions requires registering natural images with the LIDAR images for 2D image-domain approaches. This is nontrivial due to the repetition and ambiguity that often occur in man-made scenes, as well as the variety of properties different renderings of the same subject can possess, such as lighting conditions, content changes, camera sensor types, focal lengths, and exposure values. Moreover, creating very high-quality 3D models takes a very long time to acquire and often also requires expensive high-end scanners. This dissertation addresses both of these obstacles. For the registration problem, we propose a 2D-2D matching pipeline that builds upon traditional keypoint matching techniques and uses contextual and mid-level information to handle these challenging scenarios. For the visualization problem, we propose a novel deep-learning-based approach that can generate high-resolution, photorealistic point renderings from low-resolution point clouds. The proposed method generates high-quality point rendering images very efficiently and can be used for interactive navigation of large-scale 3D scenes as well as image-based localization.
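As an illustration of the traditional keypoint matching baseline that such a pipeline builds upon, the sketch below implements nearest-neighbor descriptor matching with Lowe's ratio test, which rejects the ambiguous correspondences that repetitive man-made scenes produce. This is a minimal NumPy sketch; the function name `ratio_test_match`, the synthetic descriptors, and the ratio threshold of 0.8 are illustrative assumptions, not the method proposed in this dissertation.

```python
import numpy as np

def ratio_test_match(desc_a, desc_b, ratio=0.8):
    """Match descriptors from image A to image B using Lowe's ratio test.

    A candidate match (i, j) is kept only when the nearest neighbor in B
    is sufficiently closer than the second-nearest; this filters out the
    ambiguous matches caused by repeated structures.
    """
    matches = []
    for i, d in enumerate(desc_a):
        # Euclidean distance from descriptor d to every descriptor in B
        dists = np.linalg.norm(desc_b - d, axis=1)
        j, k = np.argsort(dists)[:2]  # nearest and second-nearest
        if dists[j] < ratio * dists[k]:
            matches.append((i, j))
    return matches

# Synthetic example: b contains noisy copies of a plus unrelated clutter
rng = np.random.default_rng(0)
a = rng.normal(size=(5, 32))
b = np.vstack([a + 0.01 * rng.normal(size=(5, 32)),
               rng.normal(size=(20, 32))])
print(ratio_test_match(a, b))
```

In practice the descriptors would come from a detector such as SIFT, and contextual or mid-level cues would then be used to disambiguate the matches the ratio test alone cannot resolve.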
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivs 3.0 License