Single view reconstruction and camera calibration using 2D room scene image
[ACCESS RESTRICTED TO THE UNIVERSITY OF MISSOURI AT AUTHOR'S REQUEST.]

Single view reconstruction is a fundamental problem in computer vision. One aim is to automatically reconstruct a 3D model from a single 2D image of a room scene. Humans have an amazing ability to instantly grasp the structure of a room, even in the presence of occluding objects. For a computer, however, this is far from a simple task, and it must be guided by suitable algorithms. User interaction is one way to help the computer interpret the scene, but ideally the final result should be obtained with a single mouse click and little or no user input. To achieve this goal, I synthesize many relevant approaches and introduce some novel methods to improve the result.

In this thesis, I first find line segments in the input room scene image by applying the Canny edge detector and linking edge pixels. The parameters of the Canny method can be adjusted to control the number of line segments; a separate minimal-length parameter guarantees that all detected segments are longer than a user-defined threshold. The next step is to use these line segments to estimate three vanishing points. In a perspective image, lines that are parallel in the real world appear to converge to a single point, called a vanishing point, which provides information about the third dimension. Once the three vanishing points are computed, the camera can be calibrated by estimating the focal length and principal point. A crucial step for automatic reconstruction is estimating the room layout, which identifies the exact areas of the walls, floor, and ceiling. The final step is to reconstruct each plane of the room using the calibrated camera and to map texture onto each face of the room from the original input image. Experiments show that the system works well in most cases.
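The minimal-length filter described above can be sketched in a few lines; the function name and data layout here are hypothetical, not taken from the thesis:

```python
import math

# Hypothetical helper mirroring the described step: after Canny edge
# detection and edge-pixel linking, discard any candidate segment
# shorter than a user-defined threshold (in pixels).
def filter_segments(segments, min_length):
    """Keep only segments ((x1, y1), (x2, y2)) at least min_length long."""
    return [s for s in segments if math.dist(s[0], s[1]) >= min_length]

segments = [((0, 0), (3, 4)),      # length 5
            ((10, 10), (11, 10)),  # length 1
            ((0, 0), (30, 40))]    # length 50
print(filter_segments(segments, 5))  # -> [((0, 0), (3, 4)), ((0, 0), (30, 40))]
```

Raising the threshold trades recall for robustness: short, noisy segments are the ones most likely to skew the later vanishing point estimation.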
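As a minimal sketch of the vanishing point step (not the thesis' exact estimator, which works on many segments at once): in homogeneous coordinates, the line through image points p and q is their cross product, and two lines meet at the cross product of the lines. Parallel scene lines extended in the image intersect at the vanishing point.

```python
# Homogeneous-coordinate intersection of two image lines.
def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def line_through(p, q):
    # Line joining two image points (x, y), lifted to (x, y, 1).
    return cross((p[0], p[1], 1.0), (q[0], q[1], 1.0))

def intersect(l1, l2):
    x, y, w = cross(l1, l2)
    return (x / w, y / w)  # assumes the lines are not parallel (w != 0)

# Two segments whose extensions meet at the vanishing point (100, 50):
l1 = line_through((0, 0), (40, 20))
l2 = line_through((0, 100), (60, 70))
print(intersect(l1, l2))  # -> (100.0, 50.0)
```

With many noisy segments per direction, a robust fit (e.g. least squares or RANSAC over all pairwise intersections) replaces this single pairwise intersection.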
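For the calibration step, one standard identity (a hedged sketch, assuming square pixels, zero skew, and a known principal point c, e.g. the image center) is that vanishing points v1 and v2 of two mutually orthogonal scene directions satisfy (v1 - c) . (v2 - c) + f^2 = 0, which gives the focal length f directly:

```python
import math

def focal_from_vps(v1, v2, c):
    """Focal length from vanishing points of two orthogonal directions.

    Assumes square pixels, zero skew, and principal point c.
    """
    d = ((v1[0] - c[0]) * (v2[0] - c[0]) +
         (v1[1] - c[1]) * (v2[1] - c[1]))
    if d >= 0:
        raise ValueError("vanishing points inconsistent with orthogonal directions")
    return math.sqrt(-d)

# Synthetic check: a camera with f = 800 and principal point (320, 240);
# the orthogonal directions (1, 0, 1) and (-1, 0, 1) project to
# vanishing points (320 + 800, 240) and (320 - 800, 240).
print(focal_from_vps((1120, 240), (-480, 240), (320, 240)))  # -> 800.0
```

With three orthogonal vanishing points, as in a room scene, the principal point need not be assumed: it can be recovered as the orthocenter of the triangle the three vanishing points form.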
The performance depends greatly on the accuracy of the automatically estimated layout. There are two ways to improve it: modification by the user, or automatic refinement by the system. The final textured 3D model is shown in an OpenGL viewport, where the user can zoom in, zoom out, and rotate it.
Access is limited to the University of Missouri - Columbia.