BEGIN:VCALENDAR
BEGIN:VEVENT
SUMMARY:CS Student Colloquium: Zhenzhen Gao - City-Scale Aerial LiDAR Point Cloud Visualization
DESCRIPTION:Speaker: Zhenzhen Gao, USC\nTalk Title: City-Scale Aerial LiDAR Point Cloud Visualization\nSeries: Student Seminar Series\n \n Abstract: Aerial LiDAR (Light Detection and Ranging) is a cost-effective way to acquire terrain and urban information by mounting a downward-scanning laser on a low-flying aircraft. It produces huge volumes of unconnected 3D points. This thesis focuses on interactive visualization of aerial LiDAR point clouds of cities, which is applicable to a number of areas including virtual tourism, security, land management, and urban planning.\n \n A framework must address several challenges in order to deliver useful visualizations of aerial LiDAR cities. First, the data is 2.5D, in that the sensor captures dense detail only on the surfaces facing it, leaving few samples on vertical building walls. Second, the data often suffers from noise and under-sampling. Finally, the sheer size of the data can easily exceed the memory capacity of a computer system.\n \n This thesis first introduces a visually-complete rendering framework for aerial LiDAR cities. Using inferred classification information, building walls and ground areas occluded by tree canopies are completed either through point cloud augmentation in pre-processing or through online procedural geometry generation. A multi-resolution out-of-core strategy and GPU-accelerated rendering enable interactive visualization of data of virtually unlimited size. Adding only a slight overhead to existing point-based approaches, the framework provides visual quality comparable to that of off-line pre-computed 3D polygonal models.\n \n The thesis then presents a scalable out-of-core algorithm for mapping colors from aerial oblique imagery to city-scale aerial LiDAR points. Without intensive processing of the points, colors are mapped via a modified visibility pass of GPU splatting together with a weighting scheme that leverages image resolution and surface orientation.\n \n To alleviate visual artifacts caused by noise and under-sampling, the thesis presents an off-line point cloud refinement algorithm. By explicitly regularizing building boundary points, the algorithm effectively removes noise, fills gaps, and preserves and enhances both normal and position discontinuities for piecewise-smooth buildings of arbitrary shape and complexity.\n \n Finally, the thesis introduces a new multi-resolution rendering framework that supports real-time refinement of aerial LiDAR cities. Without complex computation or user intervention, hierarchical hybrid structures are constructed solely from a curvature analysis of the points in uniformly sized spatial partitions, indicating whether each partition should be represented as points or polygons. With the help of these structures, both rendering and refinement adapt dynamically to view and curvature. Compared to visually-complete rendering, the new framework delivers comparable visual quality with less than an 8% increase in pre-processing time and 2-5 times higher rendering frame rates. Experiments on several cities show that the refinement improves rendering quality under large magnification while meeting real-time constraints.\n \n Host: CS PHD Committee
DTSTART:20140403T160000
LOCATION:SAL 101
URL;VALUE=URI:
DTEND:20140403T170000
END:VEVENT
END:VCALENDAR