KEYWORDS: 3D modeling, Nonuniform sampling, 3D image processing, Multilayers, Data modeling, 3D acquisition, Clouds, Image quality, Optical engineering, 3D displays
The representation of a three-dimensional (3-D) scene is essential in multiview imaging technologies. We present a unified geometry and texture representation based on global resampling of the scene. A layered data map representation with a distance-dependent nonuniform sampling strategy is proposed. It can locally increase the level of detail of the 3-D structure while remaining compact in size. The 3-D point cloud obtained from the multilayered data map is used for view rendering. For any given viewpoint, image synthesis at different levels of detail is carried out using the quadtree-based nonuniformly sampled 3-D data points. Experimental results are presented using 3-D models of reconstructed real objects.
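The distance-dependent quadtree idea can be sketched as follows. This is an illustrative assumption, not the paper's implementation: the function name, the split tolerance (1% of the block's mean depth), and the toy depth map are all made up. A block of the depth map is subdivided only when its depth variation exceeds a tolerance that grows with distance, so nearby structure ends up sampled more densely than distant structure.

```python
import numpy as np

def build_quadtree(depth, x, y, size, min_size=2):
    """Recursively subdivide a square block of a depth map.

    A block is split when its depth variation exceeds a tolerance that
    grows with mean distance (assumed form: 1% of the mean depth), so
    near regions are sampled more finely than far ones.
    """
    block = depth[y:y + size, x:x + size]
    mean_d = float(block.mean())
    tol = 0.01 * mean_d                     # distance-dependent tolerance
    if size <= min_size or np.ptp(block) <= tol:
        return [(x, y, size, mean_d)]       # leaf: one sample per block
    half = size // 2
    leaves = []
    for dy in (0, half):
        for dx in (0, half):
            leaves += build_quadtree(depth, x + dx, y + dy, half, min_size)
    return leaves

# Toy scene: a near object (depth 20) in one quadrant of a far background
# (depth 100), with the same small depth variation (0.5) everywhere.
depth_map = np.full((8, 8), 100.0)
depth_map[:4, :4] = 20.0
depth_map[::2, ::2] += 0.5
leaves = build_quadtree(depth_map, 0, 0, 8)
```

Because the tolerance scales with distance, the same 0.5 depth variation triggers subdivision only in the near quadrant, which ends up represented by four small leaves while each far quadrant stays a single leaf.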
Motion blur results from the finite acquisition time of practical cameras and the relative motion between the camera and moving objects. We present a method for measuring the speed of spherical objects from motion-blurred images captured by a digital camera. The object is assumed to undergo straight-line, uniform-velocity motion, and the speed is calculated from the imaging geometry and blur-extent estimates. We establish a link between the motion-blur information in a 2-D image and the speed of the moving object. Experimental results are presented for real-scene images.
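The geometry-to-speed link can be sketched under a simplified pinhole model (an assumed form for illustration; the paper derives its relation from its own imaging geometry, and all numbers below are hypothetical). For lateral motion at speed v, depth Z, focal length f, and exposure time T, a point sweeps f·v·T/Z across the sensor, so a measured blur of b pixels with pixel pitch p gives v = b·p·Z/(f·T).

```python
def speed_from_blur(blur_px, depth_m, focal_m, exposure_s, pixel_m):
    """Object speed (m/s) from blur extent, assuming known depth.

    Simplified pinhole relation: the blur length on the sensor is
    blur_px * pixel_m, and the lateral sweep in the scene is that
    length scaled by depth_m / focal_m, traversed in exposure_s.
    """
    sweep_on_sensor = blur_px * pixel_m
    return sweep_on_sensor * depth_m / (focal_m * exposure_s)

# Hypothetical example: 20-pixel blur, 5-um pixels, object 10 m away,
# 50 mm lens, 1/100 s exposure.
v = speed_from_blur(20, 10.0, 0.050, 0.01, 5e-6)   # -> 2.0 m/s
```

The same relation, read the other way, is what makes depth recovery from blur possible when the speed is known instead.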
Finding the distance of an object in a scene from intensity images is an essential problem in many applications. In this work, we present a novel method for depth recovery from a single image degraded by both motion and defocus blur. Under the assumption of uniform lateral motion of the camera during a finite exposure time, both the pinhole model and a camera with a finite aperture are considered. It is shown that the image blur produced by uniform linear motion of the camera is inversely proportional to the distance of the object. Furthermore, if the speed of the relative motion is known, the depth of the object can be recovered by identifying the blur parameters. An image blur model is formulated based on geometric optics. The blur extent is estimated by intensity-profile analysis and focus measurement of the deblurred images. The proposed method is verified experimentally using different types of test patterns in an indoor environment.
KEYWORDS: 3D modeling, Visual process modeling, 3D image processing, 3D acquisition, Cameras, 3D vision, Image registration, Volume rendering, Data modeling, Data acquisition
A desktop vision system is presented for complete 3-D model reconstruction. It is fast (3-D reconstruction in under 20 min), low cost (uses a commercially available digital camera and a rotation stage), and accurate (about 1 part in 500 in the working range). Partial 3-D shapes and texture information are acquired from multiple viewing directions using rotational stereo and shape-from-focus (SFF). The resulting range images are registered to a common coordinate system, and a surface representation is created for each range image. The resulting surfaces are integrated using an algorithm named region of construction. Unlike previous approaches, the region of construction algorithm directly exploits the structure of the raw range images. The algorithm determines regions in the range images corresponding to nonredundant surfaces that can be stitched along the boundaries to construct a complete 3-D surface model. The algorithm is computationally efficient and less sensitive to registration error. It also has the ability to construct complete 3-D models of complex objects with holes. A textured 3-D model is obtained by mapping texture information onto the complete surface model representing the 3-D shape. Experimental results for several real objects are presented.
KEYWORDS: 3D modeling, 3D image processing, Volume rendering, Cameras, Visual process modeling, 3D metrology, Image registration, 3D acquisition, Imaging systems, Reconstruction algorithms
New algorithms are presented for automatically acquiring complete 3D models of single and multiple objects using rotational stereo. The object is placed on a rotation stage, and stereo images for several viewing directions are taken by rotating the object by known angles. Partial 3D shapes and the corresponding texture maps are obtained using rotational stereo and shape from focus. First, for each view, shape from focus is used to obtain a rough 3D shape and the corresponding focused image. Then, the rough 3D shape and focused images are used in rotational stereo to obtain a more accurate measurement of 3D shape. The rotation axis is calibrated using three fixed points on a planar object and refined during surface integration. The complete 3D model is reconstructed by integrating the partial 3D shapes and the corresponding texture maps of the object from multiple views. New algorithms for range image registration, surface integration, and texture mapping are presented. Our method generates 3D models quickly and preserves the texture of objects. A new prototype vision system named Stonybrook VIsion System 2 (SVIS-2) has been built and used in the experiments. In the experiments, 4 viewing directions at 90-degree intervals are used. SVIS-2 can acquire the 3D model of objects within a 250 mm x 250 mm x 250 mm cubic workspace placed about 750 mm from the camera. Both computational algorithms and experimental results on several objects are presented.
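The rotational-stereo geometry can be sketched with a toy simulation (assumed geometry and numbers, not SVIS-2's actual calibration): a stationary pinhole camera at the origin views a point before and after the stage rotates it by a known angle about a vertical axis on the optical axis, and the point is recovered by intersecting the two viewing rays in a least-squares sense.

```python
import numpy as np

f, D = 0.05, 0.75                       # focal length, axis depth (assumed)
theta = np.radians(10)                  # known stage rotation
A = np.array([0.0, 0.0, D])             # a point on the rotation axis
c, s = np.cos(theta), np.sin(theta)
R = np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])  # rotation about y

P = np.array([0.05, 0.03, 0.70])        # true 3D point (unknown to the solver)
P2 = R @ (P - A) + A                    # same point after stage rotation

def project(p):
    """Pinhole projection onto the image plane."""
    return f * p[:2] / p[2]

u1, u2 = project(P), project(P2)

# View 1 sees P along ray d1; view 2 sees R(P - A) + A along ray d2.
# Substituting P = s*d1 and P2 = t*d2 into the rotation relation gives
#   s*d1 - t*(R.T @ d2) = A - R.T @ A,
# which is solved for (s, t) by least squares.
d1 = np.array([u1[0] / f, u1[1] / f, 1.0])
d2 = np.array([u2[0] / f, u2[1] / f, 1.0])
M = np.stack([d1, -R.T @ d2], axis=1)
st, *_ = np.linalg.lstsq(M, A - R.T @ A, rcond=None)
P_hat = st[0] * d1                      # reconstructed 3D point
```

Because the camera is stationary and the rotation angle is known, a single calibrated camera suffices, which is one of the practical advantages of rotational stereo noted in these abstracts.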
KEYWORDS: 3D modeling, 3D image processing, Image analysis, Cameras, 3D acquisition, Data modeling, Volume rendering, Imaging systems, Data acquisition, 3D metrology
We present a digital vision system for acquiring the complete 3D model of an object from multiple views. The system uses image focus analysis to obtain a rough 3D shape of each view of an object and also the corresponding focused image or texture map. The rough 3D shape is used in a rotational stereo algorithm to obtain a more accurate measurement of 3D shape. The rotational stereo involves rotating the object by a small angle to obtain stereo images. It offers some important advantages compared to conventional stereo: a single camera is used instead of two, stereo matching is easier as the field of view remains the same for the camera (only the object is rotated), and camera calibration is easier since a single stationary camera is used. The 3D shape and the corresponding texture map are measured for 4 views of the object at 90-degree angular intervals. These partial shapes and texture maps are integrated to obtain a complete 360-degree model of the object. The theory and algorithms underlying rotational stereo and integration of partial 3D models are presented. The system can acquire the 3D model (which includes the 3D shape and the corresponding image texture) of a simple object within a 300 mm x 300 mm x 300 mm volume placed about 600 mm from the camera. The complete model is displayed using a 3D graphics rendering software (Apple’s QuickDraw 3D Viewer). Both computational algorithms and experimental results on several objects are presented.
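The integration of the four 90-degree views can be sketched as follows (a minimal sketch assuming a vertical stage axis through the common frame's origin; the actual system also registers and stitches the surfaces): rotating view k's points by k·90° about the stage axis expresses all partial shapes in one common coordinate system.

```python
import numpy as np

def rot_y(deg):
    """Rotation matrix about the vertical (y) axis."""
    t = np.radians(deg)
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def merge_views(partial_clouds):
    """Bring partial point clouds from 90-degree views into one frame.

    partial_clouds: list of (N_k, 3) arrays, one per view, each in its
    own camera-facing coordinates; view k is rotated by k * 90 degrees.
    """
    merged = [pts @ rot_y(90 * k).T for k, pts in enumerate(partial_clouds)]
    return np.vstack(merged)

# A point seen on the camera-facing side in every view maps to the four
# sides of the object in the common frame.
front = np.array([[0.0, 0.0, 1.0]])
cloud = merge_views([front, front, front, front])
```

After this rigid alignment, the partial surfaces overlap along their boundaries, which is where the surface-integration step described in these abstracts takes over.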
KEYWORDS: 3D modeling, Cameras, Visual process modeling, 3D image processing, Image analysis, 3D acquisition, 3D vision, Image restoration, Imaging systems, Computing systems
A digital vision system and the computational algorithms used by the system for three-dimensional (3D) model acquisition are described. The system is named Stonybrook VIsion System (SVIS). The system can acquire the 3D model (which includes the 3D shape and the corresponding image texture) of a simple object within a 300 mm x 300 mm x 300 mm volume placed about 600 mm from the system. SVIS integrates Image Focus Analysis (IFA) and Stereo Image Analysis (SIA) techniques for 3D shape and image texture recovery. First, 4 to 8 partial 3D models of the object are obtained from 4 to 8 views of the object. The partial models are then integrated to obtain a complete model of the object. The complete model is displayed using a 3D graphics rendering software (Apple's QuickDraw). Experimental results on several objects are presented.