Numerous development projects have aimed to automate the recovery of the interior parameters of camera geometry. In the photogrammetric community, the problem has been addressed by using coded targets and applying a non-linear camera model to find accurate values for the interior orientation parameters. An alternative approach, popular especially in computer vision applications, discards targeting and instead exploits existing geometric properties of the scene, such as parallel lines and the orthogonality of line sets, to solve for the intrinsic parameters. In most cases, however, the parameters solved have been restricted to the linear components of the camera model. In this paper we compare the accuracy of two alternative single-view calibration approaches with results from a multi-station, multi-image calibration. The aim is to study the accuracy and reliability of alternative mathematical models for solving intrinsic camera parameters from single-view geometry.
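The scene-geometry approach mentioned above can be illustrated with a common textbook construction that is not necessarily the one used in the paper: three mutually orthogonal sets of parallel lines yield three vanishing points, and each orthogonal pair of vanishing points gives one linear constraint on the principal point and (squared) focal length, assuming zero skew and square pixels. The function below is a hedged sketch of that linear solution, with names chosen for illustration:

```python
import numpy as np

def calibrate_from_vanishing_points(v1, v2, v3):
    """Estimate focal length and principal point from three vanishing
    points of mutually orthogonal scene directions (pixel coordinates).

    Assumes zero skew and square pixels.  Writing c = f^2 + u0^2 + v0^2,
    each orthogonal pair (vi, vj) of vanishing points satisfies
        xi*xj + yi*yj - u0*(xi + xj) - v0*(yi + yj) + c = 0,
    which is linear in (u0, v0, c); three pairs give a 3x3 system.
    """
    pts = [np.asarray(v, dtype=float) for v in (v1, v2, v3)]
    A, b = [], []
    for (xi, yi), (xj, yj) in [(pts[0], pts[1]),
                               (pts[0], pts[2]),
                               (pts[1], pts[2])]:
        A.append([xi + xj, yi + yj, -1.0])
        b.append(xi * xj + yi * yj)
    u0, v0, c = np.linalg.solve(np.asarray(A), np.asarray(b))
    f = np.sqrt(c - u0**2 - v0**2)   # recover focal length from c
    return f, u0, v0
```

Note that this solves only the linear part of the camera model, which is exactly the restriction the abstract points out: lens distortion parameters are not recoverable from vanishing points alone.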
In the field of forestry, field measurements have been made for decades to estimate tree stem volumes over a given area and thereby track forest growth over time. The estimation has been based on sample plots: within a plot, one diameter of each tree stem and its distance from the centre of the plot are measured. Until now these measurements have been made manually with a tape measure. This paper presents a procedure that uses video measurement to obtain the required information: a coordinate system is created on site, and 3D stem volume models are estimated from video image measurements.
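The final volume step can be sketched with a standard approximation: once diameters are available at several heights along a reconstructed stem, the stem volume can be estimated by stacking conical frustums between successive measurement heights. The helper below is hypothetical and not the paper's own model; in the paper the diameters come from video image measurements, here they are simply given as input:

```python
import math

def stem_volume(heights, diameters):
    """Approximate stem volume (m^3) from diameters (m) measured at
    increasing heights (m) along the stem.

    Each section between consecutive heights is treated as a conical
    frustum with volume pi*h*(r0^2 + r0*r1 + r1^2)/3.
    """
    volume = 0.0
    sections = zip(zip(heights, diameters), zip(heights[1:], diameters[1:]))
    for (h0, d0), (h1, d1) in sections:
        r0, r1 = d0 / 2.0, d1 / 2.0
        volume += math.pi * (h1 - h0) * (r0**2 + r0 * r1 + r1**2) / 3.0
    return volume
```

For a stem with constant diameter the frustum sum reduces exactly to the cylinder volume, and for a linear taper to zero it reduces to the cone volume, which makes the approximation easy to sanity-check.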
KEYWORDS: 3D modeling, Cameras, Video, 3D image processing, Edge detection, Photography, 3D metrology, Smoothing, Detection and tracking algorithms, Sensors
This paper deals with 3D modeling from the images of a moving camera. The 3D modeling solution is based on the principle of least-squares estimation: as the number of observations increases, the effect of noise on the estimate decreases. The idea of the algorithm is to gather observations of linear features from multiple time-varying video frames and to perform simultaneous intersection and resection, i.e. triangulation, of the 3D features. The observations are extracted from the images by applying a Hough transformation to edges found by a typical edge detector. All remaining pixels are used as observations for estimating the feature parameters and the intersection points of the features, as well as the camera pose and orientation in 3D space. The algorithm presented here is an off-line process in which the observations are gathered as a background process. To combine observations from multiple frames, feature matching has to be performed; to improve the robustness of matching, the operator can add constraints to the matching process.
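The line-extraction step named in the abstract can be sketched with a minimal Hough transform: each edge pixel votes for all (rho, theta) line parameterisations passing through it, and peaks in the accumulator correspond to dominant lines. This is a generic illustration; the paper's own detector, parameterisation, and accumulator resolution may differ:

```python
import numpy as np

def hough_lines(edge_points, img_shape, n_theta=180, n_rho=200):
    """Vote each edge pixel (x, y) into a (rho, theta) accumulator.

    A line is parameterised as rho = x*cos(theta) + y*sin(theta);
    accumulator peaks indicate lines supported by many edge pixels.
    Returns the accumulator and the rho/theta bin centres.
    """
    h, w = img_shape
    diag = np.hypot(h, w)                       # maximum possible |rho|
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    rhos = np.linspace(-diag, diag, n_rho)
    acc = np.zeros((n_rho, n_theta), dtype=np.int64)
    cos_t, sin_t = np.cos(thetas), np.sin(thetas)
    for x, y in edge_points:
        rho = x * cos_t + y * sin_t             # one rho per theta bin
        idx = np.round((rho + diag) / (2 * diag) * (n_rho - 1)).astype(int)
        acc[idx, np.arange(n_theta)] += 1
    return acc, rhos, thetas
```

In the scheme the abstract describes, the pixels supporting each detected line would then serve directly as observations in the least-squares triangulation, rather than being discarded after the vote.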