Common Scenarios

Control Data (external knowledge of 3D locations) can be generated by survey instruments, GNSS/GPS receivers, or other photogrammetric projects (including output from another PhotoModeler project). There are a few common ways this Control Data is used in PhotoModeler:

        A single photo project - mark four or more control points on the photo.

        Using Inverse Camera to solve camera parameters - mark five or more control points on each photo that has Inverse Camera settings (as set on Photo Properties).

        A project with a larger number of photos and weak camera station geometry - e.g. a long wall or road covered by a string of photos but no overview photos - mark four or more control points on several photos spread through the project. Commonly this means four or more control points marked on the first photos of the sequence, another four or more on the last photos, and perhaps some control points on photos in between. The other photos need sufficient referenced points.

        A project with a strong camera station network (good overlap, good angles, etc.) that needs to be placed in a known coordinate system defined by external points - either assign a total of four or more control points anywhere in the scene to points that will become 3D, with no single photo having four control points marked; OR, the recommended method, since it does not warp the project: do not use control points, and instead use a multi-point transform (see Imports and Coordinate Systems Pane, Import or Add Coordinate Systems Dialog, Define the coordinate system, and Using known XYZ points).

        A scenario much like the previous one, with a clarification - a large SmartMatch project solved by standard automated SmartMatch processing, where the project then needs to match the control system defined by a number of known external control points. The best approach is to a) ensure four or more of these points are imaged on two or more photos, b) mark and reference these points so they become 3D (not control point marking), c) import the control points into the Imports and Coordinate Systems Pane, and d) use a multi-point transform to associate the known external points with the solved marked points from b).

        A project with an imported (or created) point cloud where you wish to pull in additional photos and orient them against the point cloud for the purpose of texturing it. Use the point cloud as a source of control points and mark four or more control points on each photo you want used as a texture source. No overlap between these photos is required.

        A project can have control data applied to camera station positions (either as a Helmert Transform or as Control Data); here we refer to Control Data use. This camera station data can come from a UAV's on-board GNSS/GPS receiver or from GNSS/GPS post-processing, and it can be imported from EXIF image headers or from a separate file. If there are four or more non-collinear camera stations with assigned control positions, orientation and optimization can use this data to control the solution - in some cases improving the solution, improving camera calibration, placing the project into a geographic coordinate system, or reducing the need for ground control (control points measured on the object or on the ground under a UAV). Control assignments on camera stations can be mixed with ground control assignments. Ground control assignments should be made to points marked and referenced on two or more photos.
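Control positions imported "from a separate file" typically arrive as plain text listing point IDs and XYZ coordinates. As a rough illustration only - this uses a hypothetical comma-separated layout, not PhotoModeler's actual import format, and real survey or GNSS exports vary - such a file can be parsed like this:

```python
import csv
import io

# Hypothetical control file: one "id,X,Y,Z" record per line.
# Adapt the column layout to whatever your survey/GNSS source produces.
sample = """\
CP1,1000.000,2000.000,50.250
CP2,1012.430,2001.120,50.310
CP3,1005.780,2015.660,49.980
CP4,998.210,2010.040,50.100
"""

def parse_control_points(text):
    """Return {point_id: (x, y, z)} from a simple CSV control file."""
    points = {}
    for row in csv.reader(io.StringIO(text)):
        if not row or row[0].startswith("#"):
            continue  # skip blank lines and comment lines
        pid, x, y, z = row[0], float(row[1]), float(row[2]), float(row[3])
        points[pid] = (x, y, z)
    return points

control = parse_control_points(sample)
print(len(control))    # 4
print(control["CP1"])  # (1000.0, 2000.0, 50.25)
```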
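The requirement above that the camera stations be non-collinear can be stated numerically: the stations all lie on one line exactly when their centered coordinates have rank 1. A small sketch of that geometric check, assuming NumPy is available (an illustration of the condition, not a PhotoModeler function):

```python
import numpy as np

def stations_noncollinear(positions, tol=1e-8):
    """True if the camera station positions do not all lie on a single line.
    positions: (N, 3) array-like of station XYZ coordinates, N >= 3."""
    P = np.asarray(positions, float)
    centered = P - P.mean(axis=0)
    # Rank 0 = all coincident, 1 = collinear, >= 2 = usable geometry.
    return np.linalg.matrix_rank(centered, tol=tol) >= 2

line = [[0, 0, 10], [5, 5, 10], [10, 10, 10], [15, 15, 10]]  # single straight flight line
grid = [[0, 0, 10], [10, 0, 10], [0, 10, 10], [10, 10, 11]]  # stations spread over an area
print(stations_noncollinear(line))  # False
print(stations_noncollinear(grid))  # True
```

A single straight UAV flight line fails this test even with many stations, which is why flight plans with multiple parallel or crossing lines give the solver stronger geometry.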
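The multi-point transform recommended in the scenarios above is, conceptually, a best-fit similarity (7-parameter: scale, rotation, translation) transform between the solved project points and the known external points - it moves the whole project rigidly rather than warping it. A minimal sketch of such a fit, assuming NumPy is available and using the standard Umeyama least-squares method (not necessarily PhotoModeler's internal algorithm):

```python
import numpy as np

def fit_similarity(src, dst):
    """Best-fit similarity transform (Umeyama least squares):
    find scale s, rotation R, translation t so that dst ~= s * R @ src + t.
    src, dst: (N, 3) arrays of corresponding points, N >= 3, non-collinear."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    n = len(src)
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    A, B = src - mu_s, dst - mu_d
    cov = B.T @ A / n                      # cross-covariance of the two sets
    U, sv, Vt = np.linalg.svd(cov)
    fix = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:
        fix[2, 2] = -1.0                   # guard against a mirror solution
    R = U @ fix @ Vt
    var_src = (A ** 2).sum() / n
    s = np.trace(np.diag(sv) @ fix) / var_src
    t = mu_d - s * R @ mu_s
    return s, R, t

# Recover a known transform from exact correspondences.
rng = np.random.default_rng(0)
pts = rng.random((6, 3)) * 100.0           # six well-spread "solved" points
Rz = np.array([[0.0, -1.0, 0.0],           # 90-degree rotation about Z
               [1.0,  0.0, 0.0],
               [0.0,  0.0, 1.0]])
known = 2.0 * pts @ Rz.T + np.array([10.0, -5.0, 3.0])
s, R, t = fit_similarity(pts, known)
print(round(s, 6))                         # 2.0
```

With four or more well-spread correspondences the same fit is overdetermined, and the residuals after applying the transform indicate how consistent the external control is with the solved project.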