Dense Surface Modeling (DSM) is a search algorithm that uses an existing pre-oriented project and the photos from that project to search for image patches that ‘look’ alike across photos. The MVS method uses multiple photos at once, while the Paired Photos method works in photo pairs. When a good match is found between photos, the orientation and camera data allow the program to compute the correct 3D location of the surface point corresponding to the image patch.
The basic steps of the DSM process are:
1. Start with an oriented, high-quality PhotoModeler project with low residuals and at least two camera stations with suitable base-to-height ratios. See Select Best Photo Pairs Dialog.
2. Define the extents of the search using DSM Trims (a trim can include the defined area or optionally exclude it), using the entire photo boundary (PhotoModeler will define the combined area of the selected images), or using one or more approximate surfaces (these surfaces can be path surfaces, NURBS surfaces, surface draw connected surfaces, or even cylinders; see DSM Approximate Surfaces). If using MVS, it is usually not necessary to define extents, though MVS will use DSM Trims if they have been defined.
3. Run the DSM process with appropriate settings. Steps 4 through 11 are carried out by the program automatically.
4. The DSM algorithm first collects the photographs to be processed.
5. Depending on the DSM method, small features in images are matched between photos to build up a point cloud.
6. The Paired Photos algorithm searches along a row of the destination image using an NxN patch of imagery from the source image, and records each position where it finds a good match (a simplified illustration of this patch search appears after this list). The search range is controlled by the depth range parameters in the DSM dialog, which are based on the selected surface or, if no surface is selected, an implicit surface defined by PhotoModeler using the 3D points in the project (see DSM Approximate Surfaces). The step size in the source image is defined by the sampling rate parameter in the DSM dialog.
7. The MVS algorithm extracts and matches a dense set of feature points across multiple photos, starting with SmartMatch points and expanding from there. This method works well with a large number of overlapping photos.
8. All the matches are then optimized for the best overall fit: bad and weak matches are discarded, and occlusions, large depth changes, etc. are handled (the consistency-check sketch after this list shows one common rejection technique).
9. A subpixel refinement is carried out for the matches (see the subpixel sketch after this list).
10. The matched positions are used to create 3D points using camera station information (see the triangulation sketch after this list).
11. If more than one pair of photographs is being run, the resulting point clouds are registered and merged into one cohesive point cloud (see the merging sketch after this list).
12. The data clean-up and meshing steps then convert the point cloud into a triangulated mesh surface (see the meshing sketch after this list).
In some projects noise is inevitable (noise refers to 3D points solved off the correct surface). High-quality projects minimize this, but there may be background imagery that gets in the way and creates noise. In these cases, the point clouds can be created by DSM without the meshing step. Then, using the manual point edit mode, you delete the unwanted points, and finally run the registration, meshing, and smoothing as a separate step.
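The sketches below illustrate, in simplified form, some of the techniques named in the steps above. They are rough illustrations only, written in Python with assumed function and parameter names (patch, sampling_rate, search_range, min_score, etc.), not PhotoModeler's actual implementation; in particular, the real Paired Photos search works along the geometry defined by the camera orientations and the depth range setting rather than along a fixed image row.

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation between two equal-sized patches."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def match_row(src, dst, row, patch=7, sampling_rate=2,
              search_range=(-60, 60), min_score=0.8):
    """Slide an NxN patch from the source image along the same row of the
    destination image and record the best-scoring match per sampled column.
    `row` is assumed to be an interior row (at least patch//2 pixels from
    the image border)."""
    half = patch // 2
    matches = []                          # (src_col, dst_col, score)
    for c in range(half, src.shape[1] - half, sampling_rate):
        ref = src[row - half:row + half + 1, c - half:c + half + 1]
        best_col, best_score = None, min_score
        for d in range(search_range[0], search_range[1] + 1):
            cc = c + d
            if cc < half or cc >= dst.shape[1] - half:
                continue
            cand = dst[row - half:row + half + 1, cc - half:cc + half + 1]
            s = ncc(ref, cand)
            if s > best_score:
                best_col, best_score = cc, s
        if best_col is not None:
            matches.append((c, best_col, best_score))
    return matches
```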
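One common way to discard weak matches and points that are occluded (hidden) in one photo is a left-right consistency check: a match is kept only if matching in the reverse direction lands back near the starting column. This is a generic technique shown for illustration; the actual optimization step described above may work differently.

```python
def consistent_matches(fwd, bwd, tol=1):
    """Keep a source->destination match only if the reverse search
    (destination->source) lands back within `tol` pixels of the original
    column. `fwd` and `bwd` are (start_col, found_col, score) lists as
    returned by match_row() run in each direction."""
    back = {d_col: s_col for d_col, s_col, _ in bwd}
    return [(s, d, sc) for s, d, sc in fwd
            if d in back and abs(back[d] - s) <= tol]
```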
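A typical subpixel refinement fits a parabola through the correlation score at the best whole-pixel position and its two neighbours and takes the parabola's peak as a fractional offset. This standard formula is shown as a sketch of the idea, not necessarily the exact refinement DSM uses.

```python
def subpixel_offset(score_left, score_peak, score_right):
    """Fractional offset (in pixels, between -0.5 and +0.5) of the parabola
    fitted through the scores at columns best-1, best, best+1."""
    denom = score_left - 2.0 * score_peak + score_right
    if denom == 0.0:
        return 0.0
    return 0.5 * (score_left - score_right) / denom
```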
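Turning a matched pixel pair into a 3D point uses the camera station data. A standard linear (DLT) triangulation is sketched below, assuming P1 and P2 are 3x4 projection matrices built from each camera station's interior and exterior orientation.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation: P1 and P2 are 3x4 projection matrices for
    the two camera stations, x1 and x2 are the (u, v) pixel coordinates of
    the same surface point in each photo. Returns the 3D point."""
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]                  # null-space vector = homogeneous 3D point
    return X[:3] / X[3]
```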
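Because all photo pairs in one project share the coordinate system established by the orientation, merging the per-pair point clouds largely amounts to concatenating them and thinning duplicated points in the overlap areas. A minimal sketch, assuming a simple voxel-grid thinning with a hypothetical cell size:

```python
import numpy as np

def merge_clouds(clouds, cell=0.005):
    """Merge point clouds that already share the project coordinate system:
    concatenate them, then keep one point per `cell`-sized voxel so the
    overlap between pairs does not produce duplicated points."""
    pts = np.vstack(clouds)
    voxel = np.floor(pts / cell).astype(np.int64)
    _, first = np.unique(voxel, axis=0, return_index=True)
    return pts[np.sort(first)]
```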
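For surfaces that are roughly 2.5D (a height field), a point cloud can be converted to a triangulated mesh by projecting the points onto a plane and running a 2D Delaunay triangulation. The sketch below uses SciPy purely for illustration and leaves out the clean-up and smoothing that the real meshing step performs.

```python
import numpy as np
from scipy.spatial import Delaunay

def mesh_2_5d(points_xyz):
    """Triangulate a roughly 2.5D point cloud: project the points onto the
    XY plane, run a 2D Delaunay triangulation, and reuse each 2D triangle
    as a 3D mesh face. Returns (vertices, triangle index array)."""
    pts = np.asarray(points_xyz, dtype=float)
    tri = Delaunay(pts[:, :2])      # triangulate on X, Y only
    return pts, tri.simplices
```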