Yes, you can.
For the most accurate results, it is best to use photos from a calibrated camera. If you have photos from a camera that has not yet been calibrated, the best option is to obtain that camera and calibrate it using the same settings (such as zoom) that were used when the original photos were taken.
If you do not have access to the camera and only have photos (scanned prints or digital images) taken with a camera you know nothing about, there are a few ways to proceed. This article briefly describes these ‘Inverse Camera’ methods. The PhotoModeler User Guide and Help file describe the processes in detail, and there are also video tutorials that show how it’s done.
The main idea behind ‘Inverse Camera’ is to solve the camera parameters (such as focal length and format size) at the same time the 3D project data is being solved. To do this, additional information is required.
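To see what “solving the camera at the same time as the 3D data” means in practice, here is a minimal sketch of the same idea using OpenCV. This is not PhotoModeler’s code or API, and all coordinate values are made up for illustration: given several marked points whose 3D coordinates are known, a single photo is enough to estimate the focal length together with the camera’s position and orientation.

```python
import numpy as np
import cv2

# Known 3D control points (e.g. from a survey), in metres.
object_points = np.array([
    [0.0, 0.0, 0.0],
    [2.0, 0.0, 0.0],
    [2.0, 1.5, 0.0],
    [0.0, 1.5, 0.0],
    [1.0, 0.75, 0.6],
    [0.5, 0.2, 0.3],
], dtype=np.float32)

# The same points marked on the photo, in pixels. These values were generated
# from a hypothetical camera (focal length ~1500 px, principal point at the
# image centre) purely so the example has consistent data to solve.
image_points = np.array([
    [724.0, 543.0],
    [1324.0, 543.0],
    [1324.0, 993.0],
    [724.0, 993.0],
    [1024.0, 768.0],
    [882.5, 612.3],
], dtype=np.float32)

image_size = (2048, 1536)  # photo width, height in pixels

# Fix the principal point at the image centre and ignore lens distortion so the
# solver only has to recover focal length plus the camera position/orientation.
flags = (cv2.CALIB_USE_INTRINSIC_GUESS
         | cv2.CALIB_FIX_PRINCIPAL_POINT
         | cv2.CALIB_FIX_ASPECT_RATIO
         | cv2.CALIB_ZERO_TANGENT_DIST
         | cv2.CALIB_FIX_K1 | cv2.CALIB_FIX_K2 | cv2.CALIB_FIX_K3)

# Deliberately rough initial focal guess; the solver refines it.
initial_K = np.array([[1200.0, 0.0, image_size[0] / 2],
                      [0.0, 1200.0, image_size[1] / 2],
                      [0.0, 0.0, 1.0]])

rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    [object_points], [image_points], image_size, initial_K, None, flags=flags)

print("Reprojection error (px):", rms)
print("Estimated focal length (px):", K[0, 0])   # should come out near 1500 for this data
print("Camera rotation / translation:", rvecs[0].ravel(), tvecs[0].ravel())
```

The more point measurements you provide and the better they cover the photo, the better constrained this joint solution becomes, which is why the methods below all ask for several well-spread points or features.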
1. Using Control Points: You can use control points to solve the camera during processing if you know the 3D positions of several points in the photo(s), or if you have several known dimensions and can calculate the 3D positions of several points relative to each other. You can obtain control points from a survey, or by doing a regular PhotoModeler project of an ‘exemplar’ object and then importing that model as control points.
You can also calculate 3D coordinates yourself. For example, on a large brick wall with known brick and grout dimensions, you can work out each point’s 2D position from those dimensions and set the third dimension to 0, assuming the points all lie on one plane.
You need a minimum of 4 control points (more is better) covering a substantial part of the photo. Once a control project is processed and the camera is solved, you can extend surfaces and mark features on surfaces using Surface Draw. You can then also turn off the Inverse Camera flags on the photos (so that the camera solution stays constant) and proceed with marking and referencing points across photos, as you would with a regular PhotoModeler project. Since you are using known coordinates, your scale will be implicitly defined.
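As an illustration of why fixing the camera simplifies the rest of the project (again an OpenCV sketch with made-up numbers, not PhotoModeler’s API): once the camera matrix is known and held constant, orienting an additional photo from a handful of marked control points is a straightforward pose solve.

```python
import numpy as np
import cv2

# Camera matrix from the solved control project and the same hypothetical
# control points as before (all values are illustrative).
K = np.array([[1500.0, 0.0, 1024.0],
              [0.0, 1500.0, 768.0],
              [0.0, 0.0, 1.0]])
object_points = np.array([
    [0.0, 0.0, 0.0], [2.0, 0.0, 0.0], [2.0, 1.5, 0.0],
    [0.0, 1.5, 0.0], [1.0, 0.75, 0.6], [0.5, 0.2, 0.3],
], dtype=np.float32)

# The control points marked on a second photo taken with the same camera.
image_points_photo2 = np.array([
    [836.5, 486.8], [1586.5, 486.8], [1586.5, 1049.3],
    [836.5, 1049.3], [1187.0, 768.0], [1024.0, 576.1],
], dtype=np.float32)

# With the camera held constant, orienting the photo is a plain pose solve.
ok, rvec, tvec = cv2.solvePnP(object_points, image_points_photo2, K, np.zeros(5))
if ok:
    R, _ = cv2.Rodrigues(rvec)
    print("Photo 2 rotation:\n", R)
    print("Photo 2 translation:", tvec.ravel())  # roughly (-0.5, -0.75, 4.0) for this data
```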
2. Using Shapes: If your photo(s) show distinct shapes (or ‘primitives’ such as boxes, pyramids, wedges, etc.) in perspective, the Shape Marking feature in PhotoModeler will allow you to solve the camera and calculate the camera positions. The shape in your photo does not need to match any specific dimensions; it just needs to conform to the shape’s parameters (e.g. a box needs right angles at each corner). Once you mark enough of the edges and/or vertices of your shape on the photo, PhotoModeler will use your markings to approximate the camera parameters and solve the camera orientation.
You can then use the Surface Draw feature to extend surfaces along flat planes in your photos and use the Measure tool to measure features on these planes. Measurement in PhotoModeler requires that you set a scale using a known dimension in the photo. Shapes can also be photo textured.
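The geometry behind shape-based solving can be sketched with the classical vanishing-point construction. This is a conceptual illustration with hypothetical pixel coordinates, not PhotoModeler’s actual algorithm: two sets of parallel box edges that are perpendicular to each other in 3D give two vanishing points, and with square pixels and the principal point assumed at the image centre, the focal length follows from their dot product.

```python
import numpy as np

def line_through(p, q):
    """Homogeneous line through two image points."""
    return np.cross([p[0], p[1], 1.0], [q[0], q[1], 1.0])

def intersection(l1, l2):
    """Intersection of two homogeneous lines, returned as a 2D point."""
    v = np.cross(l1, l2)
    return v[:2] / v[2]

# Two marked box edges running "left-right" and two running "front-back"
# (hypothetical pixel coordinates in a 2048 x 1536 photo).
edge_a1 = line_through((210.0, 900.0), (1600.0, 830.0))
edge_a2 = line_through((250.0, 1180.0), (1650.0, 1040.0))
edge_b1 = line_through((1700.0, 860.0), (900.0, 700.0))
edge_b2 = line_through((1750.0, 1100.0), (950.0, 860.0))

v1 = intersection(edge_a1, edge_a2)   # vanishing point of the first direction
v2 = intersection(edge_b1, edge_b2)   # vanishing point of the perpendicular direction

c = np.array([1024.0, 768.0])         # assume principal point at the image centre
f_squared = -np.dot(v1 - c, v2 - c)   # orthogonality constraint: (v1-c)·(v2-c) + f² = 0
if f_squared > 0:
    print("Estimated focal length (pixels):", np.sqrt(f_squared))
```

In practice PhotoModeler solves this (plus the camera orientation) from your shape markings, but the sketch shows why right angles in the scene are enough to recover the camera even when no dimension of the shape is known.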
3. Using Constraints: If your photo(s) have strong 3-point perspective with horizontal (left-right and front-back) and vertical features that you can mark, you can use ‘axes constraints’ to constrain the marked features. Once the camera is solved, you can mark surfaces to connect the constrained items, and then use the Surface Draw tool to mark and measure features on the surfaces. Measurement in PhotoModeler requires that you set a scale using one known dimension in the photo. This could be a standard feature dimension such as a manhole cover diameter or the wheelbase of a known vehicle.
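Setting scale from one known dimension is plain arithmetic, sketched below with hypothetical numbers: the unscaled model is correct only up to a scale factor, so dividing the known real-world length by the same distance measured in model units gives a factor to apply to every model coordinate.

```python
import numpy as np

model_points = np.array([[0.00, 0.00, 0.00],
                         [0.37, 0.02, 0.00],
                         [0.35, 0.29, 0.01]])   # unscaled model coordinates

p_a, p_b = model_points[0], model_points[1]     # endpoints of the known feature
model_length = np.linalg.norm(p_b - p_a)        # distance in model units
known_length = 0.80                             # e.g. a 0.80 m manhole cover diameter

scale = known_length / model_length
scaled_points = model_points * scale            # model now measured in metres
print("Scale factor:", scale)
print("Scaled points:\n", scaled_points)
```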
These are three Inverse Camera processing methods that can be used with photos from an unknown camera. For the highest accuracy, it is best to use a camera calibrated in PhotoModeler, but these methods provide alternatives when you do not have access to the camera that took the photos.