Intro to Photo Processing with Agisoft Metashape for 3D Model Making

Introduction

Agisoft Metashape (available in the Digital Studio) is a stand-alone software product that performs photogrammetric processing of digital images and generates 3D spatial data (point clouds, meshes, etc.) to be used in GIS applications, cultural heritage documentation, and visual effects production, as well as for indirect measurements of objects of various scales. Its user-friendly workflow sets it apart from many other 3D model-processing platforms.

Creating a 3D model from photographs takes only a few easy steps, which are nicely summarized for you in Metashape's "Workflow" dropdown. It even grays out options that are not yet available to you!

In this introduction, we will go step-by-step through the creation of a simple 3D model, in this case using photos of the Doug Flutie "Hail Flutie" statue taken outside Boston College's Alumni Stadium in Fall 2020 (evident from the mask on Doug's face). This model will be "in-the-round," as it were, but not a full 360 degrees, as the statue is fixed in the ground.

These steps are:

  1. Adding your photos

  2. Aligning your photos/building a dense cloud (if necessary)

  3. Building your mesh

  4. Building your texture

  5. Making your model to scale

  6. Exporting your model

Let's get started!

Adding your Photos

Once you are finished taking photos of an object with your camera or camera phone, you will want to copy the images off of your device and place them in a recognizable location on your computer.

For this example, I have created a folder called "Flutie" and placed it on my desktop. Inside of it are all the photos I took of the statue. For tips on taking photos of objects either inside the Digital Studio lightbox or "in the wild," check out our page on Tips and Tricks for Taking Photos for 3D Model Creation or our BC Digital Scholarship Workshop video.

Now that the photos have been offloaded to the computer, it is time to add them to our Metashape project. Our Workflow drop-down makes this easy, as you can add either individual images or an entire folder to the project; the folder option works great in our case.

The photos will load (185 in our case). Metashape will then ask you to choose what type of scene you want to create. For our case, Single Cameras is the proper option, as each image represents a single photo of the object.

The second option, Dynamic scene (4d) is for the reconstruction of dynamic scenes captured by a set of statically mounted synchronized cameras. For this purpose, multiple image frames captured at different time moments can be loaded for each camera location.

Now you can see that your images are loaded in the Photos pane at the bottom of your screen. This is a good time to save your project. It's suggested that you save the project in the same folder as the images, as the project and images are now linked together.
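As a side note, every Workflow step in this guide can also be scripted: Metashape's Professional edition includes a Python API (reachable through the built-in console or Tools > Run Script). Here is a minimal sketch of adding a folder of photos that way; the "Flutie" folder path is just our example, and exact method parameters can differ between Metashape versions.

```python
# Minimal sketch using the Metashape Python API (run from Metashape's own console).
# The folder path is an example; point it at wherever your photos live.
import os
import Metashape

photo_folder = os.path.expanduser("~/Desktop/Flutie")
photos = [os.path.join(photo_folder, f)
          for f in os.listdir(photo_folder)
          if f.lower().endswith((".jpg", ".jpeg", ".png"))]

doc = Metashape.app.document            # the project currently open in the GUI
chunk = doc.addChunk()                  # a new chunk to hold the photos
chunk.addPhotos(photos)                 # equivalent to Workflow > Add Photos / Add Folder
doc.save(os.path.join(photo_folder, "Flutie.psx"))  # save alongside the images
```

We will include similar sketches for the later steps; feel free to skip them if you are happy working entirely in the GUI.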

Aligning your Photos

Now that the images are loaded, it's time to actually start processing! We return to our Workflow dropdown and now notice that the Align Photos option is available.

Aligning the photos is the part of the process which takes the longest. At this stage, Metashape finds the camera position and orientation for each photo and builds a sparse point cloud model based on matching pixel groups between images. This point cloud will become the basis of our digital model.

Choosing Align Photos from the dropdown gives you a few options. The default options are generally fine, although the Accuracy setting will determine how long the process takes. In general, High accuracy is a good choice unless you have a very large number of photos (>200), in which case Medium may be a better choice. But really it depends on how much time you have. Aligning photos for the Flutie statue with 185 photos took 20-30 minutes.

At this point, the processing box will appear and let you know as the software aligns your photos, starting by Selecting Points, then Selecting Pairs, Matching Points, which will take the longest, and Estimating camera locations.

Since the pixel groups used for matching are randomly selected, aligning photos several times may produce different results. If you are having difficulty getting your photos to align, try reprocessing (make sure to select Reset current alignment in the Align Photos options).
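For those following along with scripting, the alignment step boils down to two calls. This is only a rough sketch: parameter names such as downscale changed between Metashape versions, so check the Python API reference for the release you are running.

```python
# Rough sketch of scripted photo alignment; parameter names vary by Metashape version.
import Metashape

chunk = Metashape.app.document.chunk     # the active chunk holding our 185 photos

# downscale=1 roughly corresponds to "High" accuracy in the Align Photos dialog
chunk.matchPhotos(downscale=1, generic_preselection=True)
chunk.alignCameras(reset_alignment=True) # mirrors the "Reset current alignment" checkbox
Metashape.app.document.save()            # assumes the project has already been saved once
```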

The sparse point cloud, the result of our photo alignment, can be seen above. Notice that the general shape of the statue is already apparent, which means the photos were taken reasonably well. The blue squares represent the calculated locations from which the images were taken. Clicking on an image in the Photos pane will highlight that image's position on your model, which is useful for troubleshooting issues. Finally, small checked boxes will appear next to the images that have successfully aligned; if a number of your images have not aligned, you will want to rerun the Align Photos process or retake your images to cover the portion of your object that is having issues. In our case, all the photos aligned, which is fantastic!

If processing in the Digital Studio, be aware that inaction on the computers may eventually log you out and cause you to lose your work. Keep an eye on the processing, especially in the Aligning Photos stage!

Bonus: Cleaning your point cloud

You might notice from the image above that a lot of the surrounding environment appears in our point cloud, particularly the concrete surface around our statue. This often happens when it is difficult or impossible for your object to fill the majority of each picture.

This is a good opportunity to clean up the point cloud before building your mesh. There are a variety of ways to clean the point cloud, but the easiest is to use the Select and Crop tools, which work the same way as cropping an image.

First, use the Select tool to roughly select the object itself. Once the area you want to keep is selected, use the Crop tool to crop everything else out. Easy!

The Delete tool seen above works in the opposite way of the crop tool. Clicking the X will delete whatever points you have selected, which offers a second way to clear out unwanted points!

Bonus: Build Dense Cloud

For objects with a lot of detail that need a very high-resolution mesh, it may be necessary to build a Dense Cloud after aligning your photos. A dense cloud is simply what it says: a denser point cloud created from points that align between your photos.

Choosing this option will again ask you how high you want the accuracy of your dense cloud to be. Note that the dense cloud process can take several hours depending on the number of photos taken, so be sure you have the time available before starting it.

If you are working to build a full 360-degree model, building a dense cloud is often necessary to merge the top and bottom of the model. This process will be covered in a future tutorial.

In the dense cloud seen above, note how some cleaning of the point cloud using the Delete and Crop tools would be useful before moving on to the creation of the mesh, as described in the section above.
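If you are scripting, the dense cloud step is also just a couple of calls. One caveat: Metashape 2.x renamed the dense cloud to "point cloud," so the method name depends on your version; the sketch below checks for both.

```python
# Sketch of building depth maps and a dense cloud; method names differ between
# Metashape 1.x (buildDenseCloud) and 2.x (buildPointCloud).
import Metashape

chunk = Metashape.app.document.chunk

chunk.buildDepthMaps(downscale=2)        # roughly "High" quality in the dialog
if hasattr(chunk, "buildPointCloud"):
    chunk.buildPointCloud()              # Metashape 2.x naming
else:
    chunk.buildDenseCloud()              # Metashape 1.x naming
Metashape.app.document.save()
```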

Building Mesh

Now that your photos are aligned, a new option appears in our Workflow dropdown: building the mesh of our digital object. Fortunately, this process is much faster!

In short, this step takes your point cloud, which simply represents a group of points floating in space, and turns it into an actual 3D surface, a mesh, by connecting these points together.

The Build Mesh options are not too complex. They allow you to choose your source point cloud (Sparse or Dense, if you made one) and pick the number of faces you want your mesh to have. The final option is Surface type, which should generally remain on Arbitrary (3D).

The other option, Height field, is optimized for modeling planar surfaces, such as terrains or bas-reliefs. It should be selected for aerial photography processing, as it requires less memory and allows for processing larger datasets.
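For reference, the scripted version of Build Mesh is a single call. The enumeration names below come from the Python API reference and may differ slightly between versions, so treat this as a sketch rather than a recipe.

```python
# Sketch of the Build Mesh step via the Python API; enum spellings may vary by version.
import Metashape

chunk = Metashape.app.document.chunk

chunk.buildModel(
    surface_type=Metashape.Arbitrary,     # the "Arbitrary (3D)" surface type
    source_data=Metashape.DepthMapsData,  # other DataSource values select the sparse or dense cloud
    face_count=Metashape.HighFaceCount,   # how many faces the mesh should have
)
Metashape.app.document.save()
```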

Building Texture

The last major step in processing our model is to build the texture. The texture is the colored overlay that will sit on top of our 3D mesh. Again, we simply return to our Workflow dropdown and select Build Texture!

The options here are a bit more complex than in other steps, though in general the defaults are fine. The breakdown is as follows:

  • Texture type: Diffuse map (Default) is the normal texture mapping; Occlusion map is used for calculating ambient lighting, so it is not necessary for basic models

  • Source data: will change based on the texture type. For our regular Diffuse texture, it will be the images

  • Mapping mode:

    • Generic (default): the program tries to create as uniform a texture as possible.

    • Adaptive orthophoto: the object surface is split into flat and vertical regions. The flat part of the surface is textured using the orthographic projection, while vertical regions are textured separately to maintain accurate texture representation in those regions.

    • Orthophoto: the whole object surface is textured in the orthographic projection. The Orthophoto mapping mode produces an even more compact texture representation than the Adaptive orthophoto mode at the expense of texture quality in vertical regions.

    • Spherical: appropriate only for a certain class of objects that have a ball-like form. It allows a continuous texture atlas to be exported for this type of object, making it much easier to edit later.

    • Single photo: generates the texture from a single photo. The photo to be used for texturing can be selected from the 'Texture from' list.

    • Keep uv: generates the texture atlas using the current texture parametrization. It can be used to rebuild the texture atlas at a different resolution or to generate the atlas for a model parametrized in external software.

Unsure? Just use Generic and see how it looks.

  • Blending mode:

    • Mosaic (default) - uses a two-step approach: the low-frequency component of overlapping images is blended to avoid seamline problems (a weighted average, with the weight depending on a number of parameters, including the proximity of the pixel in question to the center of the image), while the high-frequency component, which is responsible for picture detail, is taken from a single image - the one that offers good resolution for the area of interest and whose camera view is almost along the normal to the reconstructed surface at that point.

    • Average - uses the weighted average value of all pixels from the individual photos, with the weight depending on the same parameters considered for the high-frequency component in Mosaic mode.

    • Max Intensity - the photo with the maximum intensity of the corresponding pixel is selected.

    • Min Intensity - the photo which has the minimum intensity of the corresponding pixel is selected.

    • Disabled - the color value for each pixel is taken from a single photo, chosen the same way as the high-frequency component in Mosaic mode.

Unsure? Go with the default Mosaic first; if you are getting odd blending of colors, try Average instead.

  • Texture size/count: specifies the size (width and height) of the texture atlas in pixels and determines the number of files the texture will be exported to. Exporting the texture to several files allows you to achieve greater resolution in the final model texture. Again, the default is probably fine for beginning models.
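For the scripters, the texture step is two calls: one to build the UV mapping and one to generate the texture itself. The same caveat applies: enum and parameter names (GenericMapping, MosaicBlending, texture_size) may vary slightly by Metashape version, so this is a sketch only.

```python
# Sketch of the Build Texture step; mapping/blending enum names may vary by version.
import Metashape

chunk = Metashape.app.document.chunk

chunk.buildUV(mapping_mode=Metashape.GenericMapping)  # "Generic" mapping mode
chunk.buildTexture(
    blending_mode=Metashape.MosaicBlending,           # "Mosaic" blending
    texture_size=4096,                                # texture atlas size in pixels
)
Metashape.app.document.save()
```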

Bonus: Scaling your model

In some cases, you will want your model to be "to scale" so that it can easily be used in other immersive technologies (gaming, AR, VR). This process is quite easy in Metashape, as long as you have taken a few distinct measurements of various portions of the object you are modeling.

In some cases, high-tech cameras or drones that incorporate GPS technology into their workflow may automatically provide spatial information for scaling your model. No need to worry about the manual process performed here in that case!

In order to scale your model, you need to pick precise points to measure between. This can be done either on the model itself or in individual pictures: simply right-click and select Add Marker.

Each marker will have a name (point 1, point 2, etc.). To tell Metashape the distance between these two points, you need to swap from the Workspace pane that you have been using so far to the Reference pane. This can be done at the bottom left of your Metashape window.

Your reference pane will look something like this. Notice that it already contains the two markers we just created (point 1 and point 2). The images listed at the top will contain spatial information for your photos if your camera automatically includes GPS data.

To add the length between your two points, select both points in the reference pane, right click, and choose Create Scale Bar. This will add a scale bar to the bottom portion of the pane, where you can then type in the distance you measured between the points.

Note that the measurements are in meters, so be sure to convert appropriately!

Once you have added your scale bars, it's time to see how much error your model has! Simply go up to the reference toolbar (right above the list of images in the Reference pane) and click the Update Transform button. Now, taking your measurements into account, the software will scale your model and tell you how much error it has.

I only estimated the measurements above, and you can see my error is pretty bad (18 cm)! In general, it is possible to have subcentimeter errors, though this depends on the size of the model you are making. The larger the model, the larger the error you should expect.
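Scale bars can also be created through the Python API. The sketch below assumes you have already placed two markers in the GUI, and the 0.5 m distance is just a placeholder for whatever you actually measured.

```python
# Sketch of adding a scale bar between two existing markers and rescaling the model.
# The 0.5 m distance is a placeholder; use your real measurement (in meters).
import Metashape

chunk = Metashape.app.document.chunk

point1, point2 = chunk.markers[0], chunk.markers[1]  # markers already placed in the GUI
scalebar = chunk.addScalebar(point1, point2)         # same as "Create Scale Bar"
scalebar.reference.distance = 0.5                    # measured distance, in meters
chunk.updateTransform()                              # same as the Update Transform button
Metashape.app.document.save()
```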

Exporting your Model

Now that your model is complete for now (scaled or not), you'll want to share it!

Metashape offers a variety of ways to share your model by going to File --> Export --> Export Model. Here are a few of the most common:

  • Wavefront OBJ (.obj): One of the most commonly used 3D mesh file types, used for sharing models with 3D software such as MeshLab and CloudCompare. If you want to share your models on the online 3D model presentation platform Sketchfab, definitely export in this format for uploading.

Check out the beginning of this tutorial for information on how to upload your model into Sketchfab.

  • 3D pdf (.pdf): A pdf, but in 3D! Anyone using Adobe Reader will be able to view your model straight from their computer. A good choice for mass distribution when others might not have the technical skills to open an obj.

  • 3D printing (.stl): Want to prepare your model for 3D printing? .STL is the file type used by most 3D printers (check out the BC 3D printing page here).
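And if you have been scripting along, exporting is one more call. The output path below is just an example, and format enum names may differ slightly by Metashape version.

```python
# Sketch of exporting the finished model (plus its texture files) as an OBJ.
import os
import Metashape

chunk = Metashape.app.document.chunk
out_path = os.path.expanduser("~/Desktop/Flutie/flutie_model.obj")  # example path

chunk.exportModel(out_path, format=Metashape.ModelFormatOBJ)
```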

That's it for this introduction to processing 3D models! A future tutorial will go through the process of creating full 360-degree models from photos using chunks, but for now we leave you with this example model! Meanwhile, check out the Metashape documentation for more information!

Want to see more models made at Boston College? Check out our Sketchfab Collection!
