A series of workshop materials and tutorials covering 3D modeling, AR, and VR
Below are several tutorials focused on different aspects of 3D modeling, AR, and VR project creation. We will be expanding on these in the coming months, so if there is a particular topic you are interested in feel free to contact the BC Digital Scholarship Team at digitalscholarship@bc.edu!
Reality Composer (RC) for iOS, iPadOS, and macOS makes it easy to build, test, tune, and simulate AR experiences for iPhone or iPad. With live linking, you can rapidly move between Mac and iPhone or Mac and iPad to create stunning AR experiences, then export them to AR Quick Look or integrate them into apps with Xcode. This tutorial will introduce the basic functionality of the app and review the interface.
Please note that screenshots for this introduction were created using Reality Composer on a MacBook. The screen layout is slightly different when working directly on an iPad, but the steps are the same.
Reality Composer is free to download from the App Store for iPad or iPhone, or can be downloaded as part of the free Xcode for macOS (Apple Account required). Be aware that Xcode is quite a large program for developing applications, so it may not be feasible for older MacBooks or those without much storage space. Running Reality Composer on a MacBook also requires you to send your project to an appropriate iPad/iPhone for testing (see below) or open it in a simulator using Xcode.
Note that Reality Composer projects (i.e. what opens in Reality Composer) are saved as .rcproject files, while sharable experiences that open directly on an iPhone or iPad with the Quick Look tool are .reality files.
When you open a new Reality Composer project, your first decision is what kind of anchor you want for the project. The anchor type will determine what requirements are needed to open your scene in AR.
There are five relatively straightforward anchor types to choose from in Reality Composer:
Horizontal: will look for any horizontal surface (e.g. a table or the floor) to open your experience on
Vertical: will look for any vertical surface (e.g. a wall) to open your experience on
Image: will look for a specific defined image that you determine (e.g. a painting or business card) to open your experience around
Face: will look for a person's face (as recognized by Apple's camera) to open your experience around
Object: will look for a previously scanned physical object (see details at the end of this workshop) to open your experience around
Once you have chosen an anchor, the main window of Reality Composer will open; it changes slightly based on the anchor type you have chosen (pictured above is the horizontal anchor view). You can also change your anchor in the Properties pane on the right (as well as change the overall physics of the scene, e.g. to allow things to fall through the floor; this can also be set object-by-object later).
The window opens with two objects already present: a cube and a little placard showing some text. Before checking those out, take a moment to familiarize yourself with the other options on the main toolbar. The most important ones for us will be the Add, Behaviors, and Properties buttons, but we can quickly review them all.
The Scenes option opens up a sidebar that allows you to name the "scene," or AR view, you are currently working in, as well as create new scenes that can be linked by certain actions (see below).
The Frame button just lets you navigate more easily around your scene by allowing you to zoom in on certain objects or on the scene as a whole.
The Snap tool will snap objects to other objects or to certain horizontal/vertical lines within the scene (similar to the Photoshop snap tool).
The Modify tool allows you to swap between adjusting an object's position/rotation and adjusting its length/width/size (this can also be done in the properties pane, as we will see).
The Space button swaps between "global" and "object-oriented" coordinates, allowing you to move an object along various axes within your scene.
The Add button lets you add more models of many different types to your scene.
The Play button (also an AR button when working on an iPad/iPhone) allows you to test out your scene on a device.
The Send to button (Mac only) allows you to send your scene directly to a linked iPad for testing.
The Info button checks for updates to your models/scenes.
The Behavior button allows you to assign different behaviors to your object based on specific prompts (e.g. if the object is tapped, it flies in the air).
The Properties button allows you to edit the properties of a specific model or of the scene as a whole.
Exploring and playing around with objects is a good way to learn, starting with what comes in our default scene.
Clicking on the cube will automatically open the Properties pane, allowing you to directly edit its various properties like width, height, color, etc. You can also name the object, transform it in space (if you have exact specs you want), and decide if it will obey the laws of physics (e.g. fall to the surface if it appears up in the air). You can also edit directly in the viewing window once an object is selected: arrows pointing along axes allow you to move an object, while clicking the tip of an arrow will allow you to rotate an object around that axis. Give it a try!
Clicking on the placard will open a similar pane, though this one also allows you to edit the text appearing on the sign. Each object you add will have its own individual properties that you can edit.
If you want to add a new object, just click the + Add button. You have the option of many pre-installed 3D objects to work with, as well as signs that can hold text and "frames" which can hold attached images. You can also introduce your own objects by following our tutorial on the subject.
I can drop a plane into our scene by clicking and dragging it or by double-clicking it.
You can make your scenes dynamic with actions and reactions by adding behaviors to your objects. I'm going to select the plane and then click the Behaviors button on the toolbar; a new pane will open up along the bottom of the page.
Clicking the + button will open the new behavior pane, where there are several pre-set behaviors you can choose from; if you scroll down to the bottom, you can create a custom behavior. For now, I will choose Tap and Add Force so we can give our plane some movement.
The behavior pane has its first behavior! You can rename the behavior (currently called Behavior) in the left pane; the Trigger pane allows you to determine what action will make the behavior start (in this case, tapping), as well as which affected objects have the trigger attached (in this case, the plane, but you could add more).
Finally, the action sequence allows you to create a series of effects that happen, either simultaneously or one after another, when the behavior is triggered. In this case, we are going to have the plane start moving forward at a certain velocity.
Our plane will move forward along one axis when triggered, but that's not really taking off. Adding a second force in an upward trajectory after a moment will make this takeoff look a bit more realistic. To add further actions to the sequence, simply tap the + button next to the words Action Sequence in the Behaviors window. This will pop up different pre-coded actions you can choose from.
In the image above, I added two new actions: a Wait and a second Add Force at a 45-degree upward angle. Importantly, I directly attached the Wait to the first Add Force (with a drag and drop), which means these two actions begin simultaneously, and the second Add Force will not start until the first pair is complete. This means our plane will move forward a certain amount before briefly "taking off".
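Curious what a 45-degree force actually means for the motion? A quick back-of-the-envelope calculation (plain Python, with a made-up force magnitude) shows how the force splits into forward and upward components:

```python
import math

force = 1.0                   # magnitude set in the Add Force action (arbitrary units)
angle = math.radians(45)      # the 45-degree upward trajectory

forward = force * math.cos(angle)  # component keeping the plane moving ahead
upward = force * math.sin(angle)   # component lifting the plane off the ground
print(f"forward: {forward:.3f}, upward: {upward:.3f}")  # both ~0.707
```

At 45 degrees the two components are equal, which is why the plane keeps moving forward as it climbs.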
Now that we have an experience, we need to test it out to see if it functions correctly. There are a few ways to do this:
If you are working on an iPad or iPhone, it's easy. Just tap the AR button on the top toolbar to open the experience in AR, and then the Play button to start the experience from the beginning.
If you are working on a Mac, it's a bit more difficult. If you hit the Play button on the top toolbar, the experience will start, but it obviously won't be in AR, which makes testing a little difficult (though you can still test the functionality in the building screen, as pictured below).
There are other options for testing from a Mac, however. If you have an iPhone or iPad handy that has Reality Composer installed, you can connect it to your computer via a USB/USB-C cable. If you then open Reality Composer on both devices and hit the Send To button on your Mac, the file will open in Reality Composer on your iPad and be editable/testable there!
Note that if you are using specially imported models, they may not be available on your second device, unless they have been imported there as well.
Another option is to export the file and share it as a .reality file. To do this, go to File --> Export, and choose to export either the entire project or just the current scene. After saving it on your computer, you can navigate to that folder, select the .reality file, again go to File --> Share in the Finder menu, and choose how you want to share the file (text, AirDrop, etc.) to your iPad or iPhone. Opening it on your iPhone or iPad in this way does not require Reality Composer, as it uses the built-in Apple Quick Look platform (you can also share your experience with other people in this way!).
We are going to add one last behavior that will make our action "replayable": returning the plane to its original location after a certain amount of time. Otherwise, once we tap the plane the first time, it's gone.
This can be done by adding one more action to our action sequence: a Move, Rotate, Scale To action that will move our object back home. Adding this action to the end of the action list and selecting the plane as the affected object allows you to choose where you want the plane to return to. In this case, I will adjust it so the plane ends up back where it started (by moving the plane in the image back to the left, to the starting place). Also note that I added one more Wait action so that the plane will wait one second after it stops being impacted by the force before returning home.
And that's it! Now the plane will return to its original location. Project testing videos and the files created with this tutorial can be found below. There are obviously tons of other behaviors and models to play with, so give it a try!
As a bonus, you could use the Hide and Show behaviors to make the plane seem to magically "appear" back in its home location at a certain moment. See if you can make it work!
With the use of a couple of plugins, both Omeka S and Omeka Classic allow you to incorporate 3D models into your digital exhibits. Getting both your site and the 3D models in the proper form, however, takes a bit of work, particularly for Omeka S.
While both Omeka S and Omeka Classic display 3D models using the Universal Viewer plugin, there are a few others that are necessary to ensure the framework is properly set up.
Plugins/Modules to Install in both Omeka S and Omeka Classic
Universal Viewer - This is the actual viewer that can display both 3D models and normal images/pdfs/etc. Its design is much nicer than the standard Omeka presentation view.
Archive Repertory - Keeps original names of imported files and puts them in a hierarchical structure (collection/item/files) in order to get readable URLs for files and to avoid overloading the file server.
Additional Modules for Omeka S
IIIF Server: Integrates the IIIF specifications to allow you to instantly share images, audio, and video.
Image Server: A simple and IIIF-compliant image server to process and instantly share images of any size in the desired formats.
As of the writing of these instructions, installing the Image Server module required the ability to directly access and install the module on the server, rather than copying the files over using FTP/SSH and installing from the Omeka Modules page.
To display 3D models, we use the Universal Viewer plugin, which relies on the ThreeJS library to display the models. This means the model needs to be in .json format before you upload the file to Omeka. There are a variety of ways to do this, but an easy method is to use the ThreeJS editor.
Open the ThreeJS Editor (threejs.org/editor).
Upload your 3D model (OBJ, gltf, etc.) by going to File --> Import and navigating to your 3D file.
The mesh (geometry) of your model will appear. The next step is to attach the texture (color overlay) to the model, if it is not already integrated into your file. To do this, click the "+" next to the name of your file on the top of the right sidebar and select the material associated with it (generally "material_0"). Then choose the "Material" tab below.
From here, where it says "Type," select "MESHBASICMATERIAL," which should be at the top of the list of choices. Next, click the dark rectangle next to "Map" a bit further down the sidebar, and navigate to your texture (usually a .jpg, if you are uploading an obj). Finally, check the small box next to "Map" and the texture should appear.
Your model should look good now! The last step in the editor is to go to File --> Export Scene, and the model will be exported as a .json to your downloads folder.
Your model is now in the correct format (.json) to be uploaded to Omeka. In each case, you will want to be sure that the json extension and the application/json file type are allowed (in Omeka Classic, go to Settings --> Security from the top of the Admin dashboard; in Omeka S, go to Settings in the Admin sidebar and scroll down to the Security section).
Now just create a new Item as you would any other item in Omeka, with the .json file as your main file upload! Note that if you want a proper thumbnail in Omeka Classic, you should upload two files for your item, first a normal jpg of the thumbnail you want to use, and then the .json file of your 3D model. In Omeka S, there is an option when creating your item for adding a thumbnail image if you wish.
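If you have many models to add, the same step can be scripted against the Omeka S REST API. Below is a minimal sketch using Python's requests library; the site URL and API keys are placeholders, and property_id 1 assumes dcterms:title in a default installation:

```python
import json
import requests

OMEKA_API = "https://your-omeka-site.example/api"  # placeholder site URL
KEYS = {"key_identity": "YOUR_KEY", "key_credential": "YOUR_CREDENTIAL"}

# Item metadata plus one "upload" media: the .json model from the ThreeJS editor
item = {
    "dcterms:title": [{"type": "literal", "property_id": 1, "@value": "My 3D model"}],
    "o:media": [{"o:ingester": "upload", "file_index": 0}],
}

with open("model.json", "rb") as f:
    r = requests.post(
        f"{OMEKA_API}/items",
        params=KEYS,
        # multipart request: "data" carries the JSON payload, "file[0]" the model file
        files={"data": (None, json.dumps(item)), "file[0]": ("model.json", f)},
    )
r.raise_for_status()
print("Created item:", r.json()["o:id"])
```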
When creating 3D models from photographs, taking the photos is more than half the battle! Here are some tips and tricks to make your processing a breeze.
Having the right equipment will greatly speed up your processing time. Fortunately, in today's world nearly any camera is good enough to take your photos. A few things to keep in mind:
Use a camera with at least 5 MPix resolution
Avoid ultra-wide angle and fish-eye lenses if possible (it is possible to compensate for these effects, but better to not worry about it)
Fixed lenses are preferred, if possible, but using a zoom lens is fine as long as you stick with one focal length (that is, don't change the zoom levels during the photo-taking process)
If you are making models of objects, some bonus equipment may prove helpful, even if it is not required:
A lightbox may be used to spread even, diffuse light across the object being modeled, helping to avoid the effects of shadows (see Environment below)
A turntable will rotate an object so you can get pictures of all sides, in case you don't have the room to move around the object. The lightbox available in the digital studio has marks at around 10-degree intervals, allowing you to rotate evenly around the object and know where you began and ended.
A tripod keeps your camera steady at various angles, and will make the process of using a turntable smoother.
Each of these items is available to reserve in the O'Neill Digital Studio.
The environment in which you take the photos is almost as important as the equipment you use. Here are a few things to think about, whether you are taking photos of a fixed feature outdoors or in a controlled, indoor environment:
Shadows are bad; even, diffuse light is good. If you are working outdoors, a cloudy day is best, though this is obviously not always possible. If you are working inside, using a lightbox (see above) can help to avoid unwanted shadows.
A consistent background color can be helpful, particularly if you end up needing to mask your photos during processing. Again, a lightbox can help with this. Otherwise, it is good practice to position your object in front of a consistent and contrasting background.
Consistency is key, whether indoors or out. You don't want any objects or people moving around in the background of your images. This can confuse the processing software and make it more difficult to align your photos.
For successful 3D modeling of objects with photogrammetry, overlapping images are key, as the object itself is reconstructed through matching pixel groups across photos. Indeed, you ultimately want images of each face of an object from multiple angles in order to make your processing as simple as possible.
Two important notes!
(1) DO NOT ZOOM during the photo-taking process; stick with one focal length and, instead of zooming, move your body forward or backward as needed.
(2) In order to get the best model, try to fill up as much of the camera frame as possible with the object. You are making a model of the object after all, not the rest of the world.
The images below suggest a basic process for taking photos of a small object. It combines two factors for each photo, the angle at which the photo is taken and the rotation of the turntable on which the object is sitting. It is recommended to take photos at multiple angles of each side of the object, rotate the object ~10 degrees, and repeat the process until you have taken photos of the entire object.
If using a tripod, it may be easier to take all your photos at one angle, then adjust the tripod and rotate through the object again at the second angle.
Fewer photos may be required for simple objects, but we do recommend at least 30 photos for any photo-based model to ensure that there is enough overlap for a full model to process.
Check out the Digital Scholarship Sketchfab collection for the final results of the BC eagle model, created using ~100 photos (around 36 photos at 3 angles).
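As a sanity check, the suggested 10-degree turntable steps at three camera angles work out to roughly that photo count:

```python
rotation_step = 10   # degrees between turntable positions
camera_angles = 3    # camera angles photographed at each position

photos = (360 // rotation_step) * camera_angles
print(photos)  # 108, in line with the ~100 photos used for the eagle model
```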
In order to put your own 3D models into Apple's AR Quick Look, they must be converted to a .usdz file from the more traditional .obj file. Two methods are outlined below. You will need:
the model you want to put into AR
Reality Composer, free from the App Store for iPad or iPhone, or as part of the free Xcode for macOS (Apple Account required)
either a Sketchfab account or Reality Converter (macOS only, Apple Account required)
Do you want to play around with converting models but don't have any of your own? Feel free to use our example model below or download one from Sketchfab or from our Digital Scholarship Collection (see Sketchfab instructions below)
For simple projects, using a combination of Sketchfab and Reality Composer on an iPad is recommended.
Whether you have created an original 3D digital object or generated a model from a real-world subject, many times the software you are using does not allow for a direct export to .usdz, the file format required by Apple for use in its AR toolkit. While there are many plugins and tools available that can make the conversion from different platforms, here we discuss two simple workflows using Apple's Reality Converter and the online 3D repository Sketchfab.
Sketchfab makes it very easy to convert your model; in fact, it is automatically converted when you upload the model to the site!
Note that Sketchfab is free for anyone willing to make their models publicly available to download with a Creative Commons license. If you wish to charge for downloads or keep them private, you must create a paid account.
Go to sketchfab.com and click the Sign Up button in the top-right of the page. Choose your username and confirm with an email and password.
Once signed in, click the orange Upload button in the top-right. This will bring up a page that lets you drag and drop a wide variety of files for upload. If you are uploading an .obj that includes a texture, the easiest way is to zip the .obj, .mtl, and .jpg files associated with the model together, and drop the entire zip file into the uploader.
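If you have several models to publish, uploads can also be scripted through Sketchfab's Data API (v3). Here is a minimal sketch in Python; the API token comes from your Sketchfab account settings, and the file and model names are placeholders:

```python
import requests

API_TOKEN = "YOUR_SKETCHFAB_API_TOKEN"  # from your Sketchfab account settings

with open("model.zip", "rb") as f:  # the zipped .obj/.mtl/.jpg described above
    r = requests.post(
        "https://api.sketchfab.com/v3/models",
        headers={"Authorization": f"Token {API_TOKEN}"},
        files={"modelFile": f},
        data={"name": "My model"},
    )
r.raise_for_status()
print("Model uid:", r.json()["uid"])  # processing then continues on Sketchfab's side
```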
Click Upload Files and wait. Depending on how large your model is, it will take some time for the processing to complete. While you wait, you can add a variety of metadata to make your model more findable, including placing it in categories and tagging it with keywords. At this point, it is also necessary to click Free under the Download section on the right side of the page, unless you have an upgraded account on the site.
Once the processing is done, the orange Publish button in the bottom-right corner will become available. Click it, and your model is now published!
Once published, you will be able to access your model's Object Page, which allows you and others to manipulate your model, see options for embedding it in another site, and see other model information. This is also where you can download your model as a .usdz. Just below your username beneath the model is a Download 3D Model button; click the button, and a variety of download types are available, including Augmented Reality Format (USDZ). Hit download and you're done!
Note: it can take a few minutes even after publication for your model to be available to download. If you select Edit Properties in the top right, you can edit your metadata, and if your model is not yet ready for download, a yellow box will inform you that the model is still being prepared.
Reality Converter is Apple's 3D conversion tool for creating .usdz files. Its process is quite easy as well and allows your models to remain private (if that is important to your project).
Download Reality Converter to your Mac (Apple ID required for download) or from the App Store to your iPad/iPhone.
Once downloaded and installed, open the program on your device.
As with Sketchfab, Reality Converter accepts a variety of files and opens with a Drop files here screen. Drop your .obj file to be converted here.
Your 3D model will convert and should appear in the window, but it will probably show only your mesh with no texture. To add the texture, select Materials in the top-right, click Base Color, and then navigate to the texture file (usually a .jpg) associated with your .obj.
Your model should be looking better now! To save your file as a .usdz, simply go to File --> Export and choose where to save your converted model.
Reality Composer is Apple's no-coding-required platform for creating basic AR experiences. Here we will talk about how to open your model using this app and create a simple experience that allows your model to appear in AR and react when tapped.
Download Reality Composer, free from the App Store for iPad or iPhone, or as part of the free Xcode for macOS (Apple Account required).
After installing and opening Reality Composer, the first screen asks you to choose an anchor for your model. We won't go into all the details here (see the full guide for more), so simply choose a horizontal anchor so it will act as if your model is sitting on the table or floor.
A grid surface will appear with a little square box, along with a small placard sitting in front of it. Feel free to delete this sample object (or keep the placard if you want; if you click on it and scroll down in the properties pane on the right you can change its text to fit your model). To add your converted model, click the + sign in the middle of the top toolbar and click the Import button (Mac) or the large plus sign beneath the word import (iPad/iPhone). Navigate to your converted .usdz file and select it.
Your model should appear in the window! At this point, you may need to rotate, scale, or reposition your model. Clicking on the model will open up the Properties pane, which allows you to change the scale. It also creates x-, y-, and z-axes around your object, allowing you to move it around your window. Clicking on the arrowhead of each axis allows you to rotate the object along that axis. For now, simply make sure your object is rotated so that it makes sense for viewing.
Note you can also change the location and rotation of your object in the Properties pane. For more information on basic tools in Reality Composer, see the full tutorial.
Bonus: want your object to move about when viewed? Select the object and click the Behaviors button on the toolbar (looks like an exploding arrow pointing to the right). Click the + sign to add a new behavior and click Tap and Flip to automatically set up this behavior. See more about behaviors in the full tutorial!
In some cases when using models of real-world objects, the spatial "center" of a model may not line up with the object's center, and this may carry through the .usdz conversion. This is noticeable when, for example, you try to spin your object around its central axis. If this is the case, using Sketchfab to convert your file should reset the central axis to the center of the 3D object.
Now you are ready to export and share your model in AR!
It's very easy to share your model from your iPhone or iPad. Simply click the ... button (triple dots) in the top-right corner of the toolbar and select Export. At this point, you can choose to share an entire project or just this scene. As we only have one scene right now, select Current Scene and then Export. Depending on what apps you have installed on your device, you can now share via Messages, Gmail, AirDrop, or a variety of other methods.
On a Mac, go to File --> Export, where you again have the option of the current scene or the whole project. Choose where to save, and a .reality file will be saved to that location, which can then be shared in whatever way you wish. Reality files can be opened directly by iOS devices in AR or imported into Reality Composer by whomever you share them with for use in their own projects.
Agisoft Metashape (available in the Digital Studio) is a stand-alone software product that performs photogrammetric processing of digital images and generates 3D spatial data (point clouds, meshes, etc.) to be used in GIS applications, cultural heritage documentation, and visual effects production, as well as for indirect measurements of objects of various scales. Its user-friendly workflow sets it above many of the other 3D model-processing platforms.
To create a 3D model from photographs takes only a few easy steps, which are nicely summarized for you in Metashape's "Workflow" dropdown. It even grays out options that are not available to you yet!
In this introduction, we will go step-by-step through the creation of a simple 3D model, in this case using photos of the Doug Flutie "Hail Flutie" statue taken outside of Boston College's Alumni Stadium in Fall 2020 (evident from the mask on Doug's face). This model will be "in-the-round," as it were, but not a full 360 degrees, as the statue is fixed in the ground.
These steps are:
Adding your photos
Aligning your photos/building a dense cloud (if necessary)
Building your mesh
Building your texture
Making your model to scale
Exporting your model
Let's get started!
Once you are finished taking photos of an object with your camera or camera phone, you will want to copy the images off of your device and place them in a recognizable location on your computer.
For this example, I have created a folder called "Flutie" and placed it on my desktop. Inside of it are all the photos I took of the statue. For tips on taking photos of objects either inside the Digital Studio lightbox or "in the wild," check out our page on Tips and Tricks for Taking Photos for 3D Model Creation or our BC Digital Scholarship Workshop video.
Now that the photos have been offloaded to the computer, it is time to add them to our Metashape project. Our Workflow drop-down makes this easy, as you are able to select individual images to add to the project or an entire folder, which works great in our case.
The photos will load (185 in our case). Metashape will then ask you to choose what type of scene you want to create. For our case, Single Cameras is the proper option, as each image represents a single photo of the object.
The second option, Dynamic scene (4D), is for the reconstruction of dynamic scenes captured by a set of statically mounted, synchronized cameras. For this purpose, multiple image frames captured at different moments in time can be loaded for each camera location.
Now you can see that your images are loaded in the Photos pane at the bottom of your screen. This is a good time to save your project. It's suggested that you save the project in the same folder as the images, as the project and images are now linked together.
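Incidentally, everything in the Workflow dropdown is also exposed through Metashape's Python scripting API (Professional edition). Here is a hedged sketch of this first step, assuming the Flutie folder of photos sits on the desktop:

```python
import os
import Metashape  # the Python module bundled with Metashape Professional

photo_dir = os.path.expanduser("~/Desktop/Flutie")
photos = [os.path.join(photo_dir, f) for f in os.listdir(photo_dir)
          if f.lower().endswith((".jpg", ".jpeg"))]

doc = Metashape.Document()
chunk = doc.addChunk()
chunk.addPhotos(photos)                          # same as Workflow --> Add Photos
doc.save(os.path.join(photo_dir, "flutie.psx"))  # keep the project next to the images
```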
Now that the images are loaded, it's time to actually start processing! We return to our Workflow dropdown and now notice that the Align Photos option is available.
Aligning the photos is the part of the process that takes the longest. At this stage, Metashape finds the camera position and orientation for each photo and builds a sparse point cloud based on matching pixel groups between images. This point cloud will become the basis of our digital model.
Choosing Align Photos from the dropdown gives you a few options. The default options are generally fine, although the Accuracy setting will determine how long the process takes. In general, High accuracy is a good choice unless you have a very large number of photos (>200), in which case Medium may be a better choice. But really it depends on how much time you have. Aligning photos for the Flutie statue with 185 photos took 20-30 minutes.
At this point, the processing box will appear and keep you posted as the software aligns your photos, starting with Selecting Points, then Selecting Pairs, Matching Points (which will take the longest), and Estimating Camera Locations.
Since the pixel groups used for matching are randomly selected, aligning photos several times may produce different results. If you are having difficulty getting your photos to align, try reprocessing (make sure to click Reset current alignment in the Align Photos options).
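In the scripting API, the alignment step is two calls, continuing the sketch above (keyword names vary a bit between Metashape versions; downscale=1 corresponds to High accuracy in recent releases):

```python
# Match features across photos, then estimate camera positions
chunk.matchPhotos(downscale=1, generic_preselection=True)  # downscale=1 ~ High accuracy
chunk.alignCameras()  # produces the sparse point cloud
doc.save()
```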
The sparse point cloud, the result of our photo alignment, can be seen above. Notice that the general shape of the statue is already apparent, which means the photos were taken reasonably well. The blue squares represent the calculated locations from which the images were taken. Clicking on an image in the Photos pane will highlight the chosen image on your model, useful for troubleshooting issues. Finally, small checked boxes will appear next to the images that have successfully aligned; if a number of your images have not aligned, you will want to rerun the Align Photos process or retake your images to cover the portion of your object that is having issues. In our case, all the photos aligned, which is fantastic!
If processing in the Digital Studio, be aware that inaction on the computers may eventually log you out and cause you to lose your work. Keep an eye on the processing, especially in the Aligning Photos stage!
You might notice from the image above that a lot of the surrounding environment appears in our point cloud, particularly the concrete surface around the statue. This often happens when it is difficult or impossible for your object to take up the majority of each picture.
This moment offers the opportunity to clean up the point cloud before building your mesh. There are a variety of ways to clean the point cloud, but the easiest is to use the Select and Crop tools, which work the same way as cropping an image.
First, use the Select tool to roughly select the object itself. Once the area you want to keep is selected, use the Crop tool to crop everything else out. Easy!
The Delete tool seen above works in the opposite way of the crop tool. Clicking the X will delete whatever points you have selected, which offers a second way to clear out unwanted points!
For objects with a lot of detail, which need a very high-resolution mesh, it may be necessary to build a Dense Cloud after aligning your photos. A dense cloud is simply what it says: a denser point cloud created from points that match between your photos.
Choosing this option will again ask you how high you want the accuracy of your dense cloud to be. Note that the dense cloud process can take several hours depending on the number of photos taken, so be sure you have the time available before starting it.
If working to build a full 360-degree model, building a dense cloud is often necessary to merge the top and bottom of the model. This process will be covered in a future tutorial.
In the dense cloud seen above, note how some cleaning of the point cloud using the Delete and Crop tools would be useful before moving on to the creation of the mesh, as described in the section above.
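Scripted, the dense cloud is built from depth maps. Note that Metashape 2.x renames buildDenseCloud to buildPointCloud, so this sketch follows the 1.x names:

```python
# Depth maps first, then the dense cloud (Metashape 1.x naming)
chunk.buildDepthMaps(downscale=4)  # downscale=4 ~ Medium quality
chunk.buildDenseCloud()            # buildPointCloud() in Metashape 2.x
doc.save()
```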
Now that your photos are aligned, a new option appears in our Workflow dropdown: building the mesh of our digital object. Fortunately, this process is much faster!
In short, this step takes your point cloud, which simply represents a group of points floating in space, and turns it into an actual 3D surface, a mesh, by connecting these points together.
The Build Mesh options are not too complex. They allow you to choose your point cloud (sparse, or dense if you made one) and pick the number of faces you want your mesh to have. The final option is Surface type, which should generally remain set to Arbitrary (3D).
The other option, Height field, is optimized for modeling planar surfaces, such as terrains or bas-reliefs. It should be selected for aerial photography processing, as it requires less memory and allows for processing larger datasets.
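The corresponding scripting call is a single line, continuing the sketch above (exact keyword and enum names depend on your Metashape version):

```python
# Build the mesh; Arbitrary mirrors the Surface type option discussed above
chunk.buildModel(surface_type=Metashape.Arbitrary,
                 source_data=Metashape.DenseCloudData,  # dense cloud as source (1.x naming)
                 face_count=Metashape.MediumFaceCount)
doc.save()
```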
The last major step in processing our model is to build the texture. The texture is the colored overlay that will sit on top of our created 3D mesh. Again, we simply return to our Workflow dropdown and select Build Texture!
The options here are a bit more complex than in other steps, though in general the defaults are fine. The breakdown is as follows (a scripted version follows the list):
Texture type: Diffuse map (default) is the normal texture mapping; Occlusion map is used for calculating ambient lighting, so it is not necessary for basic models
Source data: will change based on the texture type. For our regular Diffuse texture, it will be the images
Mapping mode:
Generic (default): the program tries to create as uniform a texture as possible.
Adaptive orthophoto: the object surface is split into flat and vertical regions. The flat part of the surface is textured using an orthographic projection, while vertical regions are textured separately to maintain accurate texture representation in those regions.
Orthophoto: the whole object surface is textured in an orthographic projection. The Orthophoto mapping mode produces an even more compact texture representation than the Adaptive orthophoto mode, at the expense of texture quality in vertical regions.
Spherical: appropriate only for a certain class of objects with a ball-like form. It allows a continuous texture atlas to be exported for this type of object, so that it is much easier to edit later.
Single photo: generates the texture from a single photo, which can be selected from the 'Texture from' list.
Keep UV: generates the texture atlas using the current texture parametrization. It can be used to rebuild the texture atlas at different resolutions or to generate the atlas for a model parametrized in external software.
Unsure? Just use Generic and see how it looks.
Blending mode:
Mosaic (default) - implies a two-step approach: the low-frequency component of overlapping images is blended to avoid seamline problems (a weighted average, with the weight depending on a number of parameters, including the proximity of the pixel in question to the center of the image), while the high-frequency component, which is in charge of picture details, is taken from a single image: the one that offers good resolution for the area of interest and whose camera view is almost along the normal to the reconstructed surface at that point.
Average - uses the weighted average value of all pixels from individual photos, with the weight depending on the same parameters considered for the high-frequency component in Mosaic mode.
Max Intensity - the photo with the maximum intensity of the corresponding pixel is selected.
Min Intensity - the photo with the minimum intensity of the corresponding pixel is selected.
Disabled - the color value for each pixel is taken from a single photo, chosen like the one for the high-frequency component in Mosaic mode.
Unsure? Go with the default Mosaic first; if you are getting weird blending of colors, try Average instead.
Texture size/count: specifies the size (width & height) of the texture atlas in pixels and determines the number of files the texture will be exported to. Exporting the texture to several files allows you to achieve greater resolution in the final model texture. Again, the default is probably fine for beginning models.
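Scripted, the UV mapping and texturing steps map directly onto the options above, again with version-dependent keyword names:

```python
# Generic mapping, Mosaic blending, 4096x4096 atlas (the defaults discussed above)
chunk.buildUV(mapping_mode=Metashape.GenericMapping)
chunk.buildTexture(blending_mode=Metashape.MosaicBlending, texture_size=4096)
doc.save()
```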
In some cases, you will want your model to be "to scale" so that it can easily be used in other immersive technologies (gaming, AR, VR). This process is quite easy in Metashape, as long as you have taken a few distinct measurements of various portions of the object you are modeling.
In some cases, high-tech cameras or drones that incorporate GPS technology into their workflow may automatically provide spatial information for scaling your model. No need to worry about the manual process performed here in that case!
In order to scale your model, you need to pick the precise points you are measuring between. This can be done either on the model itself or in individual pictures. Simply right-click and select Add Marker.
Each marker will have a name (point 1, point 2, etc.). To tell Metashape the distance between these two points, you need to swap from the Workspace pane that you have been using so far to the Reference pane. This can be done at the bottom left of your Metashape window.
Your reference pane will look something like this. Notice that it already contains the two markers we just created (point 1 and point 2). The images listed at the top will contain spatial information for your photos if your camera automatically includes GPS data.
To add the length between your two points, select both points in the reference pane, right click, and choose Create Scale Bar. This will add a scale bar to the bottom portion of the pane, where you can then type in the distance you measured between the points.
Note that the measurements are in meters, so be sure to convert appropriately!
Once you have added your scale bars, it's time to see how much error your model has! Simply go up to the reference toolbar (right above the list of images in the Reference pane) and click the Update Transform button. Now, taking your measurements into account, the software will scale your model and tell you how much error it has.
I only estimated the measurements above, and you can see my error is pretty bad (18 cm)! In general, it is possible to get sub-centimeter errors, though this depends on the size of the model you are making. The larger the model, the larger the error you should expect.
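For completeness, the scaling step can also be scripted once markers exist. This sketch assumes two markers have already been placed and uses a made-up measured distance of 0.5 m:

```python
# Create a scale bar between the first two markers and enter the measured distance
scalebar = chunk.addScalebar(chunk.markers[0], chunk.markers[1])
scalebar.reference.distance = 0.5  # meters! convert your measurement if needed

chunk.updateTransform()  # same as the Update Transform toolbar button
doc.save()
```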
Now that your model is complete for now (scaled or not), you'll want to share it!
Metashape offers a variety of ways to share your model by going to File --> Export --> Export Model. Here are a few of the most common (a scripted export example follows the list):
Wavefront OBJ (.obj): One of the most commonly used 3D mesh file types, it is used for sharing models with 3D software such as Meshlab and CloudCompare. If you want to share your models on the online 3D model presentation platform Sketchfab, definitely export in this format for uploading.
Check out the beginning of this tutorial for information on how to upload your model into Sketchfab
3D pdf (.pdf): A pdf, but in 3D! Anyone using Adobe Reader will be able to view your model straight from their computer. A good choice for mass distribution when others might not have the technical skills to open an obj.
3D printing (.stl): Want to prepare your model for 3D printing? .STL is the file type used by most 3D printers (check out the BC 3D printing page here).
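In the scripting API, exporting is one call per format; a sketch for the .obj case (enum names vary slightly by version):

```python
# Export mesh + texture as a Wavefront OBJ, ready for Sketchfab or Meshlab
chunk.exportModel(path="flutie.obj",
                  format=Metashape.ModelFormatOBJ,
                  texture_format=Metashape.ImageFormatJPEG)
```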
That's it for this introduction to processing 3D models! A future tutorial will go through the process of creating full 360-degree models from photos using chunks, but for now we leave you with this example model! Meanwhile, check out the Metashape documentation for more information!
Want to see more models made at Boston College? Check out our Sketchfab Collection!
Blender is a free and open source 3D creation suite. It supports the entirety of the 3D pipeline: modeling, rigging, animation, simulation, rendering, and video editing.
In this tutorial, we will be using three programs, all of which are free to download:
Blender (available on PC, Mac, or Linux)
Reality Composer (free on the App Store, or with an Apple ID for MacBook)
Reality Converter (free on the App Store, with an Apple ID for MacBook)
When starting a new Blender project, this is the first screen you will see. While it may look daunting, we will only be using a few tools in this tutorial to create new objects and perform minor edits on them.
The vertical toolbar on the left side of the screen is where we'll find most of the options we need for object manipulation.
This toolbar is very similar to what you might see in Photoshop. When an object is selected, these buttons let you edit its basic properties in the viewer window. Most important for our purposes are the Move, Rotate, Scale, and Transform buttons (3rd-6th buttons counting down, in the group of four tools). These let you edit your object in the standard ways you would expect.
A slightly more advanced tool that may be useful is the Extrude tool, which allows you to take a single face of your object and pull it outward. To access this tool, you must first change the mode you are in from Object Mode to Edit Mode from the drop down just above the manipulation tools.
Now, extruding our default cube once we are in Edit Mode takes just a few steps. First, just to the right of Edit Mode on the toolbar are three options that let us choose whether we want to select points, edges, or faces. The third option allows us to choose a face to extrude.
Now pick which face of our default cube we want to pull outward:
Finally, one of the choices on the expanded Object Manipulation toolbar on the left is called Extrude Region. Choosing this tool will let you pull outward, expanding your object in one direction.
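All of the above can also be done through Blender's built-in Python API (bpy), which is handy for repeatable edits. Here is a minimal sketch that adds a cube and extrudes its faces outward; selecting one specific face programmatically takes extra work with the bmesh module, so this version simply extrudes everything one unit along Z:

```python
import bpy

# Add a fresh cube and enter Edit Mode with face selection active
bpy.ops.mesh.primitive_cube_add(size=2)
bpy.ops.object.mode_set(mode='EDIT')
bpy.ops.mesh.select_mode(type='FACE')

# Select all faces and extrude the region, translating it 1 unit along Z
bpy.ops.mesh.select_all(action='SELECT')
bpy.ops.mesh.extrude_region_move(
    TRANSFORM_OT_translate={"value": (0.0, 0.0, 1.0)}
)

bpy.ops.object.mode_set(mode='OBJECT')
```

Paste this into the Scripting workspace and run it to see the same result as the manual steps above.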
The toolbar on the right displays both the collection of objects within our current scene and the properties of the selected object.
The top of the pane lists all the objects in the current scene and allows you to select them or make them visible/invisible (again, very similar to Photoshop). The lower window displays the object properties, which can be edited directly here. This includes the manipulation properties which you can also edit in the viewing frame itself. There are many other properties, but we don't need to worry about them for now.
Blender comes with a number of pre-created basic objects that you can then manipulate to create what you want. To add a new object, just go to Add on the toolbar next to Object Mode.
Once you have chosen your mesh, you can manipulate it or extrude it as needed to create the object you want.
Apple's Reality Composer program allows you to import .usdz 3D files. Unfortunately, Blender doesn't let you export files in this format directly. I recommend exporting your object as a .glb file, as this will let you keep the texture; then you have two options for a quick conversion:
Use Apple's Reality Converter to quickly drag-and-drop your model and convert it.
Upload your model to Sketchfab and then download it as a .usdz (see the Sketchfab instructions above).
Note that you cannot change the color/texture of imported objects once they are in Reality Composer. As such, you should color/texture your objects in Blender and export as a .glb to keep the material attached. This can be done in the "material" options under the properties of a specific object.
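The .glb export itself can be scripted too, which is useful when converting several objects; a minimal sketch (the output path is a placeholder):

```python
import bpy

# Export the scene as a binary glTF (.glb), which embeds materials and textures
bpy.ops.export_scene.gltf(filepath="model.glb", export_format='GLB')
```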
Meshlab is a powerful open source tool for processing and editing 3D meshes. It provides a set of tools for editing, cleaning, healing, inspecting, rendering, texturing, and converting 3D models. This very short introduction covers downloading the tool, importing an .obj 3D file downloaded from Sketchfab, and taking basic measurements.
As an open-source tool, Meshlab is easy to download and install in just a few simple steps!
Head to the Meshlab website (meshlab.net).
Choose your operating system (Win/Mac/Linux) and click to download.
Navigate to the download folder on your computer and double-click to start the installation.
Sketchfab is a hosting platform for 3D models. Many of these models are free to download and are listed under a Creative Commons license. Here are a few simple steps to download a model from the site:
Go to sketchfab.com. You will need to make an account with an email address and password to download models.
While many models are free to download, others require you to pay. Boston College Digital Scholarship has a collection of models from around BC that are free to download once you have an account.
For this tutorial, we will be using skull models from a biology class at BC, located within the Boston College Digital Scholarship Collection. To download the 3D files, simply open the page containing the model and click "Download 3D model" beneath the name of the model. All downloadable models will have this button.
A few choices of download types will appear: (a) the original format the file was uploaded in (in this case an .obj file); (b) glTF, which is useful for web-viewing frameworks; and (c) USDZ, which is for augmented reality (AR) frameworks. For opening in Meshlab, the original format (.obj) is a good choice. Click "Download" and a zip file containing the .obj will download.
Once downloaded to your computer, unzip the file. You will then need to go into the unzipped folder (skull-a), open the folder called "source", and unzip a second folder called "SkullA", which contains three files, including our .obj. Now you are ready to import the .obj into Meshlab.
Once Meshlab is installed and you have downloaded and unzipped the model you want to open, start up the program. Your opening screen should look like this:
Once open, importing a model is easy. Go to File --> Import Mesh and navigate to the location where you unzipped your .obj file (in my case, the file location seen below). Select "SkullA_scaled.obj" and the file will open in Meshlab. It should look something like the screenshot below.
Meshlab is a powerful tool, but we are only going to be using the very basics for now. To start, you simply need to be able to rotate, zoom, and measure the object.
To look more closely at the skull, simply click and drag to rotate the model in various directions.
To zoom in and out, use two fingers on the trackpad or the scroll wheel on the mouse.
Finally, to measure the object, click on the little measuring tape in the main toolbar, which will look something like this:
Click on the endpoints of the distance you want to measure. In the case below, I measured across the eyes of a different model, Skull B, with a result of approximately 0.095 meters, or 9.5 centimeters! Keep in mind that the units of measurement in Meshlab are meters, so be sure to convert if need be.
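The same measurement idea can be scripted with PyMeshLab, MeshLab's Python library, for example to check a model's overall dimensions (file name taken from the example above; method names per recent PyMeshLab releases):

```python
import pymeshlab

ms = pymeshlab.MeshSet()
ms.load_new_mesh("SkullA_scaled.obj")

# Bounding box of the mesh; remember that Meshlab units are meters
bbox = ms.current_mesh().bounding_box()
print("dimensions (m):", bbox.dim_x(), bbox.dim_y(), bbox.dim_z())
print("diagonal (m):", bbox.diagonal())
```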
Note you can open more than one 3D model at a time in Meshlab, though if they do not have associated spatial data, they might end up on top of one another. You can turn on and off meshes using the little "eye" icon next to the mesh name in the top right corner of the screen.