Blender Photogrammetry Tutorial

If you’ve been interested in Blender photogrammetry but have been unsure of where to start, you’ve just found your starting point. This is the most comprehensive tutorial you’ll find on creating amazing photogrammetry in Blender, and of course, it’s all free. The only thing you really need is a camera, and your smartphone will most likely be enough to get started.

What is photogrammetry?

Chances are you already know what it is, but in case you stumbled upon this tutorial without searching for it, here’s an explanation.

Basically, photogrammetry (also sometimes called 3D photo scanning) is the science of measuring things with images. You take photos of an object from many different angles, and software uses the overlap between them to reconstruct a 3D model of the real-world object.

What do I need to get started?

To start your journey of photo scanning real-life objects and recreating them in 3D, you’ll need a few things, and only one of them costs money (chances are you’ll have it already).

While a DSLR camera will produce higher-quality models and sharper textures, you can still get very good scans with just your phone. In fact, I dare say the majority of people doing photogrammetry with Blender do not need a DSLR to get good results.

If you own an iPhone 12 Pro or a newer Pro model, there’s actually a built-in LiDAR scanner. This lets you skip the tedious process of taking photos manually; instead, you just point the camera at the object and walk around it, almost as if you were filming it.

The limitations of photogrammetry

Before you go out and start taking photos of everything you see, there are some things to keep in mind. Photogrammetry is amazing in many ways, but it’s not magic and there are quite a few limitations on what will be possible to scan.

Reflective & transparent surfaces

Everything that is reflective is close to impossible to get a good scan of. Maybe in the near future it will be possible, but for now, stay away from surfaces that are reflective or transparent. This is because reflections shift with every camera angle, so the software cannot match the same surface point between photos.

This honey-coated glass jar would be a real challenge to recreate.

Foliage

If you were planning on scanning trees, grass, or bushes, I’ve got some bad news for you. Foliage doesn’t go well with photogrammetry and will, 9.9 times out of 10, produce a blobby mess(h). If you’ve ever explored Google Maps’ 3D view and zoomed into a forested area, you’ll know what I mean. You could get it looking decent in a render if you keep it far away and get a good lighting and angle setup.

Good luck getting all those leaves and twigs covered from all angles!

If you want to bring foliage into 3D, it’s better to pick a few leaves, sticks, and some grass straws, lay them flat on a white background, and take a photo. From there you can create an alpha card and use that for your trees and ground.
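If you want to script that alpha card material, here’s a minimal bpy sketch, assuming your flattened leaf photo (with a transparent background) is saved as leaves.png next to the .blend file. The file name and node defaults are assumptions:

    import bpy

    # Create a new material driven by the leaf photo's color and alpha.
    mat = bpy.data.materials.new(name="LeafCard")
    mat.use_nodes = True
    nodes = mat.node_tree.nodes
    links = mat.node_tree.links

    tex = nodes.new("ShaderNodeTexImage")
    tex.image = bpy.data.images.load("//leaves.png")  # hypothetical path

    bsdf = nodes["Principled BSDF"]  # default node in a new material
    links.new(tex.outputs["Color"], bsdf.inputs["Base Color"])
    links.new(tex.outputs["Alpha"], bsdf.inputs["Alpha"])

    # Older Blender versions also need the EEVEE blend mode set for alpha:
    if hasattr(mat, "blend_method"):
        mat.blend_method = 'CLIP'

Assign the material to a simple plane and you have a leaf card ready for scattering.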

Lighting

You just woke up, the sun is shining, it’s warm outside, and the birds are singing. It’s a great day for some photogrammetry, you think to yourself. Except it isn’t. Having the sun out means there will be harsh shadows everywhere. Photogrammetry software struggles to differentiate a dark shadow from a dark surface; it might interpret the shadow as geometry, producing a model that isn’t physically accurate. And even if it does recognize it as a shadow, you will end up with a model whose shadows are permanently baked into the texture.

It’s a perfect day for everything but photogrammetry.

There are ways to de-shadow an image, like raising the shadow brightness in Lightroom/Camera RAW, but that will also raise the noise in those areas. You could mask/paint out the shadows manually too, but that’s a lot of work.

The best way to avoid all this extra work is to take your photos on a cloudy day. The clouds act as a diffusion filter and create soft, even shadows (much like ambient occlusion), making the deshadowing process a lot simpler and quicker.

Complex shapes & forms

You can only scan what your camera sees. Unfortunately, there’s no way for photogrammetry software to guess what’s on the other side of an object. This means that if you plan to scan something complex that has a lot of weird shapes or if the object is so big that you cannot scan it from the top, you will get some weird mesh artifacts. The only way to fix this is by creating that additional geometry in Blender yourself.

A nightmare scenario for photogrammetry, not impossible, just very very hard. Sometimes you’re better off recreating it in 3D directly.

How you take the photos

Last but not least, the final limitation of photogrammetry is the actual process of taking photos. A thought that has probably already crossed your mind is to place your desired object on a lazy Susan (also known as a turntable) and just take a bunch of stationary photos while it’s spinning. This method doesn’t work* because photogrammetry software calculates camera positions using features from both the object and the surrounding environment. If you keep your camera stationary, the static background contradicts the spinning object, the software struggles to get accurate data, and it will most likely leave you with a bad mesh.

*This is actually possible, but it requires tons of extra work and isn’t exactly beginner-friendly, hence why it won’t be included in this guide.

Let’s begin

Since this is a beginner’s tutorial on photogrammetry with Blender, I recommend you start with something simple, like a rock. Rocks are easy to get into 3D because they can easily be photographed from every angle, and their organic shape helps hide any imperfections in the mesh. So let’s head out and find a decently sized rock.

For the sake of the tutorial, I will be doing photogrammetry of an old tree that I found on a walk a few months ago. I had my DSLR with me this time, so the photos are high quality, but remember that it’s not a necessity.

Photographing for photogrammetry

Now that you’ve found your rock of choice, it’s time to get your smartphone (or DSLR) out and start grabbing photos. Since we will be using the photos to create the texture, it’s really important to shoot with manual settings; auto mode will change the exposure for each photo and cause inconsistencies in the texture. If you do not have a manual mode on your phone, that’s okay: we can use software like Lightroom or Photoshop to manually adjust the exposure of each shot afterwards. It’s not ideal, but sometimes that’s the only solution.

Start by orbiting your rock: take a photo, take a step to the right, and take another. Repeat this until you’ve gone one full lap around. Each lap should be from a different angle; we do this so we get photos of everything. See my amazing gif below to see how I did it for the tree.

Stop motion or photogrammetry?

If you take photos like I showed above, you will get a decent model; it won’t look super detailed up close, but it will look good enough in a general render. If you want more detail and sharper textures, you will need to get closer and take even more photos. A general rule to keep in mind is that the more photos you take (from different angles, of course), the better the final result will be. Just be aware that producing high-quality models takes time; expect at least an hour or so for the mesh to be created in Meshroom.

Small touch ups

Before you import your photos into Meshroom, I recommend doing two things:

  1. Make sure the exposure is the same on all images (or at least very close to it)
  2. Use a deshadowing tool (Lightroom or Photoshop Camera RAW work nicely)

Even if you shoot on a cloudy day there will be some shadows, and doing some minor deshadowing gives us an even better texture in the end, free of baked shadows. The tree below was shot on a cloudy day, yet there are still visible shadows. Is it worth spending even more time removing the shadows completely? I’d say no, but if you want something 110% perfect, go for it, although I am not sure how many people will notice in the end.

The first photo is the raw photo from the camera; the second is the “deshadowed” version.
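If you’d rather script the exposure matching than click through Lightroom, here’s a rough Python sketch that scales every photo in a folder to a common mean brightness. It’s a crude linear adjustment, not a true raw-level exposure correction, and the folder names are assumptions:

    import glob
    import os

    import numpy as np
    from PIL import Image

    paths = sorted(glob.glob("photos/*.jpg"))  # hypothetical input folder
    images = [np.asarray(Image.open(p), dtype=np.float32) for p in paths]

    # Use the average brightness across the whole set as the target.
    target = np.mean([im.mean() for im in images])

    os.makedirs("photos_even", exist_ok=True)
    for path, im in zip(paths, images):
        scaled = np.clip(im * (target / im.mean()), 0, 255).astype(np.uint8)
        Image.fromarray(scaled).save(os.path.join("photos_even", os.path.basename(path)))

Working on the raw files in Lightroom will still give cleaner results, but this gets the set consistent enough for texturing.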

Getting started with Meshroom

It’s finally time for what you’ve all been waiting for: creating the actual mesh. Open up Meshroom; on the left-hand side of the screen you’ll find a tab called “Images”. Drag and drop your photos here.

You can also import photos by going to File > Import Images

Since we’re not doing anything complex, we don’t need to change any settings just yet, so press the green button labeled “Start” at the top (make sure to save the project to a folder of your choice first). This will start the whole photogrammetry process and can take anything from a couple of dozen minutes to several hours, depending on your hardware, file sizes, and the number of photos you took.
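If you prefer scripts over buttons, recent Meshroom releases also ship a command-line binary that runs the same default pipeline. A hedged sketch, assuming the binary is on your PATH (older versions name it meshroom_photogrammetry instead of meshroom_batch, and flags can differ between releases):

    import subprocess

    # Run Meshroom's default photogrammetry pipeline headlessly.
    subprocess.run([
        "meshroom_batch",           # "meshroom_photogrammetry" on older builds
        "--input", "photos_even",   # folder of exposure-matched photos
        "--output", "tree_scan",    # where the textured mesh will end up
    ], check=True)

The processing time is the same either way; the CLI is just handy for batch jobs.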

You could always start with fewer photos and see what that gets you: say you took a total of 100 photos, upload 50 and check if the result is good enough for you. The downside to this strategy is that you’ll have to wait even longer overall if you’re not satisfied with the first result.

There are tons of settings you can change in the graph editor, but for now, we will just keep it simple and see where it gets us. Stock settings do work fine most of the time.

Green means a node is completed, orange is the node currently being processed, and blue is unprocessed.

Previewing the model

After some time, you will see a point cloud appear in the 3D view. This is not the finished model, but a point-cloud preview of the reconstruction. It will probably be upside down, so orient the camera to see everything better. If you are lacking points in a certain spot, that could mean there’s an issue with that area; the only way to find out is to see the full mesh.

The point cloud for my big old tree.

Exporting the mesh

After a little more waiting, you will finally see the finished textured mesh. There are really only two scenarios at this point: either you get a mesh and texture you are happy with, or you don’t. If you’re not happy with the mesh, for example if it didn’t recreate some of the parts correctly, my only advice is to go back and take more photos. If you’re lucky and the lighting conditions are the same, you can just add these photos to the existing project; if not, open up a new project and do it all over again.

Can’t see any mesh in the 3D viewer? Sometimes you need to double-click the node called “Texturing”, which will bring it into the 3D viewer for you. If the mesh looks good in the 3D viewer, you can export it by clicking on the Texturing node and finding the mesh folder location. Paste that path into your file explorer and open the model up in Blender.
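If you’d rather not hunt for that folder by hand, the Texturing node writes into the project’s MeshroomCache directory by default. A small sketch to locate the OBJ, assuming a default project layout (the cache structure can vary between Meshroom versions, so treat the path pattern as an assumption):

    import glob

    # Find every textured mesh Meshroom has produced for this project.
    matches = glob.glob("MeshroomCache/Texturing/*/texturedMesh.obj")
    print(matches)  # open the newest one in Blender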

The finished mesh and texture look good overall. Is it perfect? Certainly not, but good enough for me.

Technically speaking, you are done. You successfully photo-scanned a real-life object and brought it into 3D. But what you have right now is a highly unoptimized model and texture that will be slow and inconvenient to work with. Just for fun, let’s import it into Blender and see why it’s such a hassle to work with:

The imported model inside of Blender, with a point light above it to highlight details.
  • The import to Blender took several minutes
  • The poly count is usually in the millions
  • The textures are split into several separate materials
  • UVs all over the place
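Here’s a quick bpy sketch of that import, with a couple of prints showing why the raw scan is heavy. The OBJ importer operator was renamed in Blender 4.x, hence the fallback, and the file path is an assumption:

    import bpy

    path = "tree_scan/texturedMesh.obj"  # hypothetical output path
    try:
        bpy.ops.wm.obj_import(filepath=path)      # Blender 4.x importer
    except AttributeError:
        bpy.ops.import_scene.obj(filepath=path)   # Blender 2.8x-3.x importer

    obj = bpy.context.selected_objects[0]
    print("polygons:", len(obj.data.polygons))    # usually in the millions
    print("materials:", len(obj.data.materials))  # often several per scan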

So if you’re actually going to use it for environment scenes, animations, or even games, I suggest you keep reading. If not, you can stop reading here and explore your model in more detail in Blender. Good job by the way!

Polishing the mesh & texture

There are several ways of cleaning up the mesh and texture; I will show you the way I do it. To give you an idea of what it’s all about: we bake the details of the high poly mesh into a normal map, which is then applied to a low poly version of the tree.

Mesh clean up

We’ll start off by cleaning up the mesh of our imported model, because there’s a lot of noise in the actual geometry; if we baked the normal map from a noisy mesh, we would see that noise on the low poly as well. If you scanned something organic, or an object that doesn’t have any perfectly flat faces, this is usually not an issue. But since my tree has been cut, it has two straight, flat sides I need to smooth out.

It looks bumpy and unclean, time to fix this!

Let’s jump into Sculpt Mode and start smoothing out the noisy area. I’d suggest starting with a low strength setting and working your way up until you’re satisfied; it’s better to do too little than too much in this case. I find that the Smooth brush does the job well enough.

Here it is in the same light, but after smoothing. Looks much better.

This is also the time to delete any unwanted geometry, clean up the ground around your object, and so on. I deleted the cloud-looking floater above the tree and removed some vertices underneath the ground so I have a clean, flat plane underneath.

Creating a low poly mesh

We’re happy with how our high poly mesh turned out, so it’s now time to create a low poly version of this tree. We can do this manually or automatically by using a modifier called “Decimate”. Since I don’t really care about the topology in this case, the Decimate modifier will work great. Something worth keeping an eye on when decimating your mesh is the smoothing of the normals: if you go too far it will create shading issues, so I suggest sticking to a somewhat higher polycount to avoid this. If you’re still having shading issues, there are a few things we can try.

500K > 5K. It does look decent but there are some shading issues.
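For reference, here’s the same decimation step as a small bpy sketch, assuming the high poly scan is the active object. The 0.01 ratio (roughly 500K to 5K faces) is just a starting point to tweak:

    import bpy

    obj = bpy.context.active_object
    dec = obj.modifiers.new(name="Decimate", type='DECIMATE')
    dec.ratio = 0.01  # keep ~1% of the faces; raise this if shading breaks

    bpy.ops.object.modifier_apply(modifier=dec.name)
    print("faces after decimate:", len(obj.data.polygons))

Duplicate the object first so you keep the untouched high poly around for baking.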

If you have something that looks similar to my tree above, you could add a “Weighted Normal” modifier and play around with the settings. It probably won’t fix everything, but it might just be enough.
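The same fix as a script: a Weighted Normal modifier on the decimated mesh. Face-area weighting with smooth shading is a sensible default, but the exact settings are assumptions to play with:

    import bpy

    obj = bpy.context.active_object
    bpy.ops.object.shade_smooth()

    wn = obj.modifiers.new(name="WeightedNormal", type='WEIGHTED_NORMAL')
    wn.mode = 'FACE_AREA'  # weight normals by face size
    wn.keep_sharp = True   # preserve edges marked sharp

    # Blender 4.0 and older also need auto smooth enabled for this modifier:
    if hasattr(obj.data, "use_auto_smooth"):
        obj.data.use_auto_smooth = True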

Another thing to note is that the UVs of the low poly model will be a mess after the Decimate modifier. This is not an issue, however, since we will unwrap it and bake the textures from the high poly.

After playing around with the “Weighted Normal” modifier. Good enough!

Export both models, the high poly and the low poly, to a shared folder, because the next step is all about baking the details.
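A sketch of that export step, writing both meshes as FBX (which Substance Painter reads happily). The object names and the bake/ folder are assumptions, and the folder needs to exist first:

    import bpy

    for name, filename in [("tree_high", "tree_high.fbx"),
                           ("tree_low", "tree_low.fbx")]:
        bpy.ops.object.select_all(action='DESELECT')
        bpy.data.objects[name].select_set(True)
        # "//" makes the path relative to the saved .blend file.
        bpy.ops.export_scene.fbx(filepath="//bake/" + filename,
                                 use_selection=True)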

Baking textures

This is really a department where Blender is lacking; the baking workflow is sluggish and feels more like a “hack” than a feature. To solve this, I am actually not going to use Blender but Substance Painter. If you insist on using Blender, there’s a very detailed tutorial on baking here.
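If you do stay in Blender, here’s a hedged outline of the selected-to-active normal bake that the linked tutorial covers in depth. It assumes the low poly already has a material with an Image Texture node selected as the bake target, and that the objects carry the names used above:

    import bpy

    scene = bpy.context.scene
    scene.render.engine = 'CYCLES'  # baking requires Cycles

    high = bpy.data.objects["tree_high"]  # hypothetical names
    low = bpy.data.objects["tree_low"]

    bpy.ops.object.select_all(action='DESELECT')
    high.select_set(True)
    low.select_set(True)
    bpy.context.view_layer.objects.active = low  # bake high onto low

    bpy.ops.object.bake(type='NORMAL', use_selected_to_active=True,
                        cage_extrusion=0.02)  # extrusion value is a guess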

In Substance Painter, create a new project, choose your low poly model, and uncheck “auto-unwrap”. After it’s loaded up, go to Edit > Bake Mesh Maps, choose 4K as your output size, load your high poly mesh, and press “Bake selected textures”. This might take a while, but I’d rather wait than fiddle around with Blender trying to bake.
