Create textures for Revopoint scans using photogrammetry

By using RealityCapture, an awesome and now free photogrammetry software, we can create stunning textures for our scanned meshes using high-quality images from external devices. The texturing algorithms also give us much more control over the generated textures, from a single 240-pixel texture up to multiple 32k textures. I made a YouTube tutorial on the workflow I use for this. Give the video a like if you found it useful, leave some feedback in a comment, follow so you don't miss the other tutorials I will publish over time, and check my channel for more.


Thanks for sharing, I will check it out later today.

It's still a lot of work because we need to align both data sets manually in Blender, but I requested a marker detection feature for Revo Scan that will let us skip the whole process in the future, so you can do it like here:

I also have my own way to do that, actually two ways, one with manual texturing.
Always great to have more possibilities and options.

Do you have any documentation on this? I'd love to see other possibilities as well.

Not really documentation, I have been doing it for over 25 years for my work.


Do you mind sharing your ways? I would love to know if there is an easier way than manual alignment. With manual texturing, do you mean something like Substance Painter and Quixel, or something like texture paint / project from view in Blender, for example? I once tried to create an Alembic from photogrammetry, export it with cameras into Blender and assign the images to project them from view, but it turned out to be even more work-intensive.

Yes, it is very work-intensive, no easy fix of just loading images and done.
Lots of handwork, and time-consuming.

The only easy way is to scan the same object with both photogrammetry and the Revopoint scanner. Once the photogrammetry model is done, export it, align the Revo Scan model exactly to its position, import the Revo Scan model back into your photogrammetry program, and run re-texturing to get the best of both worlds.
There is no easier way.
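For anyone curious what that alignment step does mathematically: here is a minimal numpy sketch (purely illustrative, not code from any of the tools mentioned; `rigid_align` is a made-up name) of the Umeyama/Kabsch least-squares fit that registration tools conceptually perform when you pick matching point pairs on the two meshes:

```python
import numpy as np

def rigid_align(source, target):
    """Estimate rotation R, translation t and uniform scale s so that
    s * R @ p + t maps each source point p onto its target point
    (least-squares, Umeyama/Kabsch method). Points are rows."""
    src_mean = source.mean(axis=0)
    tgt_mean = target.mean(axis=0)
    src_c = source - src_mean
    tgt_c = target - tgt_mean
    # Cross-covariance between the centred point sets
    H = src_c.T @ tgt_c
    U, S, Vt = np.linalg.svd(H)
    # Reflection guard: force a proper rotation (det = +1)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    D = np.diag([1.0, 1.0, d])
    R = Vt.T @ D @ U.T
    # Uniform scale handles the size mismatch between scan and photogrammetry
    s = np.trace(np.diag(S) @ D) / (src_c ** 2).sum()
    t = tgt_mean - s * R @ src_mean
    return R, t, s
```

Point-pair registration in alignment tools works essentially like this: estimate scale, rotation and translation from a handful of corresponding points, then apply the transform to the whole scan.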

So exactly what I do in the tutorial… create a low-poly model in photogrammetry for registration of the scan and import the transformed scan into the photogrammetry software for texturing. What I don't show in the tutorial: after texturing I import the model back into Blender and reverse the scaling so it's back to its original size (it wasn't necessary here because the photogrammetry model was scaled exactly). Where do you align the meshes? I heard that CloudCompare has some powerful tools for that as well, but I never got around to trying it :smiley:
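For the "reverse the scaling" step, the math is just inverting the similarity transform that was applied during registration. A tiny numpy sketch, illustrative only (`invert_similarity` is a hypothetical helper name):

```python
import numpy as np

def invert_similarity(R, t, s):
    """Invert the transform x' = s * R @ x + t, so the textured model
    can be put back at the scan's original size and position."""
    R_inv = R.T            # a rotation's inverse is its transpose
    s_inv = 1.0 / s        # undo the uniform scale
    t_inv = -s_inv * R_inv @ t
    return R_inv, t_inv, s_inv
```

Applying the inverse transform to all vertices restores the scan's original coordinates, which matters when the scan's real-world scale is more trustworthy than the photogrammetry one.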

Yes, the same standard workflow.
CloudCompare is very powerful, but for aligning the mesh you can use any 3D modeling / editing software.
It is not rocket science if you are good at modeling.

The best part of this is that you can actually remesh the Revo Scan model and create proper UVs beforehand, so the results are great.
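Since UVs keep coming up: a UV coordinate is nothing more than a lookup into the texture image. A minimal sketch of what happens conceptually (nearest-neighbour only, no filtering; `sample_texture` is a made-up name for illustration):

```python
import numpy as np

def sample_texture(texture, uv):
    """Look up the texel a UV coordinate points at.
    texture: (H, W, 3) array; uv: (u, v) in [0, 1], with v measured
    from the bottom edge as most 3D tools do."""
    h, w = texture.shape[:2]
    u, v = uv
    x = min(int(u * w), w - 1)
    y = min(int((1.0 - v) * h), h - 1)  # flip v: image rows start at the top
    return texture[y, x]
```

This is why clean UVs matter so much: they decide which pixels of the generated texture land on which triangles, regardless of how the texture itself was produced.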

I do texturing work mostly for PBR rendering, so typical photo textures are not very usable because of the shadow and light data that need to be removed before processing, so it's still a lot of work.

I know what you are talking about, haha. For outdoor photogrammetry I always hope for days with overcast weather and nice diffuse lighting. But tomorrow my Godox AR400 will finally arrive with a cross-polarisation filter, plus one on the DSLR too (I'm already polarising my photogrammetry). I did the same with the MIRACO and some sticky polarising film, like you suggested elsewhere too, as I just recently saw.

That is great! I can't do my work without cross-polarization, it saves time and effort, and I also use it on my DSLR. I came up with the idea of sticking the film on the MIRACO because I use the film on my LED panels, and it worked beautifully. At least I'm getting nice scans with some proper color references for future processing, which is a step forward.

Exactly the same with me. I noticed so many reflections in my MIRACO textures, and the resulting overexposed/underexposed areas, that I came up with the same thought: use some leftover polarising film I had from covering my LED panels for photogrammetry and put it on the MIRACO. It appears we have pretty much the same mindsets, haha.


Yes we do, great minds think alike, my friend!

I don’t know if I understood it correctly. You do a 3D scan and then do the whole thing again with photogrammetry. Then you take the UV maps from the photogrammetry and place them on the 3D scan, right?

And the advantage is that you basically combine the good 3D model from the scan with the good texture from photogrammetry.

In general you are right, with one small misunderstanding. You don't take the UVs from the photogrammetry model, nor do you reproject its texture. You can create a new set of UV maps for your scanned model that fits your needs, from a single 500×500-pixel map up to several 32k UV maps, in RealityCapture. But you can also create the UV mapping for the scan in other software: keep the existing one from Revo Scan, or use something like Blender to set up clean UV mapping. UV maps only determine which part of a texture image is placed on which location of the mesh. We then generate a fresh texture for the scan from the images we took. This gives us the opportunity to use a separate set of images for the texture, which can be pushed to extraordinary detail. You can either use just a bunch of mobile phone images or go to extremes and use a monster like this

and shoot in RAW, post-processing the images before texturing in e.g. Lightroom. We need to calculate a photogrammetry model only to have a reference for where the images are located in the three-dimensional space of RealityCapture, so that when we import the scan, the images face the correct parts of the mesh when we project them onto it (i.e. create a texture).
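To make the "images face the correct parts of the mesh" idea concrete: each solved camera is just a pose plus intrinsics, and texturing projects mesh points through it into the photos. A minimal pinhole-camera sketch in numpy (illustrative only, not RealityCapture's actual code; `project_point` is a made-up name):

```python
import numpy as np

def project_point(X, K, R, t):
    """Project a world-space mesh point X into an image whose camera
    pose (R, t) the photogrammetry solve provided. K is the 3x3
    intrinsics matrix. Returns pixel coordinates (u, v)."""
    x_cam = R @ X + t            # world -> camera coordinates
    x_img = K @ x_cam            # camera -> homogeneous image coordinates
    return x_img[:2] / x_img[2]  # perspective divide -> pixel (u, v)
```

This is why the alignment has to be exact: once the scan sits in the same coordinate frame as the solved cameras, every photo pixel can be mapped onto the right spot of the scan's surface.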


I don't think I can add more to that reply.

But in short… the photogrammetry model is just a dummy in this case, providing the space coordinates for our scan to be aligned correctly in 3D space and replace the dummy.

Simple mobile phone photos can still provide higher quality and resolution textures than the scanner, and on top you can have your own custom UVs and clean topology… magic :magic_wand:


Amazing work here and a very informative thread. A lot to unpack and test out. Thanks everyone for all the transparency and effort you are putting into making this community the best it can be!


I can also recommend a simpler solution using Meshlab. You take a few pictures of your object at about 30° rotations and align these images directly onto your model.
There's a quite old tutorial of mine showcasing the whole process, but it's still relevant, so you could give it a go:


This might be an option too, but I feel like I don't want to assign each image to the mesh manually :smiley: What will speed up my workflow is something I totally missed out on: the alignment feature in CloudCompare. The whole manual alignment in Blender can be skipped by registering the scan and the photogrammetry mesh in CloudCompare. But I think everyone has the workflows he likes best. As I come from photogrammetry, I prefer working in the environment I'm used to.