Ideas for better textures

Good evening Revopoint team & friends,

I’m opening a new topic because textures are very important, and they lag behind the :100: surface quality that the Revopoint products can achieve.

I already voiced some concerns in the topic “Better camera for better textures”, but since that title called for a better camera, and we have already established that the camera is not the issue, I’ve created this new topic to see if we can figure out ways of helping the team improve the texture-making software.

I’d like to start by saying: I love the scanners I have. The surface quality is amazing, and I find them intuitive and easy to use. I am also very interested in acquiring the upcoming Range. But the textures are still lacking, and I’m hoping we’ll see them improve over time (hopefully soon!).

That being said, I’d like to get some feedback from the team and other users, and contribute ideas to find better ways of making textures.

Today I had an idea. Something I would like to try.

Would it be possible to have Revo Scan record a video while scanning and save it in an accessible folder?

I’ve been experimenting with photogrammetry in Meshroom using frames extracted from video. The results I’ve gotten, considering that the video was taken with my Motorola phone, are not bad, but only because of the textures. If you take a look at the screenshots, you’ll see that the mesh is awful, but once the high-res textures are applied, the quality becomes quite good.
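(For anyone who wants to try the same thing, here is a minimal sketch of the frame-extraction step. It only builds the ffmpeg command line; the file names are placeholders, and it assumes you have ffmpeg installed.)

```python
def ffmpeg_frame_command(video_path, out_pattern, fps=2):
    """Build an ffmpeg command line that extracts `fps` frames per
    second from `video_path` into numbered image files."""
    return [
        "ffmpeg",
        "-i", video_path,      # input video from the phone
        "-vf", f"fps={fps}",   # keep only `fps` frames per second
        "-qscale:v", "2",      # high JPEG quality, good for texturing
        out_pattern,           # e.g. frames/frame_%05d.jpg
    ]

# Build the command; pass it to subprocess.run() if ffmpeg is installed.
cmd = ffmpeg_frame_command("scan_video.mp4", "frames/frame_%05d.jpg", fps=2)
print(" ".join(cmd))
```

Around 2 frames per second is plenty for a slow turntable pass; more frames mostly means longer Meshroom compute times.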

It’s possible to feed a custom mesh to the Meshroom texturing algorithm, as long as the mesh is in the same position as the calculated point cloud.

So here’s my idea:

It would be worth trying to import the :100: mesh scanned with Revopoint, align it in the correct position (with CC), and then feed said mesh to the Meshroom algorithm to project the textures onto it.

If it were possible to access the Revo Scan video, we could use it to extract frames to make a point cloud in Meshroom, then align the Revopoint mesh to the Meshroom point cloud (it will be a 100% match), and then project the textures.
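To make the alignment step concrete: getting the Revopoint mesh onto the Meshroom point cloud amounts to applying a similarity transform (uniform scale, rotation, translation) to the vertices. A tiny Python sketch, with made-up numbers purely for illustration:

```python
import math

def apply_similarity(verts, scale, yaw_deg, translation):
    """Apply uniform scale, a rotation about the Z axis, and a
    translation to a list of (x, y, z) vertices, in that order."""
    a = math.radians(yaw_deg)
    c, s = math.cos(a), math.sin(a)
    tx, ty, tz = translation
    out = []
    for x, y, z in verts:
        x, y, z = scale * x, scale * y, scale * z   # scale first
        x, y = c * x - s * y, s * x + c * y         # then rotate about Z
        out.append((x + tx, y + ty, z + tz))        # then translate
    return out

# Hypothetical example: the scanner exports in millimetres, while the
# photogrammetry cloud is 1/1000 the size, rotated 90° and shifted.
aligned = apply_similarity([(100.0, 0.0, 0.0)], 0.001, 90.0, (0.5, 0.0, 0.0))
print(aligned)  # ≈ [(0.5, 0.1, 0.0)]
```

In practice you would let a tool like CloudCompare estimate the transform from picked point pairs rather than typing the numbers in by hand.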

What do you guys think? Is it doable? Has someone tried it?

Thank you in advance!

Fascinating approach, and I think it could work.

Access to the internals (via an SDK) would be required, at a minimum, but I expect that Revo Scan would have to be augmented to save the Color Camera images as video in order for it to work.

As things stand now, I don’t think the tools exist to do what you are proposing.


Hi Richard, I did this already, but not using video: I just decoded the frames from Revo Scan, imported them to Meshroom, and processed them into a model. But there was no match; the depth frames do not have the same dimensions, as the RGB camera sees only half of what the depth frames capture. Too much hassle… I would rather just capture the pictures with my DSLR and use Meshroom instead of doing double work.

They could use photogrammetry-based texture mapping in Revo Scan if they wanted, so you could use your own DSLR or phone to capture super-sharp photos for the textures; that would be the best solution. You scan your model, then you capture your pictures every 10 degrees, and the job is done without leaving Revo Scan… Wishful thinking…
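The “every 10 degrees” plan is easy to quantify. A small sketch of the capture schedule (the idea of shooting several camera heights, or “rings”, is my own addition, not anything Revo Scan does):

```python
def capture_plan(step_deg=10, rings=1):
    """One (ring, angle) stop per photo: `rings` camera heights,
    one photo every `step_deg` degrees of turntable rotation."""
    if 360 % step_deg != 0:
        raise ValueError("step_deg must divide 360 evenly")
    return [(r, a) for r in range(rings) for a in range(0, 360, step_deg)]

plan = capture_plan(step_deg=10, rings=1)
print(len(plan))  # 36 photos for a single pass at 10-degree steps
```

So a single pass is only 36 photos, and even three rings is just over a hundred, which is well within what Meshroom handles comfortably.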


Look @RichardWren, there is hope for somewhat better textures… You use the POP 2, right? I will test it tomorrow; this one is the MINI.


Quick test with POP2

The fragment is only 7 cm tall.


Thank you @PopUpTheVolume. I’m using the POP 2 most of the time, but I also have a MINI.
Those textures look very decent :clap:
How? This is exciting news!

I was just testing out the updated software; the POP 2 texture mapping seems to have improved by 200%.
I’ve never seen text so clear before at a 17 cm scanning distance.


Kudos to the devs! This looks very promising :star_struck:
Can’t wait to try it!


Hello friends,

I tried the new update to check the improvement of the textures. Below is what I’ve observed.

For this experiment I scanned a new scone (this time chocolate chip) to compare with the berry scones from my other post about textures. This is the result:

As a first impression I’d say the textures look much better, and they are also a bit sharper, so really, kudos to the devs for their hard work!

However, there is still a lot of room for improvement. Even if the textures are better, they are still not suited for professional work.

Here’s another experiment I did:

I tried exchanging the photogrammetry mesh with the Revo Scan mesh in Meshroom for texturing. It worked!

The trick is to align the mesh to the exact position, then join the Revo mesh into the photogrammetry mesh so it keeps the same scale, origin, rotation, etc. Then delete the photogrammetry mesh in Edit Mode, and export to the same folder with the same name… done.
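Since the “same folder, same name” export step is easy to get wrong, here is a small stdlib sketch of swapping the aligned export in under the name Meshroom expects, with a backup of the original. The paths are hypothetical placeholders, not Meshroom’s actual cache layout:

```python
import shutil
from pathlib import Path

def swap_mesh(aligned_obj, meshroom_mesh):
    """Back up Meshroom's own mesh, then copy the aligned Revo export
    over it (same folder, same name) so the texturing step uses it."""
    meshroom_mesh = Path(meshroom_mesh)
    backup = meshroom_mesh.with_name(meshroom_mesh.name + ".bak")
    shutil.copy2(meshroom_mesh, backup)       # keep the original safe
    shutil.copy2(aligned_obj, meshroom_mesh)  # drop in the Revo mesh
    return backup
```

Keeping the backup means you can restore Meshroom’s own mesh and re-run the texturing to compare both results side by side.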

Below you can see comparison photos:

The Revo Scan mesh is seriously :100: compared to my photogrammetry mesh (pre-smoothing), although the photogrammetry captured the crevices much better.

But the photogrammetry textures, as expected, are much sharper than the scanned ones (although they’re getting there!).

Closing thoughts: the devs should consider implementing PUTV’s idea of allowing custom input of high-res images. Until then, the method of using Meshroom for texturing works very well. It does take a long time to calculate, depending on the number of images one feeds the algorithm. I’ll run another test with fewer images to document whether it’s worth the trouble.

Thanks for reading!


Impressive results.

How much time did you put into replacing the texture?


Taking around 300 photos for the photogrammetry using the step function of the HTML turntable by @SphaeroX and @eXplOiD took less than 20 minutes.

I left Meshroom working at high resolution overnight (not sure how long it took, probably over three hours), and then spent another 20 minutes or so the next day importing, aligning, and calculating the textures.

Now I’m testing using around 30 photos only. I’ll keep you posted on how that one goes!


Great job, Richard,
I’m looking forward to trying this out and seeing what your future experimentation brings as well


Hi Richard, great idea with the re-texturing using Meshroom.

I also made some more tests with the POP 2 the regular way, and I am happy with it, as it has improved so much compared to what it was before.

Object size: 60 mm / 2.5 in
Scanned with: POP 2
Scanning time: 70 seconds
3D unbiased renders: Substance Painter


Here are unofficial beta Range 3D Scanner color scans, 8 in / 200 mm.
It still needs improvement to get to POP 2 level; however, when you scan big objects this really isn’t a huge problem anymore, as you can’t put more data into 4K textures anyway. The Range needs at least 8K for its full potential.

Range 3D Scanner


These scans look incredible! I may need to get my hands on a POP 2 to supplement my MINI. It seems like you do most of your scanning with the POP 2.


For color scans the POP 2 is the winner right now; the MINI can’t easily capture all colors, so it gets problematic… the POP 2 is universal.

Hopefully the Range scanner’s texture quality gets better with time as well.
