I’ve decided to start a topic showcasing the scans I did and the scans I plan to do. I’m gathering everything in one topic because it’s more convenient, I hope that’s okay!
The reason I’m back is that I saw the results of the new POP 3 and got very interested in trying it out. You might have seen me complaining a lot about the textures of POP 2, and now I’m eager to test the new capabilities of the POP 3!
For my work as a 3D asset creator I use photogrammetry all the time to make my models. But photogrammetry takes a very long time, because one must take a few hundred photos and then process them in the photogrammetry software (which can sometimes take several hours to compute), so as you can see I was very excited to see how fast scanning with these scanners is! And the mesh is great; the only problem is THE textures!! But that, with a bit of patience, will soon be a problem no more! (It seems that the Revopoint devs listen to our suggestions, so I hope my posts can somehow contribute and encourage them to reach the level of photogrammetry. That would make my job so easy!)
The things that I scan (or plan to scan) are rustic or antique everyday objects, food, and natural elements such as plants, rocks, bark, etc… I also take scans of people (for my 3D character creator), and for that I’m using RANGE. I plan to use MINI to scan jewelry.
I hope you enjoy the posts. Feel free to reach out if you have any comments or questions.
Thank you for reading!
I make low-poly models for video games, real-time web visualization and animations. I use them in my animation projects and also sell them in my shop.
(For those who wonder how it’s done: I take the scans and then make a copy with a very small number of polygons. It looks very boxy and horrible, but then I project the details and the textures from the scans and store that information in texture maps. I combine those textures into a material, and because all the info from the scan is there, the blocky models look almost as realistic as the scan, but they load and render very fast.)
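In code terms, the projection step is basically a nearest-point lookup. Here’s a rough numpy sketch of the idea (a toy brute-force version with my own function names; real bakers like the ones in Blender or Substance ray-cast along the low-poly normals instead):

```python
import numpy as np

def project_vertex_colors(scan_points, scan_colors, lowpoly_verts):
    """For each low-poly vertex, copy the color of the nearest scan point
    (brute-force nearest neighbor; real bakers project along normals)."""
    out = np.empty((len(lowpoly_verts), 3))
    for i, v in enumerate(lowpoly_verts):
        d = np.sum((scan_points - v) ** 2, axis=1)  # squared distances
        out[i] = scan_colors[np.argmin(d)]          # nearest point wins
    return out

# Tiny demo: two scan points, one red and one green.
scan_pts = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
scan_col = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
verts = np.array([[0.1, 0.0, 0.0], [0.9, 0.0, 0.0]])
colors = project_vertex_colors(scan_pts, scan_col, verts)
```

The same lookup works for normals or displacement: whatever attribute the nearest scan point carries gets stored into the low-poly map.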
Richard is doing the same type of work as me, and that is great to see…
@RichardWren thanks for sharing, hopefully we can see more of your new scans with POP 3 and textures… Just make sure you don’t use too much ambient light while scanning with the LED; less is better for a more accurate result. If you want shadowless, specular-free albedos, convert the RGB to textures after the point cloud is meshed. It works great.
Thanks for the tip, Catharina! I tried this last night: I scanned a peach. The geometry was incredible! I can see how much they improved the tracking! But I encountered some problems.
The peach looked darker on the top and bottom when I scanned with the LED light only (maybe because its skin is so fuzzy). I did a second test with the softbox on and the shadows were less obvious. I suppose it must be because of the fuzzy skin of the peach.
But the problem I found was the following: when I tried to merge the point clouds, the overlap was not 100% accurate, and even though I ran the overlapping filter several times, the resulting mesh had ugly artifacts in the parts where the two point clouds overlapped. I suppose Revoscan is still not a good substitute for CC when it comes to merging point clouds. The albedos also got combined, and because of the fuzzy shadows there were random dark spots.
And the last thing I noticed: once I merged the point clouds, we lose the ability to apply textures. I can texture each separate part, but not the final merged cloud. That is something that I missed.
Other than that everything looks very promising! I’ll keep doing tests and post the results when I have a moment.
When you scan things like that, don’t let them be too close to darker areas; always put the object on top of something. I use a tall plexiglass stand to avoid differences in texture. If you put the object directly on a black surface, it will always be darker on the bottom.
When merging 2 objects, make sure the distance of both scans is the same to avoid differences in point pitch (point-to-point distance).
Merging objects with few features is not easy even for CC; it’s always risky.
Some materials, like velvet or silk, will not always look perfect due to the Fresnel effect, where light reflecting at grazing angles changes the color. But that’s easier to fix.
The point color is still a bit meh, but I think that if I crop the meshes before I merge them it could be perfect. It’s a pity that we lose the ability to texture, though… Would it be hard to program? Having the texture option even for merged point clouds, or for imported point clouds, would be awesome!
Feature suggestion for Revo Scan: let us crop the raw point cloud before fusing so it goes faster. I sometimes use a white disk with random shapes on the turntable for better tracking, and it almost doubles the number of points in the point cloud. It’d be nice if we could remove the points from the helpers (or from the weird background/turntable artifacts) before fusing.
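To illustrate what that crop would do, here’s a toy numpy sketch (my own function names, nothing official) that keeps only the points inside a box around the object, dropping the helper/turntable points:

```python
import numpy as np

def crop_aabb(points, lo, hi):
    """Keep only points inside the axis-aligned box [lo, hi] —
    the kind of pre-fuse crop that would drop helper/turntable points."""
    mask = np.all((points >= lo) & (points <= hi), axis=1)
    return points[mask]

pts = np.array([[0.0, 0.0, 0.0],   # object
                [0.1, 0.2, 0.1],   # object
                [2.0, 0.0, 0.0]])  # tracking helper out on the turntable
cropped = crop_aabb(pts, lo=np.array([-0.5] * 3), hi=np.array([0.5] * 3))
```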
Another feature suggestion for Revo Scan: when we merge two point clouds, give us an option to simply align the point clouds with each other without merging, so we can delete parts that are repeated before merging, or export each part and perfect the alignment in another software (or give some control to align it better if the automatic alignment didn’t work well). That’d be a really nice option to have.
Not really; the overlap option is for detecting points in the cloud that overlap so they can be deleted, reducing the amount of noise in the final mesh.
The feature I’m requesting should be part of the merge section. In Cloud Compare it’s called “Clouds Registration”, and it finely registers aligned point clouds, making them match almost perfectly. Also, I’d like the devs to give the option of aligning (and registering) the clouds without merging them automatically, so that, for instance, once aligned, I can crop some overlapping parts that are not needed, or trim the points that have darker colors, etc…
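For anyone curious what that registration actually does: under the hood it’s the ICP algorithm. Here’s a toy point-to-point ICP step in numpy (my own simplified sketch, not what Cloud Compare actually ships):

```python
import numpy as np

def icp_step(src, dst):
    """One fine-registration step: match each source point to its nearest
    destination point, then solve the best-fit rigid transform via SVD
    (the Kabsch method) and apply it to the source cloud."""
    # nearest-neighbour correspondences (brute force)
    d2 = ((src[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
    matched = dst[d2.argmin(axis=1)]
    # best rigid transform src -> matched
    mu_s, mu_m = src.mean(0), matched.mean(0)
    H = (src - mu_s).T @ (matched - mu_m)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # avoid reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = mu_m - R @ mu_s
    return src @ R.T + t

# demo: a translated copy of a small cloud snaps back after a few iterations
dst = np.array([[0.0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]])
src = dst + np.array([0.3, -0.2, 0.1])
for _ in range(5):
    src = icp_step(src, dst)
err = np.abs(src - dst).max()
```

This only converges when the clouds are already roughly aligned, which is why CC (and hopefully Revo Scan one day) does a coarse alignment first and “fine” registration second.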
I know what you’re talking about, Richard, like we have in CC. Right now we can’t see both scans at the same time to see what part to cut off; we’re just eyeballing it.
It is always recommended to cut away the parts that were already scanned and not merge everything, as merging everything can sometimes result in lower accuracy or blurred areas.
Having the 2 scans visible would be a much better guide.
I have a feature request for the turntable controller: allow us to change the rotation from clockwise to counterclockwise, the same way we can change the angle with a slider, so we can switch the turn direction in the middle of a scan.
Today I scanned this nigauri. I feel like POP 3 works much better than POP 2, and the tracking seems a lot better as well. The textures are still not great, but they do seem to have improved from POP 2… I think the LED light was a great addition to POP 3, because it makes the vertex color more even (like a true albedo).
This is not really textures; it’s vertex color. I don’t have textures because I had to merge three scans (top, right and left sides of the nigauri), and I lost the texturing ability in Revoscan. I also had to merge and clean the point clouds in Cloud Compare.
So, the scanner is amazing. It works like a charm. But there are still some features I’d like to have in Revoscan so there’s no need to use other software.
1- To be able to align the clouds in Revoscan without merging them (with merging as a separate button). That way I can, for example, mesh and texture them in Revoscan and then export to ZBrush and project the textures onto the final piece using masking, so the transition between parts is smooth. Or I can trim the overlapping parts before merging the clouds to avoid those dark patches we saw in the peach I showed before.
2- To have a better cloud simplify (similar to the subsample in Cloud Compare) that reduces the points evenly, so I don’t need to use CC.
3- It’d be really great if we could somehow retain the texture mapping after aligning/merging clouds.
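For reference, the “even” simplify in point 2 is what CC calls spatial subsampling: keep one point per cell of a 3D grid. A toy numpy sketch of the idea (not CC’s actual implementation):

```python
import numpy as np

def voxel_subsample(points, cell):
    """Spatially even subsampling: bucket points into cubic cells of side
    `cell` and keep the first point found in each occupied cell."""
    keys = np.floor(points / cell).astype(np.int64)  # integer voxel index
    # np.unique over rows gives one representative index per voxel
    _, idx = np.unique(keys, axis=0, return_index=True)
    return points[np.sort(idx)]

# demo: 10 points spaced 1 apart along x -> 4 survivors at cell = 2.5
pts = np.zeros((10, 3))
pts[:, 0] = np.arange(10, dtype=float)
sub = voxel_subsample(pts, cell=2.5)
```

Unlike a naive “keep every Nth point” decimation, this keeps the density uniform regardless of how the scan overlaps itself.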
That’s today’s observation. I apologize if I’m not making much sense… my brain is ultra foggy.
Great details, Richard. If you create a normal map and use it with the albedo, it will look great in PBR rendering. You will need to convert the RGB to textures after you create the UVs in a separate program. I actually prefer the color per vertex; it’s just perfect in combination with normal and displacement maps.
You can also bake all the shadows from displacement or normal maps into the albedo, so many possibilities.
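That baking idea boils down to multiplying simple shading into the color map. A toy numpy sketch (my own example, assuming a per-texel normal map and plain Lambert shading):

```python
import numpy as np

def bake_lighting(albedo, normals, light_dir):
    """Multiply Lambert shading computed from a normal map into the albedo,
    'baking in' the shadows (toy sketch; real bakers also handle AO etc.)."""
    l = np.asarray(light_dir, dtype=float)
    l /= np.linalg.norm(l)                      # normalize light direction
    shade = np.clip(normals @ l, 0.0, 1.0)      # n·l per texel
    return albedo * shade[..., None]            # darken color by shading

# 2 texels: one facing the light, one facing away
albedo = np.array([[1.0, 0.5, 0.2], [1.0, 0.5, 0.2]])
normals = np.array([[0.0, 0.0, 1.0], [0.0, 0.0, -1.0]])
baked = bake_lighting(albedo, normals, light_dir=[0.0, 0.0, 1.0])
```

Note the trade-off: once shadows are baked into the albedo they no longer react to the lights in the scene, which is why PBR workflows usually keep the albedo flat and let the renderer do the shading.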
I don’t know what size the object you scanned was, but the details of the mesh are great.
So how can you texture this merged model now? Well, there is one trick: scan the same object from all sides in one session, then use that scan to merge with the other sides you scanned, making sure the first object is selected as 1. After you merge the objects, you can replace the mesh with the first scan in the project directory after export; then, after you reload the project, the merged mesh will be visible as the first scan and you can texture it. Just make sure the merged object is in the same position in space and has normals if you edited it outside Revoscan.
Yes, exactly. Just scan the object from all sides so you can capture all the needed textures. That object will be the first one selected for the merge as well, so you lock in the position for the merged scan and it can be replaced with the mesh later and re-textured. When I have time I need to make a tutorial on that… it is easy once you know what to do.