Revoscan Flayer (Frame Player)

It’s clean

What AV are you using?
I tried it on my PC and Defender blocked it the first time; my solution was to open Defender, allow the app, and download it again.

You can run the Python script; it is the same as the executable, which is just the script packed with auto-py-to-exe.

As long as you have Python installed, it is just a matter of opening your command line and installing the required dependencies with this single line:

pip install opencv-python numpy

(os and tkinter are part of the Python standard library, so they don't need to be installed with pip; cv2 comes from the opencv-python package)

Then you can drop revoscan_frame_player.py on your desktop and just double-click it.


My Windows Defender had no issues with it …

Yeah, really weird: normally when Defender says "virus detected", at least some of the scanners on VirusTotal detect some virus signature, which wasn't the case here at all.
Happily there is usually a way around it. Cheers


Looks like it is a known issue with PyInstaller.


And I can now display the depth image. Not sure if it is different for other scanners, but this works for the POP3 scan cache.
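Roughly, the idea is this (a simplified sketch, not the actual Flayer code, assuming the .dph is a raw, header-less buffer of little-endian 16-bit depth values at 640x400, which is what the POP3 frames appear to use; the file name is just a placeholder):

```python
import numpy as np
import cv2

# Assumed layout of a POP3 cache .dph file: raw little-endian uint16 depth
# values, 640x400, no header. Treat these as assumptions, not a spec.
WIDTH, HEIGHT = 640, 400

def load_dph(path):
    """Read a raw .dph depth buffer into a (HEIGHT, WIDTH) uint16 array."""
    return np.fromfile(path, dtype=np.uint16).reshape(HEIGHT, WIDTH)

def depth_to_grayscale(depth):
    """Normalize raw depth to 0-255 so it can be displayed as an 8-bit image."""
    valid = depth[depth > 0]
    lo, hi = (valid.min(), valid.max()) if valid.size else (0, 1)
    norm = np.clip((depth.astype(np.float32) - lo) / max(hi - lo, 1), 0, 1)
    return (norm * 255).astype(np.uint8)

if __name__ == "__main__":
    # Placeholder file name; point it at any .dph from the scan cache.
    cv2.imshow("depth", depth_to_grayscale(load_dph("frame_000_0000.dph")))
    cv2.waitKey(0)
```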


:heart_eyes:




Here is a proof-of-concept pointcloud view of the .dph file

_revoscan_flayer_dph_as_pointcloud


And the exported pointcloud for the rebuilt frame_000_0338


How do you do that? :astonished: What software did you use? I wonder what your background is. Are you an engineer? I also didn't understand how you turn those dph files into greyscale images.

Thx a lot for sharing! :beers: It helps me understand how this tech works. :clap:

@ivan, that's not exactly how Revoscan produces its models. It aligns multiple frames while scanning, like shingles on a roof; that is why the first frame is the most important, as all the other frames will align to its position in space. Many of the frames will be removed later in the process, while fusing and cleaning, as they are not necessary; in most cases 50-60% is removed.
We need so many frames to keep tracking while scanning, since that is what makes the scanner portable: the more frames per second, the more stable it is while scanning. However, that has nothing to do with the final model generated by the software algorithms. For that reason we sometimes get too many bad frames that did not align right … you can actually build a model from only 12 frames over a 360-degree rotation at one steady angle, and not 350 …
I wish we had that option for scanning while the object is on the turntable, where it would capture fewer frames for more accurate results, and keep the normal frame amount for portable handheld scanning.
The processing speed would be fantastic …

@X3msnake good job V, crack this baby down … :joy: you are on a good path


I wrote my Python code to extract the pointcloud to a txt file that can then be opened in CloudCompare, for example.

If you want to try reading the dph files and/or visualizing them as a pointcloud or exporting them to a pointcloud txt file, you can have a look at the helpers folder under the project root on GitHub.
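Very roughly, the naive version of such a helper looks like this (a simplified sketch, not the code in the repo: it uses pixel column, row and raw depth directly as X, Y, Z, so it ignores intrinsics and scale):

```python
import numpy as np

WIDTH, HEIGHT = 640, 400  # assumed .dph resolution for the POP3 cache

def dph_to_txt(dph_path, txt_path):
    """Naive rebuild: pixel column, row and raw depth become X, Y, Z.
    Ignores camera intrinsics and scale, but is enough to eyeball a frame."""
    depth = np.fromfile(dph_path, dtype=np.uint16).reshape(HEIGHT, WIDTH)
    ys, xs = np.nonzero(depth)              # skip pixels with no depth reading
    points = np.column_stack([xs, ys, depth[ys, xs]])
    np.savetxt(txt_path, points, fmt="%d")  # one "X Y Z" point per line

dph_to_txt("frame_000_0000.dph", "frame_000_0000.txt")
```

CloudCompare opens the resulting .txt through its ASCII import dialog and treats each line as one point.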

As for my background, I'm a communication designer that has strayed from the path and gone into the realms of production design, with a large general knowledge of many production and technology subjects. I am a self-taught programmer that knows enough coding to discern when ChatGPT is hallucinating :wink:


I see! So what's the purpose of these extracted frames exactly? Do they decode the point cloud which is processed later on with some voodoo algorithms? Or is their purpose something else? :grin:

Cool! :smiley:
And your career is even cooler! :nerd_face:

Good to know. I might come back to you to confirm what ChatGPT claims 🤣


@ivan

I have cleaned up the code to be easier to use and added instructions on how to use it.
I also recorded this video for you and anyone wanting to explore the files :wink:

Hope it helps


Thx A LOT, mate!!! :beers:
It DOES help 😅


Revopoint Revoscan Flayer (Frame Player) project research

Here is the first try at extracting the pointcloud with color from the .dph and corresponding .img files that are in a project's scan cache folder.

Unlike what I presumed at first, the depth is not linear, which probably means information on the camera intrinsics is missing; as a result the captured points look distorted when rebuilt, and the resulting pointcloud is not to scale.

The secret to a proper rebuild probably lies in the .inf files with the same name; they likely hold the frame's camera intrinsics, offsets and min/max depth values, allowing a proper rebuild of the pointcloud scale and geometry.

On the good side, it appears that the texture and the depth map are flattened in relation to each other, but have some XY offset that needs to be compensated for each frame. In this particular frame the RGB image was resized to 640x400 (from 1280x800) and corrected by X -40px and Y 0.

depth frame_000_0332.dph and color .img
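To make the offset and the missing intrinsics concrete, here is a simplified sketch of that rebuild. The fx/fy/cx/cy values are placeholders (the real per-frame values are presumably in the .inf files), the -40 px shift is just the empirical value found for this frame, and it assumes the .img decodes with cv2.imread (a raw buffer would need np.fromfile and a reshape instead):

```python
import numpy as np
import cv2

DEPTH_W, DEPTH_H = 640, 400

# Placeholder pinhole intrinsics; the real per-frame values are presumably
# stored in the matching .inf file, so do not trust these numbers.
FX, FY = 650.0, 650.0
CX, CY = DEPTH_W / 2.0, DEPTH_H / 2.0
X_OFFSET = -40  # empirical color-to-depth shift for this particular frame

def colored_points(dph_path, img_path):
    """Back-project depth pixels and sample the (resized, shifted) color image."""
    depth = np.fromfile(dph_path, dtype=np.uint16).reshape(DEPTH_H, DEPTH_W).astype(np.float32)
    color = cv2.imread(img_path)                   # assumes a decodable 1280x800 image
    color = cv2.resize(color, (DEPTH_W, DEPTH_H))  # bring it down to the depth resolution
    color = np.roll(color, X_OFFSET, axis=1)       # crude shift; edges wrap and would need cropping

    ys, xs = np.nonzero(depth)
    z = depth[ys, xs]
    x = (xs - CX) * z / FX                         # pinhole back-projection (uncalibrated)
    y = (ys - CY) * z / FY
    return np.column_stack([x, y, z]), color[ys, xs, ::-1]  # XYZ and RGB (BGR flipped)
```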


Polyscope

First try at a raw frame pointcloud inspector, using the polyscope library for the job.
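Getting a colored frame into polyscope takes very little code. A stripped-down sketch, with random arrays standing in for a real frame's XYZ/RGB data:

```python
import numpy as np
import polyscope as ps

# Stand-in data so the snippet runs on its own; replace with a real frame's
# XYZ (N, 3) and RGB (N, 3, values in 0..1) arrays.
xyz = np.random.rand(1000, 3)
rgb = np.random.rand(1000, 3)

ps.init()
cloud = ps.register_point_cloud("frame", xyz)
cloud.add_color_quantity("rgb", rgb, enabled=True)  # polyscope expects colors in 0..1
cloud.add_scalar_quantity("raw z", xyz[:, 2])       # raw depth as an inspectable scalar field
ps.show()
```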


First draft of navigating and displaying a frame as a raw pointcloud.


Export to PLY, TXT, NPZ

I can now export the frames from the .dph map and .img files as colored pointclouds in these 3 formats, from within the same helper code.

The exported files all have XYZ positions plus RGB data and raw Z values.

Open in CloudCompare

When exporting to PLY, the file can be opened directly in CloudCompare and will have RGB and raw Z as a scalar field.
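Stripped down, a PLY writer that produces that kind of file looks roughly like this (a simplified sketch, not the exact exporter in the helpers folder):

```python
import numpy as np

def write_ply(path, xyz, rgb, raw_z):
    """Write an ASCII PLY with per-vertex color plus a raw_z property.
    CloudCompare loads the colors directly and offers raw_z as a scalar field."""
    header = "\n".join([
        "ply",
        "format ascii 1.0",
        f"element vertex {len(xyz)}",
        "property float x", "property float y", "property float z",
        "property uchar red", "property uchar green", "property uchar blue",
        "property float raw_z",
        "end_header",
    ])
    with open(path, "w") as f:
        f.write(header + "\n")
        for (x, y, z), (r, g, b), rz in zip(xyz, rgb, raw_z):
            f.write(f"{x:.3f} {y:.3f} {z:.3f} {int(r)} {int(g)} {int(b)} {rz:.1f}\n")
```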




Image viewpoint

When looking from the bottom, the pointcloud will appear to be the original frame image; when you move, you see it is a pointcloud :wink:


Flayer - inspect and play pointcloud demo

Flayer can now read all the stored pointclouds from a folder and lets you interactively navigate each frame or play the pointclouds back as a movie.
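Stripped down, the playback loop works roughly like this (a simplified sketch, not the Flayer code: it assumes the frames were saved earlier as .npz files holding "xyz" and "rgb" arrays, and it keeps every frame registered at once, which is simple but memory-hungry):

```python
import glob
import numpy as np
import polyscope as ps

# Assumed input: one .npz per frame holding "xyz" (N, 3) and "rgb" (N, 3, 0..1).
paths = sorted(glob.glob("pointclouds/*.npz"))

ps.init()
clouds = []
for i, path in enumerate(paths):
    data = np.load(path)
    c = ps.register_point_cloud(f"frame_{i:04d}", data["xyz"], enabled=(i == 0))
    c.add_color_quantity("rgb", data["rgb"], enabled=True)
    clouds.append(c)

index = 0
def step():
    """Called by polyscope every UI frame: hide the current cloud, show the next."""
    global index
    clouds[index].set_enabled(False)
    index = (index + 1) % len(clouds)
    clouds[index].set_enabled(True)

ps.set_user_callback(step)
ps.show()
```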


Full POP3 scan (458 frames) - realtime pointcloud replay


My mind is blown! :exploding_head: THANK YOU!
