I am “upgrading” the POP2 to use an external reference frame/coordinate system, which should drastically improve scanning accuracy and, I think, the resolution of the final scan. For this I use a tracker rigidly mounted on the camera. Because there is an offset (both rotation and translation) between this tracker and point cloud space, I need to find that transform matrix to make it work.

This is made tricky by the number of coordinate frames associated with this type of camera: RGB camera space, left IR camera space, right IR camera space, structured-light space (the center of the camera), and finally point cloud space, which could correspond to none of the above depending on specifics.

Inside the calibration parameters saved by “revopoint calibration” you will find some OpenCV calculation logs and other easily deciphered binary files (mostly 4-byte floats). These, plus experiments, lead me to believe that the point cloud X center (the axis along the length of the device) is aligned with the LEFT infrared camera, not the center. Furthermore, the point cloud origin appears to sit about 6 mm inside the body of the scanner (an imperfect measurement that could well turn out to be 0; at any rate the cloud origin is not dramatically shifted with respect to the forward surface of the scanner, which faces the positive Z direction). The 2D calibration (stretching and shifting the RGB and depth images) appears to be handled by the Revopoint software before the point cloud is saved, so I am ignoring that aspect of image acquisition (though I am not absolutely sure that’s OK).
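As an aside, those 4-byte-float files are trivial to dump with numpy for inspection. A minimal sketch, with the caveat that the file name is a stand-in (the real names depend on the Revopoint software version) and I'm assuming little-endian IEEE-754 float32 — the demo writes its own file so the snippet runs as-is:

```python
import numpy as np

# Demo stand-in for a real calibration file: write a few float32 values the
# way the files appear to store them (assumed little-endian IEEE-754 float32).
demo = np.array([0.0012, -0.9998, 0.02, 6.0], dtype="<f4")
demo.tofile("demo_calib.bin")

# This one-liner is all it takes to inspect one of the real files:
values = np.fromfile("demo_calib.bin", dtype="<f4")
print(values)
```

Swap in the actual file path and eyeball the output for plausible rotation/translation values.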
Viewing this problem from another angle (no pun intended), the problem reduces to finding a matrix X such that multiplying a point cloud by [matrix T of the tracker’s world pose] * [inverse of X] sends all the points to their proper place in world space. So X is the transform that takes you from cloud space to tracker space.
X itself is the composition [X1: tracker to scanner body] * [X2: scanner body to left IR image sensor] * [X3: left IR image sensor to point cloud space].
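In homogeneous coordinates this whole pipeline is just matrix products. Here is a minimal numpy sketch of the chain described above — the rotations and translations in X1..X3 below are made-up placeholders (finding the real ones is exactly the problem):

```python
import numpy as np

def make_T(R, t):
    """Build a 4x4 homogeneous transform from a 3x3 rotation and a translation."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Placeholder values -- the real X1..X3 are the unknowns being sought.
X1 = make_T(np.eye(3), [0.0, 0.05, 0.0])    # tracker -> scanner body
X2 = make_T(np.eye(3), [-0.03, 0.0, 0.0])   # scanner body -> left IR sensor
X3 = make_T(np.eye(3), [0.0, 0.0, 0.006])   # left IR sensor -> cloud space
X = X1 @ X2 @ X3                            # cloud space <-> tracker space

T = make_T(np.eye(3), [1.0, 2.0, 3.0])      # tracker pose in world space

p_cloud = np.array([0.1, 0.2, 0.5, 1.0])    # homogeneous cloud point
p_world = T @ np.linalg.inv(X) @ p_cloud    # cloud -> world, as described above
print(p_world[:3])
```

Once X is known, mapping every cloud from every scan position into one consistent world frame is one matrix multiply per cloud.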
What I tried:
- Used CloudCompare to manually align two clouds (e.g. c2 to c1) and got a good matrix (c21) for that. This empirical result should be identical to the sequence of steps taking c2 back to world position and then into c1’s frame, e.g. c21 = X * inv(t2) * t1 * inv(X). However, we were unable to solve for X using all the MATLAB toolboxes known to man.
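For what it’s worth, rearranging c21 = X * inv(t2) * t1 * inv(X) to c21 * X = X * (inv(t2) * t1) is the classic hand-eye calibration equation AX = XB, with A = the cloud-to-cloud alignment and B = the relative tracker motion. Closed-form solvers exist (OpenCV ships one as cv2.calibrateHandEye). Below is a self-contained numpy sketch of the standard two-step approach — a Kabsch fit on the rotation-log vectors, then linear least squares for the translation. It is my own illustration of the textbook method, not Revopoint code; you’d feed it several (A_i, B_i) pairs from several scan positions:

```python
import numpy as np

def log_so3(R):
    """Rotation matrix -> axis-angle vector (assumes angle is not near 0 or pi)."""
    angle = np.arccos(np.clip((np.trace(R) - 1) / 2, -1.0, 1.0))
    w = np.array([R[2, 1] - R[1, 2], R[0, 2] - R[2, 0], R[1, 0] - R[0, 1]])
    return angle / (2 * np.sin(angle)) * w

def solve_ax_xb(As, Bs):
    """Solve A_i X = X B_i for a 4x4 transform X, given lists of 4x4 A_i, B_i."""
    # Rotation: A X = X B implies log(R_A) = R_X log(R_B); fit R_X by Kabsch.
    alphas = np.array([log_so3(A[:3, :3]) for A in As])
    betas = np.array([log_so3(B[:3, :3]) for B in Bs])
    H = betas.T @ alphas
    U, _, Vt = np.linalg.svd(H)
    Rx = Vt.T @ U.T
    if np.linalg.det(Rx) < 0:       # guard against a reflection
        Vt[-1] *= -1
        Rx = Vt.T @ U.T
    # Translation: (R_Ai - I) t = R_X t_Bi - t_Ai, stacked least squares.
    C = np.vstack([A[:3, :3] - np.eye(3) for A in As])
    d = np.hstack([Rx @ B[:3, 3] - A[:3, 3] for A, B in zip(As, Bs)])
    t = np.linalg.lstsq(C, d, rcond=None)[0]
    X = np.eye(4)
    X[:3, :3] = Rx
    X[:3, 3] = t
    return X
```

Note that you need at least two motion pairs whose rotation axes are not parallel, so the scan positions should include rotations about different axes, not just translations.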
- Created a 3D model of the entire assembly and the relative axes, and manually found a matrix that takes me from marker space to left-IR-camera space. This “sort of” places the clouds together, but the match isn’t perfect, because the cloud origin is NOT exactly the left IR camera position on the surface of the device: it could sit a bit inside (not sure by how much), it isn’t exactly at the vertical center of the scanner body, and there may also be some rotation (the “left camera rotation” in the calibration files is non-zero while the translation offset for that same camera is zero).
If anyone is good with this type of task, I can provide PLY and TXT files with the acquired point clouds and the corresponding tracker position and rotation for each, and you can see if you can find a way to get an alignment.