Thank you both for the replies.
@smf That is a great implementation, I can’t wait to have my little girl try these things out.
To be honest, my main reason for leaning toward the Leap Motion is that I'm not the best coder, and I already have some interaction coded for the Leap Motion and nothing for the RealSense. I have also heard that gesture recognition on the RealSense is not that reliable; what do you think? If I want to detect pinch and point, plus a virtual sphere that follows the radius of curvature of the palm, would you go with the Leap or the RealSense?
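For what it's worth, the geometry behind those two gestures is simple even without a vendor SDK. Here's a minimal sketch (Python for brevity; the fingertip positions and the 25 mm threshold are hypothetical sample values, not anything from the Leap or RealSense APIs). If I remember right, the Leap's Unity bindings expose this kind of data directly, e.g. a per-hand pinch strength, and the older v2 SDK even reported a palm-curvature sphere radius, which is exactly the sphere you describe.

```python
import math

PINCH_THRESHOLD_MM = 25.0  # hypothetical: fingertips closer than this count as a pinch

def distance(a, b):
    """Euclidean distance between two 3D points."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def is_pinching(thumb_tip, index_tip):
    """Pinch = thumb tip and index tip within a small distance of each other."""
    return distance(thumb_tip, index_tip) < PINCH_THRESHOLD_MM

def is_pointing(finger_extended):
    """Point = index finger extended while middle/ring/pinky are curled.
    finger_extended: dict mapping finger name -> bool (extended or not)."""
    return (finger_extended["index"]
            and not any(finger_extended[f] for f in ("middle", "ring", "pinky")))

# Hypothetical sample frame: fingertip positions in millimeters
print(is_pinching((0, 0, 0), (10, 5, 0)))   # tips ~11 mm apart: a pinch
print(is_pointing({"index": True, "middle": False,
                   "ring": False, "pinky": False}))
```

Either camera gives you fingertip and palm positions, so logic like this is portable; the difference is mostly how much of it the SDK has already done for you.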
@dez I later discovered that you can change the scale of the Holoplayer's viewing box in Unity, and with that I have been able to get much closer to physical-world scale.
I have no interest in the appearance of the virtual hands, other than for debugging my alignment. In the end, the user shouldn't need to see the virtual hands at all, since they would theoretically sit exactly under the physical hands as you interact with the virtual objects. My interest in the Leap is that it already ships with solutions for gestures like pinch and point. I'm sure the same can be done with the RealSense camera, but I would have to invest more time figuring that out than I have spent getting the Leap to work with the Holoplayer.
I like the idea of rendering the detected 3D view 1:1 with the physical world; that is the crux of the thing. Once we have that, the other thing that would help is a virtual representation in Unity of the physical volume of the Holoplayer (basically a 3D model of the actual device). This is because I have to position the physical Leap at a known point relative to the Holoplayer device, and at the same time place the virtual Leap in Unity at the corresponding location relative to the virtual view. (Did that make sense?)
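It does make sense, I think. As a sketch of that alignment step: once you measure the physical mounting offset of the Leap relative to the Holoplayer's origin, any point the Leap reports can be carried into Unity world space with one rigid transform. The numbers below are hypothetical (a made-up mounting offset, Leap assumed to report millimeters, Unity units assumed to be meters), and this simplified version skips rotation entirely:

```python
# Hypothetical mounting: Leap sits 5 cm in front of and 2 cm below the
# Holoplayer origin, facing the same way (no rotation in this sketch).
LEAP_OFFSET_M = (0.0, -0.02, 0.05)   # measured by hand, in meters
MM_TO_M = 0.001                      # Leap positions assumed to be in millimeters

def leap_to_unity(p_leap_mm):
    """Map a point from the Leap's frame (mm) into Unity world space (m)."""
    return tuple(c * MM_TO_M + o for c, o in zip(p_leap_mm, LEAP_OFFSET_M))

# A fingertip 100 mm above the Leap ends up about 8 cm above the Holoplayer origin
print(leap_to_unity((0.0, 100.0, 0.0)))
```

In practice the mounting will have some tilt too, so the real version would be offset plus rotation (in Unity you'd typically just parent a virtual Leap object under the Holoplayer model and let its Transform do this for you), but the idea is the same: one measured transform ties the physical rig to the virtual one.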