Some of you may be scratching your heads over why one would want to access raw sensor data in the first place, or how it might be put to use. Others may feel intimidated by the sheer unlimited freedom that raw data from a 6DoF input device allows. I’m writing this post for you!
Basically, there are two paradigms for using a SpaceMouse device in a meaningful way. After all, this is about 6 degrees of freedom, which incidentally matches the number of degrees of freedom in 3D space: three axes of translation and three axes of rotation.
- The first of the two paradigms is the «Object» paradigm. You use the puck of a SpaceMouse as if you were interacting with the object itself. Whatever shifting, twisting or tilting you apply to the puck is reflected by a corresponding translation and/or rotation of the object.
- The second of the two paradigms is the «Camera» paradigm. You use the puck of a SpaceMouse as if you were interacting with the camera itself. Whatever shifting, twisting or tilting you apply to the puck is reflected by a corresponding change of the camera’s position and/or orientation.
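The two paradigms are mirror images of each other: in view space, translating the object by some delta looks the same as translating the camera by the opposite delta. A minimal plain-JavaScript sketch of that duality (all names are mine and purely illustrative; vectors are plain arrays):

```javascript
// Object paradigm: the puck input moves the object itself.
function applyObjectParadigm(objectPos, delta) {
  return objectPos.map((c, i) => c + delta[i]);
}

// Camera paradigm: the same input moves the camera instead,
// in the opposite sense.
function applyCameraParadigm(cameraPos, delta) {
  return cameraPos.map((c, i) => c - delta[i]);
}

// What the viewer ultimately sees is the object's position
// relative to the camera.
function relative(objectPos, cameraPos) {
  return objectPos.map((c, i) => c - cameraPos[i]);
}
```

Feeding the same puck delta through either function yields an identical relative position, which is why a driver can offer both paradigms with little more than a sign flip.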
In this post, I’m presenting a simple demo program for a SpaceMouse-with-Three.js application which processes the raw sensor data of a SpaceMouse into intuitive control of five axes in 3D space. Evaluation of the sixth axis is also implemented, but disabled by default. More on this further down.
This demo program works with the current (as of this writing) version 0.140.0 of three.js. See the respective import statement in the source code.
Each axis of rotation can be controlled independently, without tainting the rotation of the remaining axes and without sacrificing their simultaneous control. Control of the roll axis is disabled by default, but can easily be enabled by setting one variable to ‘true’. I suggest that novices at 6DoF devices keep the default setting until they have acquired the necessary level of hand-eye coordination. Yes, folks, it does take some practice to become proficient at handling a 6DoF input device. But it’s clearly doable, and it’s worth it, imo.
Attached to the camera is an (invisible) orthonormal coordinate system, aligned with the camera’s line-of-sight and up-direction, which I’m showing from a 3rd-person perspective for documentation purposes only:
Note the red/blue/green axesHelper attached to the camera:

- blue: line-of-sight
- green: up-vector
- red: side-vector
Also note that, for the sake of intuitiveness, I’m intentionally mixing frames of reference when implementing rotations:
I’m processing a twist of the SpaceMouse puck around its z-axis (see introductory image) as a camera rotation about the Three.js world y-axis, which is what you as a human do when looking left or right around your own vertical axis.
Tilting of the SpaceMouse puck around its x-axis is processed as a camera rotation about its local x-axis, which is what you as a human do when looking down at your feet or up into the sky.
Tilting of the SpaceMouse puck around its y-axis is processed as a camera rotation about its local (Three.js) z-axis, which coincides with the camera’s line of sight. There is no equivalent in typical human experience for this type of roll rotation, unless maybe you’re an experienced member of an aerobatics team or a fighter jet pilot.
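The three mappings above can be sketched in plain JavaScript, deliberately without the THREE API so the underlying vector math stays visible. All names, as well as the sign conventions of the puck axes, are my illustrative assumptions here, not the demo’s actual code:

```javascript
// Roll about the line of sight is disabled by default (see above).
const ENABLE_ROLL = false;

// Rodrigues' rotation of vector v about the unit axis k by angle a (radians).
function rotate(v, k, a) {
  const cos = Math.cos(a), sin = Math.sin(a);
  const dot = v[0]*k[0] + v[1]*k[1] + v[2]*k[2];
  const cross = [           // k × v
    k[1]*v[2] - k[2]*v[1],
    k[2]*v[0] - k[0]*v[2],
    k[0]*v[1] - k[1]*v[0],
  ];
  return v.map((c, i) => c*cos + cross[i]*sin + k[i]*dot*(1 - cos));
}

// camera: { forward, up } unit vectors; puck: { rx, ry, rz } angles.
function applyPuckRotation(camera, puck) {
  // Puck twist (rz): yaw about the WORLD y-axis.
  camera.forward = rotate(camera.forward, [0, 1, 0], puck.rz);
  camera.up      = rotate(camera.up,      [0, 1, 0], puck.rz);

  // Puck tilt forward/back (rx): pitch about the camera's LOCAL x-axis.
  const f = camera.forward, u = camera.up;
  const right = [           // right = forward × up
    f[1]*u[2] - f[2]*u[1],
    f[2]*u[0] - f[0]*u[2],
    f[0]*u[1] - f[1]*u[0],
  ];
  camera.forward = rotate(camera.forward, right, puck.rx);
  camera.up      = rotate(camera.up,      right, puck.rx);

  // Puck tilt sideways (ry): roll about the line of sight, off by default.
  if (ENABLE_ROLL) {
    camera.up = rotate(camera.up, camera.forward, puck.ry);
  }
  return camera;
}
```

Because yaw uses the fixed world y-axis while pitch uses the camera’s own x-axis, looking around never accumulates an unwanted roll — the «no tainting of the remaining axes» property mentioned earlier.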
Even the poster children of banking movements, motorcycle riders, (should) avoid banking their heads during cornering:
Image source: https://motofomo.com/best-motorcycle-riding-books/
And if one looks closely, birds keep their heads level during curved flightpaths, too:
So, all in all, living creatures are apparently not meant to perform roll rotations. If you do it anyway, you’ll be leaving the realm of the «intuitive». That’s why I’m disabling this axis by default.
When rendering the 1st-person view of the SpaceMouse-controlled camera, a puck displacement forward/backward corresponds to the camera dollying along the line of sight (blue). Likewise, a puck displacement along the red line effects a “horizontal” pan in screen space, and a puck displacement along the green line a “vertical” pan in screen space, irrespective of the current camera orientation.
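In code, this translation mapping is just a linear combination of the camera’s local basis vectors, with the position changing while the line of sight stays put. Again a plain-JavaScript sketch with illustrative names; I’m assuming the puck reports forward/back as y and up/down as z, which may differ between drivers:

```javascript
// camera: { position, forward, up } with unit forward/up vectors;
// puck: { x, y, z } translation inputs.
function applyPuckTranslation(camera, puck) {
  const f = camera.forward, u = camera.up;
  const r = [               // right = forward × up
    f[1]*u[2] - f[2]*u[1],
    f[2]*u[0] - f[0]*u[2],
    f[0]*u[1] - f[1]*u[0],
  ];
  camera.position = camera.position.map((c, i) =>
      c + puck.y * f[i]     // forward/back: dolly along line of sight (blue)
        + puck.x * r[i]     // left/right: "horizontal" pan (red)
        + puck.z * u[i]);   // up/down: "vertical" pan (green)
  return camera;
}
```

Note that camera.forward is read but never written here: absent rotational input, the line of sight is merely shifted in parallel. In three.js terms, one would typically express the same thing with Vector3’s addScaledVector.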
Gamers among you will recognise this as the «1st-person» perspective, which is what makes it so intuitive. 😎
Development of this driver was not as straightforward as it might seem. I intentionally split the sensor data evaluation into a rotation part and a translation part. Rotation sets the camera’s orientation, that is: a 3D direction in which the camera is pointing. Do not confuse this with the Three.js OrbitControls «target», which is a 3D point in space. In OrbitControls, the camera’s angles of rotation are constantly re-computed to make sure the camera keeps looking at the target point. In my driver, the camera’s orientation (line of sight) is maintained during camera translations: absent any rotational input from the SpaceMouse, the line of sight is merely shifted in parallel.
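The difference between a target point and a stored direction can be made concrete in a few lines of plain JavaScript (illustrative only — neither the actual OrbitControls nor my driver code):

```javascript
function normalize(v) {
  const len = Math.hypot(v[0], v[1], v[2]);
  return v.map(c => c / len);
}

// OrbitControls-style: the line of sight is re-derived from a fixed
// target POINT, so translating the camera changes where it looks.
function orbitForward(position, target) {
  return normalize(target.map((c, i) => c - position[i]));
}

// This driver's style: the line of sight is a stored DIRECTION, so a
// translation shifts it in parallel and leaves it otherwise unchanged.
function driverForward(forward) {
  return forward;
}
```

Translate the camera sideways and the orbit-style forward vector swings towards the target, while the stored direction is unaffected.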
While the concept was straightforward, there are always opportunities to confuse sines with cosines, mix up positive and negative signs, and so on. When I tried to identify such errors by looking at the view of the unfinished, still erroneous camera control and reasoning backwards as to why it differed from my expectation, I got dizzy very quickly. It was only when I hit upon the idea of viewing the SpaceMouse-controlled camera (including its frustum and local coordinate system) from a fixed 3rd-person perspective that I gained the much-needed insight into the nature of the remaining implementation errors.
Except for the import of Three.js, my one-file demo is completely self-contained and comprises approx. 350 SLOC, including comments and minimal HTML (200 LLOC).