Since you came here, you’ve probably read my blog post on accessing the SpaceMouse’s raw sensor data from a JavaScript environment.

Some of you may be scratching your heads, wondering why one would want to access raw sensor data in the first place, or how it might be put to use. Others may feel intimidated by the sheer unlimited freedom which raw data from a 6DoF input device allows. I’m writing this post for you!
Basically, there are two paradigms for using a SpaceMouse device in a meaningful way. After all, this is about 6 degrees of freedom, which incidentally matches the number of degrees of freedom in 3D space, namely three axes of translation and three axes of rotation.
- The first of the two paradigms is the «Object» paradigm. You use the puck of a SpaceMouse as if you were interacting with the object itself. Whatever shifting, twisting or tilting you do to the puck is reflected by a corresponding translation and/or rotation of the object.
- The second of the two paradigms is the «Camera» paradigm. You use the puck of a SpaceMouse as if you were interacting with the camera itself. Whatever shifting, twisting or tilting you do to the puck is reflected by a corresponding position and/or orientation change of the camera.
In this post, I’m presenting a simple demo program for a SpaceMouse-with-Three.js application which processes raw sensor data of a SpaceMouse into an intuitive control of five axes in 3D space. Evaluation of the sixth axis is also implemented, but disabled by default. More on this further down this post.
This demo program works with the current (as of this writing) version 0.140.0 of three.js. See the respective import statement in source code.
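For reference, the import might look like the sketch below. The exact CDN URL is an assumption on my part; any host serving the 0.140.0 ES-module build of three.js works.

```javascript
// Pin three.js to the version this demo was written against (0.140.0).
// The unpkg URL is just one option; any ES-module build of that version works.
import * as THREE from 'https://unpkg.com/three@0.140.0/build/three.module.js';
```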
Rotation:
Each axis of rotation can be controlled independently, without tainting the rotation of the remaining axes, and without sacrificing their simultaneous control. Control of the roll axis is disabled by default, but can easily be enabled by setting one variable to ‘true’. I suggest novices at 6DoF devices keep the default setting until they have acquired the necessary level of eye-hand coordination. Yes, folks, it does take some practice to become proficient at handling a 6DoF input device. But it’s clearly doable, and it’s worth it, imo.
Attached to the camera is an (invisible) orthonormal coordinate system, aligned with the camera’s line-of-sight and up-direction, which I’m showing from a 3rd-person perspective for documentation purposes only:

Note the camera’s blue/green/red axesHelper:
blue: line-of-sight
green: up-vector
red: side-vector
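Such a camera-attached helper can be reproduced with three.js’s built-in AxesHelper; a minimal sketch (variable names are mine, not the demo’s):

```javascript
// Attach a small axes helper to the camera so its local frame becomes visible
// when viewed from a second, fixed "documentation" camera.
// three.js paints the helper's X axis red, Y green and Z blue: red = side-vector,
// green = up-vector, and the blue Z axis is collinear with the line of sight
// (the camera itself looks down its local -Z).
const camAxes = new THREE.AxesHelper(0.5); // 0.5 world units long
camera.add(camAxes);                       // helper follows every camera move
scene.add(camera);                         // camera must be in the scene graph
                                           // for its children to render
```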
Also note that, for the sake of intuitiveness, I’m intentionally mixing frames of reference when implementing rotations:
I’m processing a twist of the SpaceMouse puck around its z-axis (see introductory image) as a camera rotation about the Three.js world y-axis, which is what you as a human do when looking left or right around your own vertical axis.
Tilting of the SpaceMouse puck around its x-axis is processed as a camera rotation about its local x-axis, which is what you as a human do when looking down at your feet or up into the sky.
Tilting of the SpaceMouse puck around its y-axis is processed as a camera rotation about its local (Three.js) z-axis, i.e. about the camera’s line of sight. There is no equivalent in typical human experience for this kind of roll rotation, unless maybe you’re an experienced member of an aerobatics team or a fighter jet pilot.
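Under those conventions, the per-frame rotation update can be sketched roughly as follows. The gain constant, the axis signs and the variable names are assumptions of mine, not the demo’s actual code:

```javascript
const WORLD_Y = new THREE.Vector3(0, 1, 0); // world up, shared by all yaw turns
const R_GAIN = 0.0005;                      // sensor units -> radians (tune to taste)
let enableRoll = false;                     // roll is off by default, as in the demo

// rx, ry, rz: raw rotation readings from the SpaceMouse puck
function applyPuckRotation(camera, rx, ry, rz) {
  // puck twist about its z-axis -> yaw about the WORLD y-axis (look left/right)
  camera.rotateOnWorldAxis(WORLD_Y, -rz * R_GAIN);
  // puck tilt about its x-axis -> pitch about the camera's LOCAL x-axis (look up/down)
  camera.rotateX(-rx * R_GAIN);
  // puck tilt about its y-axis -> roll about the camera's LOCAL z-axis (line of sight)
  if (enableRoll) camera.rotateZ(-ry * R_GAIN);
}
```

Mixing `rotateOnWorldAxis` for yaw with local-axis rotations for pitch keeps the horizon level, which is exactly the mixed-frames behaviour described above.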
Even the poster children of banking movements, motorcycle riders, (should) avoid banking their heads during cornering:

Image source: https://motofomo.com/best-motorcycle-riding-books/
And if one looks closely, birds keep their heads level during curved flight paths, too:

So all in all, living creatures are apparently not meant to perform roll rotations. If you do it anyway, you’ll be leaving the realm of the «intuitive». That’s why I’m disabling this axis by default.
Translation:
When rendering the 1st-person view of the SpaceMouse-controlled camera, a puck displacement forward/backward corresponds to the camera dollying along the line of sight (blue). Likewise, a puck displacement along the red axis effects a “horizontal” pan in screen space, and a puck displacement along the green axis a “vertical” pan in screen space, irrespective of the current camera orientation.
Gamers among you will recognise this as the «1st-person» perspective, which is what makes it so intuitive. 😎
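The translation part maps naturally onto Object3D’s local-space translate helpers; a sketch under the same assumptions as above (gain and signs are mine):

```javascript
const T_GAIN = 0.002; // sensor units -> world units per frame (tune to taste)

// tx, ty, tz: raw translation readings from the SpaceMouse puck
function applyPuckTranslation(camera, tx, ty, tz) {
  camera.translateX(tx * T_GAIN); // along the red side-vector: horizontal pan
  camera.translateY(ty * T_GAIN); // along the green up-vector: vertical pan
  camera.translateZ(tz * T_GAIN); // along the line of sight: dolly in/out
}
```

Because these helpers translate along the camera’s local axes, the line of sight is simply shifted in parallel, independent of the current camera orientation.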
Development
Development of this driver was not as straightforward as it might seem. I intentionally split the sensor-data evaluation into a rotation part and a translation part. Rotation sets the camera’s orientation, that is: the 3D direction in which the camera is pointing. Do not confuse this with the Three.js OrbitControls «target», which is a 3D point in space. In OrbitControls, the camera’s angles of rotation are constantly re-computed to ensure the camera keeps looking at the target point. In my driver, the camera’s orientation (line of sight) is maintained during camera translations: the line of sight is shifted in parallel, absent any rotational input from the SpaceMouse.
While my concept was straightforward, there are always opportunities to confuse sines with cosines, mix up signs, and more. When I tried to identify such errors by looking at the view produced by the unfinished, still erroneous camera control and reasoning backwards as to why it differed from my expectation, I got dizzy very quickly. Finally I came up with the idea of viewing the SpaceMouse-controlled camera (including its frustum and local coordinate system) from a fixed 3rd-person perspective, which gave me the much-needed insight into the nature of the remaining implementation errors.
Except for the import of Three.js, my one-file demo is completely self-contained and comprises approx. 350 SLOC, including comments and minimal HTML (200 LLOC).
Prerequisites:
- SpaceMouse or SpaceNavigator
- compatible (i.e. WebHID-enabled) browser
- Internet access, to resolve the import of three.js
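On the WebHID side, getting at the device boils down to something like the sketch below. The vendor ID 0x046d (3Dconnexion/Logitech) and the report layout (six consecutive little-endian 16-bit integers) are assumptions; real devices differ in report IDs and packing, so treat this as a starting point only.

```javascript
// Ask the user to pick a 3Dconnexion device and subscribe to its input reports.
// Must be called from a user gesture (e.g. a click handler) in a
// WebHID-enabled browser.
async function openSpaceMouse(onAxes) {
  const [device] = await navigator.hid.requestDevice({
    filters: [{ vendorId: 0x046d }], // assumed vendor ID; adjust for your device
  });
  if (!device) return null;
  await device.open();
  device.addEventListener('inputreport', (e) => onAxes(decodeAxes(e.data)));
  return device;
}

// Decode one input report into up to six signed axis values.
// Assumed layout: tx, ty, tz, rx, ry, rz as consecutive little-endian int16.
function decodeAxes(view) {
  const axes = [];
  for (let i = 0; i < 6 && (i + 1) * 2 <= view.byteLength; i++) {
    axes.push(view.getInt16(i * 2, true)); // true = little-endian
  }
  return axes;
}
```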
Really interesting project! Many thanks for your work on this!
Just one question:
if you have the «standard» driver installed, to use your SpaceMouse/SpacePilot with local CAD software
-> is there no contention for the device, i.e. does the standard driver block any access to the device from the browser’s WebHID?
Thanks for your interest in this work. I would expect (without knowing for sure) that a SpaceMouse can establish/maintain only one connection at a time, which should be easy enough to simply try out.
On second thought: I can’t think of any scenario where you would want to process the puck movement of one SpaceMouse in a CAD system and in a second program of your own at the same time.
Thanks for getting back that fast! Regarding your comment on the second thought: of course you do not use a website with 3D content (three.js) and a CAD tool at the very same time 😉
But you do have your CAD tool and the browser open at the same time, and may switch from browser to CAD and back several times over a working day.
If you always have to unload/reload a driver (manually or even automatically), that is not something website users can realistically be expected to do…
You can also think of situations where a 3D website and CAD are open/in the foreground, e.g. on different monitors. I often have this use case, switching between different tools with 3D content opened side by side (CAD and FEM, both local). In that case it works flawlessly, in the same way as e.g. the scroll wheel of a mouse: simply click into the window you want to be active.
Of course there is no driver switch needed when staying local.
This is the way it should work in combination with a browser, too 🙂
Hmm, loading/reloading drivers is imho very slow and may even involve Windows system prompts to be confirmed.
Do you have any ideas on this scenario?
I can understand that in a windowed environment the task switch between a window with your CAD application and a browser (tab) with possibly another application that also wants SpaceMouse input is easier said than done. As a rough idea, I can imagine that both applications would need to implement a «lostFocus» handler and a «gotFocus» handler, where the lostFocus handler would be responsible for terminating an existing connection, and the gotFocus handler likewise for establishing one. I have a hunch that there will be many devils lurking in many details of this, not the least of which would be the required cooperation of the provider of the (probably) closed-source CAD application. As a cheaper solution, I can also imagine using _two_ separate SpaceMice, each maintaining its own connection to its dedicated application.
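On the browser side, the lostFocus/gotFocus idea could look roughly like this. This is a sketch only; `getDevice` and `reconnect` are hypothetical helpers, and the CAD side would need the equivalent cooperation:

```javascript
// Release the HID connection when the tab loses focus, re-acquire it when
// focus returns, so a cooperating desktop application can grab the device
// in the meantime.
function installFocusHandlers(getDevice, reconnect) {
  window.addEventListener('blur', async () => {
    const device = getDevice();
    if (device && device.opened) await device.close(); // free the device
  });
  window.addEventListener('focus', async () => {
    const device = getDevice();
    if (device && !device.opened) await device.open(); // take it back
    else if (!device) await reconnect();               // or re-request access
  });
}
```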
Just played a little… looks like it’s not that complicated at all 🙂
At first glance it looks like (at least with the latest 3Dconnexion driver):
You can find and connect to the device even while a CAD application using the 3Dconnexion driver is open,
and with this, CAD and browser seem to get all data in parallel.
-> So you only have to suppress movement on the website if the window is not active/focused.
But somehow I have the feeling that this is a little like using Bluetooth connections in Windows… most of the time it works, but sometimes the right profile is not loaded…
Needs lots of testing.