Apply the 3DMouse rotation vector input to an orbit camera

Post questions, comments and feedback to our 3Dconnexion Windows Development Team.


eduardobrites
Posts: 9
Joined: Wed Oct 09, 2013 7:31 am

Apply the 3DMouse rotation vector input to an orbit camera

Post by eduardobrites »

I'm having trouble applying the rotation vector to my orbit camera. It only works correctly when the camera's ViewVector (Target - Position) is (0,-1,0); otherwise the rotation comes out wrong.

In the examples I've seen so far, the rotations are applied to the object and the camera stays still.

I would like to know how to calculate the new vectors for my camera (position, target and up) from the 3D mouse rotation vector input.
jwick
Moderator
Posts: 3339
Joined: Wed Dec 20, 2006 2:25 pm
Location: USA
Contact:

Re: Apply the 3DMouse rotation vector input to an orbit camera

Post by jwick »

The device just senses the directions you are pushing/twisting with your hand.

Since you are looking at your display and probably have your 3D mouse aligned with your display, those translation and rotation vectors are in "your" space. Your viewpoint is usually also the position of the camera, so the vectors (the numbers) are in eye/camera space.

When modifying a transform in a program, you must determine which space the parameters you have access to are in. Most of the time, the camera parameters are the location and orientation of the camera in object space (they are object-space numbers). To modify those parameters, you need to transform the camera-space input vectors (from the 3D mouse) into object-space vectors. After that you can apply them directly to the camera parameters (specified in object space).
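As a minimal sketch of that step, the camera-space device vectors can be carried into object/world space by multiplying them with the camera's orientation matrix. The matrix values and names here (eye_to_object, translation_eye, rotation_eye) are illustrative assumptions, not from any particular SDK or graphics API:

```python
# Sketch: transform the 3D mouse's camera-space input vectors into
# object/world space with the camera's eye-to-object rotation matrix.

def mat_vec(m, v):
    """Multiply a 3x3 matrix (list of rows) by a 3-vector."""
    return [sum(m[i][j] * v[j] for j in range(3)) for i in range(3)]

# Hypothetical eye-to-object rotation: camera rotated 90 degrees about the
# world Z axis, so camera-space +X maps to world +Y.
eye_to_object = [
    [0.0, -1.0, 0.0],
    [1.0,  0.0, 0.0],
    [0.0,  0.0, 1.0],
]

# Raw device vectors, expressed in eye/camera space.
translation_eye = [1.0, 0.0, 0.0]   # push to the right
rotation_eye    = [0.0, 1.0, 0.0]   # twist about the camera's vertical axis

translation_obj = mat_vec(eye_to_object, translation_eye)
rotation_obj    = mat_vec(eye_to_object, rotation_eye)
print(translation_obj)  # [0.0, 1.0, 0.0] -- "right" in eye space is +Y in world
```

The same multiplication is applied to both the translation and the rotation vector; only after this step do the numbers mean anything in the space where the camera parameters live.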

How do you do this transform? You have to find (or calculate) a vector transform that converts from eye space to object space: an EyeToObject matrix. It is probably available in the graphics API's camera definition. If you can only find an ObjectToEye matrix, then its inverse is EyeToObject. It may also be called something like CameraToWorld, depending on how the API names its coordinate spaces. CameraToWorld is just the camera's pose in the world coordinate system (CameraInWorld). It includes both a position and an orientation; the orientation is probably all you care about.
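If only the opposite-direction matrix is available, the inversion is cheap for the orientation part: a pure rotation matrix is orthonormal, so its inverse is its transpose. A small sketch, with a made-up object_to_eye matrix standing in for whatever the API exposes:

```python
# Sketch: if the API only exposes an object-to-eye rotation, invert it to get
# eye-to-object. For a pure rotation matrix, the inverse is the transpose.

def transpose(m):
    return [[m[j][i] for j in range(3)] for i in range(3)]

def mat_mul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

# Hypothetical object-to-eye rotation (90 degrees about Z).
object_to_eye = [
    [0.0, -1.0, 0.0],
    [1.0,  0.0, 0.0],
    [0.0,  0.0, 1.0],
]

eye_to_object = transpose(object_to_eye)

# Sanity check: a rotation times its transpose is the identity.
identity = mat_mul(object_to_eye, eye_to_object)
print(identity)  # [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
```

Note this shortcut only covers the orientation; if the full 4x4 transform includes the camera position, a proper matrix inverse (or the rigid-body inverse, transposed rotation plus negated rotated translation) is needed.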

It works before you move the camera because the camera starts out aligned with object space, so the transform is an identity matrix.

Once the vectors are transformed into the correct space, the translation vector can simply be added to the existing camera position. How you apply the rotation may make a difference: you can't accumulate rotation-vector components. You have to accumulate into an orientation, that is, a matrix or a quaternion. If you simply apply each rotation to your camera's orientation as it arrives, that is probably fine.
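The accumulation step can be sketched like this: each per-frame rotation vector is converted to a quaternion and composed into the running orientation, and the orbit camera's position is then re-derived by rotating its offset from the target. Everything here (the axis-angle input, target, offset) is an assumed example, not the SDK's data:

```python
import math

# Sketch: accumulate per-frame rotation input as a quaternion orientation
# instead of summing rotation-vector components, then rebuild the orbit
# camera position from the accumulated orientation.

def quat_from_axis_angle(axis, angle):
    s = math.sin(angle / 2.0)
    return (math.cos(angle / 2.0), axis[0] * s, axis[1] * s, axis[2] * s)

def quat_mul(a, b):
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw*bw - ax*bx - ay*by - az*bz,
            aw*bx + ax*bw + ay*bz - az*by,
            aw*by - ax*bz + ay*bw + az*bx,
            aw*bz + ax*by - ay*bx + az*bw)

def quat_rotate(q, v):
    # Rotate vector v by unit quaternion q: q * (0, v) * conj(q).
    w, x, y, z = q
    qv = quat_mul(quat_mul(q, (0.0, v[0], v[1], v[2])), (w, -x, -y, -z))
    return [qv[1], qv[2], qv[3]]

# Accumulate two 45-degree twists about the world up axis (stand-ins for
# two frames of transformed 3D mouse rotation input).
orientation = (1.0, 0.0, 0.0, 0.0)          # identity
step = quat_from_axis_angle([0.0, 0.0, 1.0], math.pi / 4.0)
for _ in range(2):
    orientation = quat_mul(step, orientation)

# Re-derive the camera position by rotating its offset from the target.
target = [0.0, 0.0, 0.0]
offset = [0.0, -5.0, 0.0]                   # camera starts 5 units behind target
position = [t + p for t, p in zip(target, quat_rotate(orientation, offset))]
print(position)  # ~[5.0, 0.0, 0.0] after a 90-degree orbit
```

The camera's up vector can be recovered the same way, by rotating the initial up vector with the accumulated quaternion, which keeps position, target, and up mutually consistent.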

Our SDK has documentation, samples and code to do these calculations.

When I'm confused, I print out the values and examine them and how they change. Then I make changes one degree of freedom at a time until I understand what is going on.