## Combining translation and rotation into correct matrices. CAD package

### Combining translation and rotation into correct matrices. CAD package

Hi,

I'm implementing the normal 6 DOF space mouse into our CAD package (www.formware.co)

The translations XYZ of the camera are all working. Impressed by how smooth this went.

I'm struggling with how to correctly implement the rotations.

The SDK documentation mentions something about a single matrix, but it is unclear to me what it does.

I'm using C#, so I can't easily check the Math DLL.

Is there any more documentation or example on how to handle this?

My current code (still pretty short) is below.

I have the CamToWorld and WorldToCam matrices present and they seem to be working.

All movements are summed up and multiplied with CamToWorld at the start of the movement.

The question is how to insert the rotation data here to arrive at final UP, EYE and TARGET vec3s.

kind regards,

Elco

```
if (spwEvent.type.Equals(SiApp.SiEventType.SI_MOTION_EVENT))
{
    if (!MouseMoveConnexionStarted)
    {
        MouseMoveConnexionStarted = true;
        Connexion_T = Vector3.Zero;
        Connexion_R = Vector3.Zero;
        Connexion_Up_Start = FG.Up;
        Connexion_Eye_Start = FG.Eye;
        Connexion_Target_Start = FG.Target;
        Connexion_CameraToWorldMatrixFromStart = Matrix4.Invert(FG.GLViewMatrix);
    }

    //get the mouse data from the event.
    Connexion_T.X -= spwEvent.spwData.mData[0];
    Connexion_T.Y -= spwEvent.spwData.mData[1];
    Connexion_T.Z += spwEvent.spwData.mData[2]; // zooming

    float speed = 0.001f;
    Vector3 txy_world = Vector3.Transform(new Vector3(Connexion_T.X, Connexion_T.Y, 0), Connexion_CameraToWorldMatrixFromStart) * speed;
    Vector3 tz_world = Vector3.Transform(new Vector3(0, 0, Connexion_T.Z), Connexion_CameraToWorldMatrixFromStart) * speed * 5;

    //set UP, EYE, TARGET
    FG.Eye = Connexion_Eye_Start + txy_world + tz_world; // move the eye XY and zoom Z
    FG.Target = Connexion_Target_Start + txy_world; // move the target
    //FG.UP ???

    SetPerspectiveandView(); //update 3d view.
    glControl1.Invalidate();
}
else if (spwEvent.type.Equals(SiApp.SiEventType.SI_ZERO_EVENT))
{
    //ends movement.
    MouseMoveConnexionStarted = false;
}
```

### Re: Combining translation and rotation into correct matrices. CAD package

It looks like you have the general idea of converting the vectors to the coordinate space they need to be applied in.

To rotate the coordinate system you should make a matrix out of the rotation vector.

Our C/C++ samples include code to do that. I see our C# samples don't export that function.

You can do this yourself. The 3 rotation values are the components of the rotation axis in eye space. The length is the amount of twist about that axis (larger vector, user is twisting harder).

The math library you are using probably has a function to convert a rotation axis to a matrix (glRotatef?). If nothing else, the values are very small and don't suffer terribly from just [M] = [X][Y][Z]. As long as you use that small matrix to accumulate into a proper matrix (your camera matrix), you should stay clear of gimbal issues. Do not accumulate those rotation components on their own; it will quickly become useless. You can accumulate translations as a vector, but not rotations. There are other methods, but if you are using OpenGL, a matrix is most convenient.
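For reference, the axis-angle conversion can be done with Rodrigues' formula. Here is a minimal, illustrative Python sketch (the function name is mine, not SDK code):

```python
import numpy as np

def rotation_vector_to_matrix(r):
    """Convert a 3D mouse rotation vector (axis * angle, radians) to a 3x3
    rotation matrix via Rodrigues' formula. The vector's direction is the
    rotation axis; its length is the amount of twist about that axis."""
    r = np.asarray(r, dtype=float)
    angle = np.linalg.norm(r)
    if angle < 1e-12:
        return np.eye(3)                      # no twist: identity rotation
    x, y, z = r / angle                       # unit rotation axis
    K = np.array([[0.0, -z,   y],
                  [ z,  0.0, -x],
                  [-y,   x,  0.0]])           # cross-product (skew) matrix
    return np.eye(3) + np.sin(angle) * K + (1 - np.cos(angle)) * (K @ K)
```

The resulting matrix is then accumulated into the camera matrix (e.g. `cam = cam @ R`), never summed component-wise, exactly as described above.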

We have a new SDK that should do much of this for you. Take a look at it, and the included samples.

The most difficult problem with this is knowing what space the numbers are in. Often, the names don't match the math. E.g., a CameraToWorld or CameraInWorld matrix may not be exactly what its name says. It's quite common to find a silent minus sign thrown in, a vector inverted, or other funny business. I always verify the values by looking at the numbers, making a change: aha, yes that vector is correct, no that vector is not right, … You do have to understand what is going on. It is rare that it all just magically works correctly.

I'll also warn you that if you are paralleling the 2D mouse code in the application, the math in that code may not deal nicely with the viewing math. 2D mice don't have pure 3D/6DOF input like a 3D mouse has. The 2D code immediately starts off with a non-intuitive transformation from the 2D device space to the viewing space, and is more complicated than required for the 3D mouse.

A matrix (as used here) represents/describes/calculates the difference between two rigid coordinate systems. It is a formula for transforming a point/vector from one coordinate system to the other (in one direction). The inverse is the transformation the other direction. "Coordinate systems are your friends."

### Re: Combining translation and rotation into correct matrices. CAD package

Thanks for your detailed answer. It raised some more questions for me:

1. The accumulation trick you mention, each 'event' one could then do [M] = [X][Y][Z], but then multiply with the previous accumulated [M]'s again? So that would mean in the end: [X1][Y1][Z1][X2][Y2][Z2][X3][Y3][Z3] etc. ? which is then applied once globally at every frame refresh?

2. Is it possible with this method to do all 6 DOF's at the same time? Or is it better to pick either translation or rotation depending on.. ?

3. I checked the new SDK quickly, but I felt it does all the handling completely? I have my own code that handles most of the 2D mouse interactions, which, as you describe, is a lot of trickery to get working correctly. Hence I would rather keep that under our control to avoid extra conversion mistakes.

4. What is the best way to handle 2dMouse / 3dMouse movement that happens at the same time? Is it normal to block one or the other?

### Re: Combining translation and rotation into correct matrices. CAD package

I'd call it more like:

```
forever {
    ; given new force & torque components from the 3D mouse
    [M'] = [Rx'][Ry'][Rz'][X'][Y'][Z']  ; if M is a 4x4 you should be able to do translations and rotations together
    [M] = [M][M']  ; accumulate in a matrix, not in the individual device components (not [X] = [X][X'], …)
}
```

Pre- vs post-multiplication depends on your graphics library.
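The loop above can be sketched in Python/numpy as follows (a column-vector, 4x4 homogeneous convention is assumed here; all names are mine, not SDK functions):

```python
import numpy as np

def small_motion_matrix(tx, ty, tz, rx, ry, rz):
    """Build the per-event delta matrix [M'] = [Rx'][Ry'][Rz'][T'] from the
    (small) device components. Angles are in radians."""
    cx, sx = np.cos(rx), np.sin(rx)
    cy, sy = np.cos(ry), np.sin(ry)
    cz, sz = np.cos(rz), np.sin(rz)
    Rx = np.array([[1, 0, 0, 0], [0, cx, -sx, 0], [0, sx, cx, 0], [0, 0, 0, 1]])
    Ry = np.array([[cy, 0, sy, 0], [0, 1, 0, 0], [-sy, 0, cy, 0], [0, 0, 0, 1]])
    Rz = np.array([[cz, -sz, 0, 0], [sz, cz, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]])
    T = np.eye(4)
    T[:3, 3] = [tx, ty, tz]
    return Rx @ Ry @ Rz @ T

# Accumulate in the camera matrix, never in the raw device components.
M = np.eye(4)                      # camera pose: the single source of truth
for event in []:                   # placeholder for the device event stream
    M = M @ small_motion_matrix(*event)
```

Whether the delta goes on the left or the right of `M` (and therefore whether it acts in camera space or world space) depends on the row-vs-column-vector convention of your graphics library, as noted above.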

I wrote a book on this math. I should try to dig it up.

If [M] is the truth, I'd try my best to have the 2D mouse also update it and have only one representation of the view/camera <-> world relationship. I treat a 2D mouse as a 3D mouse with only 2 DOFs.

E.g.,

…

[M] = [M][2d mouse M']

The 3D mouse is the easiest to deal with because it gives you exactly what you need. A pure camera movement. The 2D mouse has to be told which parts of that movement it is controlling, and how it is doing it.

You can trace backwards from the matrix that is passed to the graphics library. First make sure that matrix is correct, then work backwards toward the devices, making sure that all the transformations along the way are correct and make sense. This is the hardest part, but once all the unexplained "-" signs are removed, the code will be much easier to work with in the future.

### Re: Combining translation and rotation into correct matrices. CAD package

I have some WPF code which uses QuaternionRotation3D from System.Windows.Media.Media3D.

I can email the entire sample to you if you like.

Code: Select all

```
// Vector3D is a struct, so nullable parameters are used to signal "no data".
public void UpdateCube(Vector3D? tv, Vector3D? rv)
{
    ModelVisual3D mv = this.theViewport3D.Children[1] as ModelVisual3D;
    Transform3DGroup t3dg = mv.Transform as Transform3DGroup;
    RotateTransform3D _GroupRotateTransform = t3dg.Children[1] as RotateTransform3D;
    TranslateTransform3D _GroupTranslateTransform = t3dg.Children[2] as TranslateTransform3D;
    if (rv.HasValue)
    {
        // The vector's direction is the rotation axis; its length is the twist.
        Vector3D axisOfRotation = rv.Value;
        double amountOfRotation = axisOfRotation.Length;
        if (amountOfRotation > 0)
        {
            axisOfRotation.Normalize();
            // Quaternion takes the angle in degrees; scale the raw twist down.
            currentOrientation = new Quaternion(axisOfRotation, amountOfRotation / 20.0) * currentOrientation;
            _GroupRotateTransform.Rotation = new QuaternionRotation3D(currentOrientation);
        }
    }
    if (tv.HasValue)
    {
        _GroupTranslateTransform.OffsetX += tv.Value.X / 1000.0;
        _GroupTranslateTransform.OffsetY += tv.Value.Y / 1000.0;
        _GroupTranslateTransform.OffsetZ += tv.Value.Z / 1000.0;
    }
}
```

### Re: Combining translation and rotation into correct matrices. CAD package

Thanks for this example. Yes please email me the entire sample. info[at]formware[dot]co (not .com)

I've continued on it as of yesterday and studied my previous older code again.

My main challenge/problem is as follows.

Our entire code base is built around the following order when 'recalculating perspective' for a view:

1. Determine EYE/TARGET/UP vectors from mouse movement (pitch/yaw)

2. Use a LookAt library function to generate the View matrix from EYE/TARGET

The reason for this order is that I wanted very fine-grained control over 2D mouse input. Pitch and yaw movements have specific calculations and are not simply a sphere. Secondly, there are things like 'zoom to mouse'. Doing these calculations was a lot easier on the vectors/points in world space.

Now the problem is as follows.

1. The sample and your answers want me to directly manipulate the View matrix. But then nothing happens to the EYE/TARGET/UP vector that are used in all other viewing code. How are they supposed to be updated in this example?

2. With the translations I was able to convert them to world coordinates and then apply them to EYE/TARGET/UP (see sample above).

How does this apply to the rotations, which are around the TARGET point? Which matrix should be updated in my case?

### Re: Combining translation and rotation into correct matrices. CAD package

Email sent.

The demo uses Media3D classes to do the math. It doesn't use matrices. I included a C# version of our C/C++ function SPW_ArbitraryAxisToMatrix. But in this case, it isn't used. The Media3D classes seem to work fine, though I'm not sure I completely understand what they are representing. As usual, the documentation is not as exact as I like to see it. I have some taboo minus signs in there to get it to change the matrix correctly. It might be transposed.

Your approach sounds fine. After the 2D mouse math, you end up with a matrix. The 3D mouse math just starts with that matrix, modifies it and gives you the new one.

The issue is whether you can invert those calculations in your 2D mouse code. That is, where is the view state being kept? If it's a one-way street from 2D mouse to view matrix (i.e., you always overwrite the entire view matrix), the state is probably being kept in the 2D mouse code. That won't work: you'd change the view matrix with the 3D mouse, but the 2D mouse would just overwrite it as soon as the 2D mouse is moved. If you can't inversely calculate the 2D mouse parameters from the view matrix, then I'd try to have the 2D mouse code create a delta matrix -- just the amount the camera needs to move, not the entire new location/orientation of the camera. Then you can append that delta matrix to the view matrix, and the state will be kept in the view matrix.

It sounds like the Eye/Target/Up is the most likely place to save the state.

A view matrix consists of the up vector, eye vector, etc. That's what those numbers are. If you ever need them, they are right there.

As you found, sometimes it is much better to work in one space or the other. You can use the view matrix and its inverse to convert a vector back and forth between camera space and world space as needed. After changing it, convert it back to the space you need to apply it in. It's very helpful for controlling the camera. E.g., to know whether the user is trying to move the camera below ground level, you want the camera position in world coordinates (the ground is defined in world coords). If you want to apply the rotation vector from the 3D mouse, it arrives in camera coords; convert it to world coordinates if you want to apply it there.

I use what I find to be a useful naming system for transforms. I never use the term 'view matrix' in my code; it is not consistently well-defined in every API. I use, for example, EyeToWorld and EyeInWorld, which are the same thing. They represent a matrix that transforms a vector from Eye space To World space. The alternate name is useful because the numbers in the matrix show the position/orientation of the Eye In the World space. Same matrix, different names. Sometimes it is easier to use one name or the other depending on what you are doing with it. They are aliases.

And vice-versa: the inverse of EyeToWorld (EyeInWorld is a different name for the same matrix) is WorldToEye (WorldInEye). It represents the transform from World space To Eye space (and the position of the World space In the Eye space). You get it.
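To make the eye/target/up ↔ matrix relationship concrete, here is a minimal Python sketch (assuming OpenGL-style conventions: the camera looks down -Z in eye space, column-vector matrices; the function names are mine):

```python
import numpy as np

def eye_to_world(eye, target, up):
    """Build an EyeToWorld (camera-in-world) matrix from look-at parameters."""
    f = np.asarray(target, float) - np.asarray(eye, float)
    f /= np.linalg.norm(f)                        # forward: view direction
    s = np.cross(f, up)
    s /= np.linalg.norm(s)                        # right: +X axis of eye space
    u = np.cross(s, f)                            # true up: +Y axis of eye space
    M = np.eye(4)
    M[:3, 0], M[:3, 1], M[:3, 2], M[:3, 3] = s, u, -f, eye
    return M

def read_back(M, distance):
    """Recover eye/target/up from an EyeToWorld matrix; the target sits
    `distance` in front of the eye along the view direction (-Z column)."""
    eye, up, forward = M[:3, 3], M[:3, 1], -M[:3, 2]
    return eye, eye + distance * forward, up
```

`np.linalg.inv(eye_to_world(...))` then gives the WorldToEye matrix; round-tripping through `read_back` shows the up vector, eye position, etc. really are "right there" in the matrix columns.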

The demo shows these numbers so you can see what is happening.

### Re: Combining translation and rotation into correct matrices. CAD package

After another day of work I learned a lot, but I still don't have a good working solution.

I scanned my way through the V3.4 and V4 SDK's but I can't find any 3d math example code handling the view.

From the documentation I gather that I probably have 'target camera mode'.

I implemented the translations, and they work in this order during the move event:

1. Multiply the delta_t_cam (mouse data) with the accumulating WorldToCam

2. Read back EYE/UP from the accumulating WorldToCam.

3. Get TARGET from the previous distance EYE->TARGET (doesn't change when rotating or translating)

4. redraw view based upon EYE/UP/TARGET

So this gives me a 3d view with all 3 translations working nicely from all angles. As if you are moving the camera system.

The next step is rotation, which goes completely wrong.

questions:

1. I think I have the case of a "target camera mode" (naming from the SDK docs). All should rotate around a TARGET. Is that correct?

2. The rotations don't work at all. It should rotate around the TARGET.

Assuming I'm in target camera mode, would I require a T*R*T matrix with extra T translations from EYE to TARGET?

3. What would the order of rotations/translations then be?

I'm using OpenTK/OpenGL.

### Re: Combining translation and rotation into correct matrices. CAD package

It's impossible to give you exact recommendations without having all the code.

I can only give you guidelines.

1) Work with only 1 DOF at a time. Once that is working add more.

2) Work with constant values at first. Don't let the variable nature of the input confuse things.

3) Get translations working, then apply a single 90° rotation and see if it still works.

4) A target camera mode generally is not 6DOF interaction. You are constrained to look at the target, not away. So Ry and Rx wouldn't do anything. It's up to you whether you want to have them contribute to panning the camera around the target point.

5) I've always done target camera / orbit mode by translating the camera to the target point, rotating about the vertical axis (new longitude), then the horizontal axis (latitude), then translated out along Z to the new camera position (like a satellite in the sky).

6) Most importantly, print out the numbers. See if they are changing the way you expect. You don't even need to look at the display.

7) If the changes don't look right, after you get the math right, I'd examine how the 2D mouse interface you are using is modifying the numbers. That will help you reverse engineer what is happening downstream.

8) What could be happening downstream? E.g., if you've told the library you are feeding that the camera should not rotate, they may very well be removing rotations behind your back.

We should add helper functions for some of these popular libraries.
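The orbit recipe in point 5 (translate to the target, rotate about the vertical axis, then the horizontal axis, then translate back out) can be sketched as follows (Python; the world vertical axis is assumed to be +Y, and the names are mine):

```python
import numpy as np

def orbit_eye(eye, target, yaw, pitch):
    """Rotate the eye position about the target point: first about the world
    vertical axis (yaw/longitude), then about the camera's horizontal axis
    (pitch/latitude). Angles in radians."""
    eye, target = np.asarray(eye, float), np.asarray(target, float)
    v = eye - target                                   # translate target to origin
    c, s = np.cos(yaw), np.sin(yaw)
    Ry = np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])  # yaw about world +Y
    v = Ry @ v
    right = np.cross([0.0, 1.0, 0.0], v)               # camera's horizontal axis
    n = np.linalg.norm(right)
    if n > 1e-9:                                       # skip pitch at the poles
        right /= n
        c, s = np.cos(pitch), np.sin(pitch)
        K = np.array([[0.0, -right[2], right[1]],
                      [right[2], 0.0, -right[0]],
                      [-right[1], right[0], 0.0]])
        v = (np.eye(3) + s * K + (1 - c) * (K @ K)) @ v  # Rodrigues about `right`
    return target + v                                  # translate back out
```

The eye moves "like a satellite in the sky": its distance to the target is preserved, only longitude and latitude change.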

### Re: Combining translation and rotation into correct matrices. CAD package

Hi formware.

We've suggested that some of our partners use the newer SDK, version 4.0. This SDK takes a new approach to integrating 3D mouse support: instead of having to do all the maths from raw device data, the client application exposes what we call "properties". The driver can then use the "properties" to control the client's 3D view. It also works with 2D visualisation (or "pan and zoom").

Since you're using OpenTK, perhaps we can assist you in creating a conversion between the driver's "view affine" transformation matrix (a SDK 4 concept) and the view parameters (camera eye, target, up vector) used in your program.

It's unlikely that your program has a "target camera mode". A "target" is something the user is looking at: it's visible and the user can pick it. Think of it as an "anchor" of sorts. What you are probably using is a point that defines the view direction from the camera's eye position. Confusingly, this point is also called "target".

Interesting that you have looked through SDK versions 3.x and 4, the latter being in beta. What made you pick the older SDK?

Nuno Gomes

### Re: Combining translation and rotation into correct matrices. CAD package

Hi ngomes,

Thanks for the reply. This is going to be a long post, but hopefully it also gives you some feedback.

Your answers are highly appreciated.

I picked the V3.4 because:

- When I run the C# V4.0 example it shows me only exceptions, plus the getters/setters of the camera point and some hit methods. So without digging into the application I could not see it work or compare.

- The documentation seems limited to class descriptions. I'm sort of missing a general guide here.

- I'd rather implement a minimal solution without too much stuff I don't need (personal preference).

- I understand that it factors away all the mathematics, but I'd rather have control over it and understand what is happening. That is why my initial approach was to go with V3.4 (personal preference).

- We have plans for a multi-OS version, so I'd rather have a minimal version with some C++ bindings than a larger C# library that might not work on other OSes. Not sure if this is a valid argument though.

Status:

I somehow got it working now, but it still doesn't feel the same as other software I tried and the demo applications. It looks to me like each CAD package does its own thing and there are little differences everywhere.

With the help of Jwick I figured out a couple of important math rules I no longer had fresh in my mind:

1. Premultiplying the camera matrix means you are working in the camera system.

2. Postmultiplying means you are working in the world system (the system behind the camera matrix).

When translating around a center of rotation:

3. When in the camera system you need to translate by coordinates in the camera system, so TARGET-EYE simply becomes a negative Z vector.

4. When in the world system you need to translate by the center of rotation in world coordinates.

Getting back EYE/TARGET/UP:

5. EYE/UP/LEFT/DIR follow from inverting the accumulated camera matrix.

6. TARGET follows from remembering the distance to the point you were looking at and adding that distance to the new EYE along the new DIR.
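Rules 1-4 can be sketched like this. The following is an illustrative Python mock-up of the math (not the actual C# code); it assumes column-vector convention with row-major 4x4 matrices, and all helper names are made up:

```python
import math

# Minimal 4x4 helpers (column-vector convention, row-major storage).
def mat_mul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def translation(x, y, z):
    return [[1, 0, 0, x], [0, 1, 0, y], [0, 0, 1, z], [0, 0, 0, 1]]

def rotation_z(t):
    c, s = math.cos(t), math.sin(t)
    return [[c, -s, 0, 0], [s, c, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]

# Rule 1: a camera-space rotation premultiplies the view matrix.
def rotate_in_camera_space(view, rot):
    return mat_mul(rot, view)

# Rules 2-4: a world-space rotation about a center of rotation `cor`
# postmultiplies, wrapped in translate-to-origin / translate-back.
def rotate_in_world_space(view, rot, cor):
    to_origin = translation(-cor[0], -cor[1], -cor[2])
    back = translation(cor[0], cor[1], cor[2])
    return mat_mul(view, mat_mul(back, mat_mul(rot, to_origin)))
```

With column vectors, the premultiplied rotation acts in camera coordinates after the view transform, while the postmultiplied one acts in world coordinates before it, which is exactly rules 1 and 2.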

In the actual code it somehow seems backwards from my mathematics (i.e. I have to multiply on the left when my math says right). Maybe I'm forgetting something about matrix ordering in OpenGL/OpenTK, but at least it's consistent now and I know where to insert which matrix.

Question: which mode? Are my mappings correct?

The hard part for now is figuring out which axes (world or camera) the rotations should run on to match what my users expect.

There is a copy from your old documentation below.

Currently I'm thinking I have target camera mode, but with a center of rotation at world [0,0,0]. I don't have object mode (my directions are inverted, and I'm not rotating around my EYE point but rather around the center of the world or another target point). I tried other CoRs the way my normal 2D mouse movement works, but this doesn't seem in line with some other software I checked.

This implies these mappings:

1. Translations in the camera axis system. These all work, also after rotating. I do wonder if I need some multiplier based on the EYE-TARGET distance?

2. Mouse Y-axis rotation: I tried various options, but this maps best to rotating around the world Z axis at [0,0,0] (camera translated to [0,0,0]).

3. Mouse X-axis rotation (forward/backward) -> this maps to rotating around the camera-system X axis translated to [0,0,0].

4. Mouse Z-axis rotation (left/right) -> I'm rotating around the camera Z axis, but it feels off. I'm wondering if this should be optionally blocked, as it messes up your navigation a lot?

Does this sound familiar?
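As a sanity check, mappings 2 and 3 above can be mocked up like this. Again a Python sketch with made-up helper names, assuming column vectors and row-major 4x4 matrices; the real implementation would use OpenTK's Matrix4 instead:

```python
import math

def mat_mul(a, b):
    """4x4 matrix product, column-vector convention, row-major storage."""
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def rotation_x(t):
    c, s = math.cos(t), math.sin(t)
    return [[1, 0, 0, 0], [0, c, -s, 0], [0, s, c, 0], [0, 0, 0, 1]]

def rotation_z(t):
    c, s = math.cos(t), math.sin(t)
    return [[c, -s, 0, 0], [s, c, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]

def apply_mouse_rotation(view, mouse_x, mouse_y, speed=0.001):
    # mapping 2: mouse Y -> spin about the world Z axis at [0,0,0]
    # (world-system rotation, so postmultiply the view matrix)
    view = mat_mul(view, rotation_z(mouse_y * speed))
    # mapping 3: mouse X -> tilt about the camera X axis
    # (camera-system rotation, so premultiply the view matrix)
    return mat_mul(rotation_x(mouse_x * speed), view)
```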

Object Mode

The main characteristic of object mode navigation is that the user has the impression he is holding the object in his hand. An important use for this navigation mode is in the modeling and inspection of parts and assemblies. To create this illusion for the user, the direction that the object moves needs to be the same as the direction the user moves his hand, which is moving the device's cap. It is also important that the center of rotation is fixed relative to the object. A consequence of this mode is that the pan speed needs to be adjusted depending on how far the object is from the user (see chapter 5).

Camera Mode

Camera mode navigation is characterized by the user having the impression that he is moving around in the scene he is observing. A typical use for a camera mode is exploring virtual sceneries or in first person games. This requires that the user moves and turns in the direction that the cap on the 3D Mouse moves, and causes the objects displayed to move in the opposite direction to object mode described above. In camera mode the center of rotation is at the eye or view point. Because camera mode navigation reflects movement in the real world, there are a number of sub modes which have various constraints similar to those existing in the real world.

Target Camera Mode

Target camera mode, for want of a better name, moves the object or scene in the same direction as in camera mode, but uses the object mode center of rotation algorithms. In other words target camera mode is the same as object mode, but pans, zooms and rotates in the opposite direction.
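Regarding the pan-speed remark in the Object Mode description (and the EYE-TARGET multiplier question in mapping 1), one common approach is to scale the translation input by the eye-target distance. A hedged Python sketch; the field-of-view factor here is my own assumption, not something from the SDK:

```python
import math

# Illustrative pan scaling: make a unit of device input cover a constant
# fraction of the view regardless of zoom, by scaling with the distance
# from the eye to the target. The fov factor is an assumption, not SDK API.
def pan_scale(eye, target, fov_y_radians):
    distance = math.dist(eye, target)
    # world-space height of the view frustum at the target's depth
    return 2.0 * distance * math.tan(fov_y_radians / 2.0)
```

Pan deltas would then be multiplied by something like `pan_scale(...) / viewport_height`, so pushing the cap through its full range pans roughly one screen height at any zoom level.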

### Re: Combining translation and rotation into correct matrices. CAD package

Hi formware,

There's quite a bit there, yes. Here's a first question, though.

Did you find the "quick_guide.pdf" document? We tried to word it as an introductory document, a "first-steps" guide.

### Re: Combining translation and rotation into correct matrices. CAD package

Here's a suggestion for you to consider, formware.

Since our samples were not helpful to you, perhaps we can take an OpenTK sample or some other program and modify it to demonstrate how to integrate our devices?

Is there a project you can suggest? Perhaps you can even cobble one together that is simple enough yet useful for illustration purposes, both to you and others using the same framework.

### Re: Combining translation and rotation into correct matrices. CAD package

Hi,

Yes, I found that document, and I understand it.

But when I look in the C# sample application I see a lot more classes, interfaces and views. In the "SpaceMouse" folder there are various callback files instead of one class.

It's unclear to me which getters/setters I need at a minimum for my application.

Another thing that is unclear is which navigation mode V4.0 operates in, and how that can be steered.

Ideally you would want to give your user the option: camera target mode vs object mode. Most CAD packages seem to work in one of these two.

I'm going to give V4 a shot... see how far I get and how it compares to V3.4.
