## Hot questions: using the Lightweight Java Game Library (LWJGL) camera

Question:

I have some trees that are causing the game to lag badly, so I would like to check whether the trees are in front of the camera or not before rendering them.

I have had some help from the Mathematics forum, and also had a look at this link to help me convert pitch/yaw to the directional vector needed.

But for some reason, whenever I move the camera to the left the trees become visible, whereas whenever I move it to the right they become invisible (so if the camera is pointing at +1 on the Z axis it seems to render the trees, but at -1 on the Z axis it does not). (See http://i.gyazo.com/cdd05dc3f5dbdc07577c6e41fab3a549 for a less jumpy .mp4.)

I am using the following code to check if an object is in front of the camera or not:

```
Ship you = shipsID.get(UID);
int dis = 300;
Vector3f X = new Vector3f(camera.x(), camera.y(), camera.z());
Vector3f V = new Vector3f(x, y, z);

for (Tree tree : trees) {
    Vector3f Y = new Vector3f(tree.location.x, tree.location.y, tree.location.z);
    Vector3f YMinusX = Y.negate(X); // new Vector3f(Y.x - X.x, Y.y - X.y, Y.z - X.z);
    float dot = Vector3f.dot(YMinusX, V);
    if (dot > 0) {
        tree.render();
    }
}
```

Is anyone able to tell me what I have done wrong here? I can't work out whether it's the math, or the code, or something else.

Camera translation code:

```
public void applyTranslations() {
    glPushAttrib(GL_TRANSFORM_BIT);
    glMatrixMode(GL_MODELVIEW);
    glRotatef(pitch, 1, 0, 0);
    glRotatef(yaw, 0, 1, 0);
    lastYaw = yaw;
    glRotatef(roll, 0, 0, 1);
    glTranslatef(-x, -y, -z);
    glPopAttrib();
}
```

UPDATE:

It appears to depend on where the camera is looking. For example, if I look toward -Z, nothing happens, but if I look toward +Z, they all render. The `if (dot > 0)` check appears to somehow be testing against +Z rather than the camera's actual rotation.

Your camera rotations yaw around Y, implying Y is your up vector. However, `float z = (float) Math.sin(Math.toRadians(camera.pitch()));` puts the pitch contribution into `Z`, as if Z were your up vector. There is an inconsistency. I'd start by swapping `y` and `z` here, then print everything out every frame so you can see what happens as you rotate the camera. Also render just one tree and print `dot`. For example, you might quickly notice the numbers approach 1.0 only when you look 90 degrees to the left of the tree, which narrows down the problem. As @DWilches notes, swapping cos/sin will change the phase of the rotation, which would produce exactly such an effect.
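For reference, a view-direction vector consistent with Y as the up vector can be computed as below. This is a minimal sketch in plain Java; the method name and sign conventions are my assumptions (yaw about Y with yaw 0 looking down -Z, pitch about X), since conventions vary between codebases.

```java
/** Sketch: view direction from yaw/pitch in degrees, with Y up.
 *  Assumed convention: yaw rotates about Y, pitch about X, yaw 0 faces -Z. */
public class CameraDirection {
    public static float[] viewDirection(float yawDeg, float pitchDeg) {
        double yaw = Math.toRadians(yawDeg);
        double pitch = Math.toRadians(pitchDeg);
        // Note: y (not z) carries the vertical component, since Y is up.
        float x = (float) (Math.cos(pitch) * Math.sin(yaw));
        float y = (float) (-Math.sin(pitch));
        float z = (float) (-Math.cos(pitch) * Math.cos(yaw));
        return new float[] { x, y, z };
    }

    public static void main(String[] args) {
        float[] d = viewDirection(90f, 0f); // approximately (1, 0, 0)
        System.out.printf("%.3f %.3f %.3f%n", d[0], d[1], d[2]);
    }
}
```

With yaw 0 and pitch 0 this yields (0, 0, -1), matching the "looking down -Z" convention; printing these values each frame alongside `dot` makes a swapped axis obvious quickly.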

You might consider limiting the dot product to the camera's field of view. There are still problems in that trees are not just points. A better way would be to test tree bounding boxes against the camera frustum, as @glampert suggests.

Still, the tree geometry doesn't look that complex. Optimization wise, I'd start trying to draw them faster. Are you using VBOs? Perhaps look at methods to reduce draw calls such as instancing. Perhaps even use a few models for LOD or billboards. Going even further, billboards with multiple trees on them. Occlusion culling methods could be used to ignore trees behind mountains.

[EDIT] Since your trees are all roughly on a plane, you could limit the problem to the camera's yaw:

```
float angleToTree = (float) Math.atan2(tree.location.z - camera.z(), tree.location.x - camera.x());
float angleDiff = angleToTree - camera.yaw();
if (angleDiff > Math.PI)
    angleDiff -= 2.0f * Math.PI;
if (angleDiff < -Math.PI)
    angleDiff += 2.0f * Math.PI;
if (Math.abs(angleDiff) < cameraFOV + 0.1f) // bias, as trees are not points
    tree.render();
```

Question:

I have a world that is rendered in 2D, and I'm looking at it from the top. It looks like this (the floor tiles have no texture yet, only random green colors):

Before rendering my entities, I transform the model-view matrix like this (while `position` is the position and `zoom` the zoom of the camera, `ROTATION` is 45):

```glScalef(this.zoom, this.zoom, 1);
glTranslatef(this.position.x, this.position.y, 0);
glRotatef(ROTATION, 0, 0, 1);
```

Now I want to calculate the world coordinates for the current position of my camera. What I'm trying is to create a new matrix with `glPushMatrix`, then transform it the same way that the camera is transformed, and then get the matrix and multiply the given camera coordinate with it:

```
private Vector2f toWorldCoordinates(Vector2f position) {
    glPushMatrix();

    // do the same as when rendering
    glScalef(this.zoom, this.zoom, 1);
    glTranslatef(this.position.x, this.position.y, 0);
    glRotatef(ROTATION, 0, 0, 1);

    // get the model-view matrix
    ByteBuffer m = ByteBuffer.allocateDirect(64);
    m.order(ByteOrder.nativeOrder());
    glGetFloatv(GL_MODELVIEW_MATRIX, m);

    // calculate transformed position
    float x = (position.x * m.getFloat(0)) + (position.y * m.getFloat(4)) + m.getFloat(12);
    float y = (position.x * m.getFloat(1)) + (position.y * m.getFloat(5)) + m.getFloat(13);
    System.out.println(x + "/" + y);

    glPopMatrix();
    return new Vector2f(x, y);
}
```

The problem now is: this works for the `x` coordinate, but the `y` coordinate is wrong and always 0. Have I misused the matrix somehow? Is there a "smoother" way of getting the world coordinates from the eye coordinates?

The problem is with the way you're calling `getFloat()`. When you call it with an index on a `ByteBuffer`, the index is the number of bytes into the buffer at which to start reading the float, not the number of floats. You need to multiply each of your indices by 4:

```float x = (position.x * m.getFloat(0)) + (position.y * m.getFloat(16)) + m.getFloat(48);
float y = (position.x * m.getFloat(4)) + (position.y * m.getFloat(20)) + m.getFloat(52);
```

However, given that x is already working for you, I suspect you also need to transpose your matrix coordinates, so the correct code is:

```float x = (position.x * m.getFloat(0)) + (position.y * m.getFloat(4)) + m.getFloat(12);
float y = (position.x * m.getFloat(16)) + (position.y * m.getFloat(20)) + m.getFloat(28);
```

(By a coincidence, transposing the first row of the matrix into the first column gives indices that are exactly four times as great, so the two bugs cancel each other out for x but not for y.)
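The byte-offset behaviour is easy to verify without any OpenGL context. A small self-contained sketch (names are illustrative, and the buffer is filled with 0..15 so the offsets are easy to check):

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

/** Demonstrates that ByteBuffer.getFloat(index) takes a BYTE offset, not a
 *  float index: float number i lives at byte offset i * 4. */
public class ByteOffsetDemo {
    /** Fills a 16-float buffer with 0..15 and reads back float floatIndex. */
    public static float element(int floatIndex) {
        ByteBuffer m = ByteBuffer.allocateDirect(64).order(ByteOrder.nativeOrder());
        for (int i = 0; i < 16; i++) {
            m.putFloat(i * 4, (float) i); // absolute put: byte offset i * 4
        }
        return m.getFloat(floatIndex * 4); // byte offset = float index * 4
    }

    public static void main(String[] args) {
        // Float 13 (the translation-y slot in OpenGL's column-major layout)
        // lives at byte offset 13 * 4 = 52.
        System.out.println(element(13));
    }
}
```

Reading `m.getFloat(13)` instead would fetch four bytes straddling floats 3 and 4, which is exactly the kind of silently wrong value the question ran into.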

If you're looking for a smoother way of doing it, look into using gluUnProject, although you may have to apply some additional transforms (it maps from window to object co-ordinates).

Question:

I would like to have a billboard of a tree to always face the camera.

Currently, I am just using `glRotatef()` and rotating the tree's yaw to the camera's yaw:

```glRotatef(camera.yaw(), 0f, 1f, 0f);
```

However, that unfortunately does not work.

It almost seems like the tree is turning to the right, when it should be turning left.

I've already tried inverting the rotation, but that doesn't work.

```
glRotatef(-camera.yaw(), 0f, 1f, 0f);
// or
glRotatef(camera.yaw(), 0f, -1f, 0f);
```

I could always resort to a crossed billboard (like I do on my grass), but scaled up it looks horrible. I would prefer to use that only as a last resort.

I could also use a 3D model as an alternative, however I find that much harder, and it also is far more intensive on the graphics card.

I've already tried looking here for an answer, but not only is it confusing, it is also for Flash, and it doesn't really come close to explaining how to do this in other languages.

If needed (for whatever reason), my entire rendering code is:

```
public void render() {
    glPushMatrix();
    glDisable(GL_LIGHTING);
    glTranslatef(location.x * TerrainDemo.scale, location.y, location.z * TerrainDemo.scale); // scale is the size of the map: more players online = bigger map
    TexturedModel texturedModel = TerrainDemo.textModel;
    RawModel model = texturedModel.getRawModel();
    glDisable(GL_CULL_FACE);
    GL30.glBindVertexArray(model.getVaoID());
    GL20.glEnableVertexAttribArray(0);
    GL20.glEnableVertexAttribArray(1);
    GL13.glActiveTexture(GL13.GL_TEXTURE0);
    GL11.glBindTexture(GL11.GL_TEXTURE_2D, TerrainDemo.textModel.getTexture().getID());
    glScalef(size.x, size.y, size.z);
    glColor4f(0, 0, 0, 0.5f); // 0, 0, 0 because of the shaders
    glRotatef(Main.TerrainDemo.camera.yaw(), 0f, 1f, 0f);
    glDrawElements(GL_TRIANGLES, model.getVertexCount(), GL11.GL_UNSIGNED_INT, 0);

    GL20.glDisableVertexAttribArray(0);
    GL20.glDisableVertexAttribArray(1);
    GL30.glBindVertexArray(0);
    glEnable(GL_LIGHTING);
    glPopMatrix();
}
```

`camera.yaw()`:

```
/** @return the yaw of the camera in degrees */
public float yaw() {
    return yaw;
}
```

The yaw stays between -360 and 360.

```
/** Processes mouse input and converts it into camera movement. */
public void processMouse() {
    float mouseDX = Mouse.getDX() * 0.16f;
    float mouseDY = Mouse.getDY() * 0.16f;

    if (yaw + mouseDX >= 360) {
        yaw = yaw + mouseDX - 360;
    } else if (yaw + mouseDX < 0) {
        yaw = 360 - yaw + mouseDX;
    } else {
        yaw += mouseDX / 50;
    }
    // Removed code relevant to pitch, since it is not relevant to this question.
}
```

UPDATE: I have tried a lot of combinations, but `camera.yaw()` does not seem to be remotely related to what the trees are doing. No matter what I multiply or divide it by, it always seems to be wrong!

What you want is an axis aligned billboard. First take the center axis in local coordinates, let's call it a. Second you need the axis from the point of view to some point along that axis (the tree's base will do just fine), let's call it v. Given these two vectors you want to form a "tripod" with one leg being coplanar with the center axis and the direction to viewpoint.

This can be done by orthogonalizing the vector v against a using the Gram-Schmidt process, yielding v'. The third leg of the tripod is the cross product between a and v' yielding r = a × v'. The edges of the axis aligned billboard are parallel to a and r; but this is just another way of saying, that a billboard is rotated into the (a,r) plane, which is exactly what rotation matrices describe. Assume the untransformed billboard geometry is in the XY plane, with a parallel to Y, then the rotation matrix would be

[r, a, (0,0,1)]

or in a slightly more elaborate way of writing it

```| r.x , a.x , 0 |
| r.y , a.y , 0 |
| r.z , a.z , 1 |
```

To form a full 4×4 homogeneous transformation matrix, expand it to

```| r.x , a.x , 0 , t.x |
| r.y , a.y , 0 , t.y |
| r.z , a.z , 1 , t.z |
|  0  ,  0  , 0 ,  1  |
```

where t is the translation.
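A quick sketch of that tripod construction in plain Java (the helper names are mine, not from any particular library): orthogonalize v against a, then take the cross product to get r.

```java
/** Sketch of the axis-aligned billboard basis described above.
 *  a = billboard center axis, v = direction from viewpoint toward the axis. */
public class AxialBillboard {
    static float dot(float[] p, float[] q) { return p[0]*q[0] + p[1]*q[1] + p[2]*q[2]; }

    static float[] cross(float[] p, float[] q) {
        return new float[]{ p[1]*q[2] - p[2]*q[1],
                            p[2]*q[0] - p[0]*q[2],
                            p[0]*q[1] - p[1]*q[0] };
    }

    static float[] normalize(float[] p) {
        float len = (float) Math.sqrt(dot(p, p));
        return new float[]{ p[0]/len, p[1]/len, p[2]/len };
    }

    /** Returns {r, a, v'}: the three columns of the rotation matrix. */
    public static float[][] basis(float[] a, float[] v) {
        a = normalize(a);
        // Gram-Schmidt: remove from v its component along a, leaving v'.
        float k = dot(v, a);
        float[] vPrime = normalize(new float[]{ v[0] - k*a[0], v[1] - k*a[1], v[2] - k*a[2] });
        float[] r = cross(a, vPrime); // third leg of the tripod
        return new float[][]{ r, a, vPrime };
    }
}
```

For a vertical axis a = (0, 1, 0) and a viewer direction v = (0, 0.5, 1), this yields v' = (0, 0, 1) and r = (1, 0, 0), i.e. the billboard spins about its own vertical axis to face the viewer.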

Note that if anything about matrices and vector operations doesn't yet make sense to you, you should stop what you are doing with OpenGL right now and first learn these essential basic skills. You will need them.

Question:

The up vector in world space is just up, or whatever you want to define as "up". Normally up is simply positive Y:

```
private static final Vector3f UP = new Vector3f(0.0f, 1.0f, 0.0f);
```

Question:

I am experimenting with LWJGL2 and I want to be able to tell whether the camera can see a certain point in 3D space. I tried on my own to see if I could do it, and ended up with something that kind of works, but only for rotation on the Y axis.

This code works, but not in both axes. I am not sure if this is the correct way to do it either.

```
public boolean isInFrame(float x, float y, float z) { // the z isn't used
    float camera = rotation.y; // the camera's y rotation
    double object = Math.atan2(y, x) * (180 / Math.PI);

    object += (180 - camera);
    if (object < 0) object += 360;
    if (object > 360) object -= 360;

    return 270 > object && 90 < object; // set to 180 degrees for test
}
```

For the code, I am assuming the camera is centered around 0,0,0.

I just want to know how I could change this so that it works for both the x and y rotation of the camera, i.e., so it could tell me whether a point is visible regardless of the camera's rotation.

NOTE: I am not worrying about anything obstructing the view of the point.

Thanks for the help in advance.

If you have the view and projection matrices of the camera (let's call them `V`, `P`), you can just apply the transformations to your point and check whether the result lies within the clip volume of the camera.

Say your point is at `(x, y, z)`. Construct a vector `p = (x, y, z, 1)` and apply the camera transform to it:

```q = P * V * p
```

The view transform `V` applies the transformation of the world relative to the camera, based on the camera position and orientation. Then, the projection `P` deforms the camera's view frustum (i.e., the visible space of the camera) into a unit cube, like this:

(Image source: Song Ho Ahn)

In order to read off the coordinate values of the resulting point, we must first de-homogenize it by dividing by its `w` component:

```r = q / q.w
```

Now, the components `r.x`, `r.y`, `r.z` tell you whether the point lies within the visible range of the camera:

• If `r.x < -1`, the point lies beyond the left border of the screen.
• If `r.x > 1`, the point lies beyond the right border of the screen.
• If `r.y < -1`, the point lies beyond the bottom border of the screen.
• If `r.y > 1`, the point lies beyond the top border of the screen.
• If `r.z < -1`, the point lies beyond the near plane of the camera, i.e., the point is behind the camera or too close for the camera to see.
• If `r.z > 1`, the point lies beyond the far plane of the camera, i.e., the point is too far away for the camera to see.
• Otherwise, the point is in the visible range of the camera.
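As a sketch, the whole test can be done in plain Java using column-major `float[16]` matrices, the layout OpenGL itself uses (the class and method names are illustrative):

```java
/** Sketch of the clip-volume visibility test described above.
 *  Matrices are column-major float[16]: element (row, col) = m[col * 4 + row]. */
public class FrustumPointTest {
    /** Multiplies a column-major 4x4 matrix by a 4-component vector. */
    static float[] mulMatVec(float[] m, float[] v) {
        float[] out = new float[4];
        for (int row = 0; row < 4; row++) {
            out[row] = m[row]      * v[0] + m[4 + row]  * v[1]
                     + m[8 + row]  * v[2] + m[12 + row] * v[3];
        }
        return out;
    }

    /** True if (x, y, z) lies inside the clip volume of the combined P * V matrix. */
    public static boolean isVisible(float[] pv, float x, float y, float z) {
        float[] q = mulMatVec(pv, new float[]{ x, y, z, 1f });
        if (q[3] <= 0f) return false; // behind the projection center
        float rx = q[0] / q[3], ry = q[1] / q[3], rz = q[2] / q[3]; // de-homogenize
        return rx >= -1f && rx <= 1f
            && ry >= -1f && ry <= 1f
            && rz >= -1f && rz <= 1f;
    }
}
```

With a real camera you would pass in `P * V` computed once per frame; with the identity matrix, the visible volume is simply the unit cube, which makes the function easy to sanity-check.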

Question:

Given the surfaceNormal (gl_NormalMatrix * gl_Normal) and a gl_Vertex, how do I rotate the gl_Vertex so that it aligns with that normal? I want to use this for billboards and general rotation.

2 Questions:

1. How would you rotate the gl_Vertex using the surfaceNormal (In the .vert shader)?
2. Should the rotation be done on the GPU (in the shader) or on the CPU? (Please adjust question #1 according to this question given 2 Vector3fs, one for the rotation (normal) the other for the vertex position if it should be done on the CPU)

Thanks!

In most cases, the rotation should be done on the CPU, by way of the model matrix (or directly the world matrix).

Even though the CPU is slower than the GPU, keep in mind that a vertex shader executes once for every vertex, whereas a model matrix is linked to a whole mesh, i.e. to many vertices: it needs to be computed only once per frame if your mesh is dynamic, and only once for your entire program if your mesh never moves.

Question:

I'm implementing a camera which responds to changes in mouse position. It's more a question of maths than of coding, but I'd like to know how to use it as well.

I have a Camera object which rotates around the Y axis when the mouse changes its X position. This works as intended and I can rotate around the cube I'm drawing just fine. Now I would like to implement looking up and down, triggered by vertical mouse movement, but the X and Z axes are relative to the camera object, so I can't just rotate around the X axis; I have to combine the X and Z axes to do this in a fluid motion.

```
public class Camera {

    public float moveSpeed = 0.05f;

    private Vector3f position, rotation;
    private float oldMouseX, oldMouseY, newMouseX, newMouseY, mouseSensitivity;

    public Camera() {
        position = new Vector3f(0f, 0f, 0f);
        rotation = new Vector3f(0f, 0f, 0f);

        mouseSensitivity = 0.1f;
        oldMouseX = 0.0f;
        oldMouseY = 0.0f;
        newMouseX = 0.0f;
        newMouseY = 0.0f;
    }

    public Camera(Vector3f pos, Vector3f rot) {
        this.position = pos;
        this.rotation = rot;

        mouseSensitivity = 0.1f;
        oldMouseX = 0.0f;
        oldMouseY = 0.0f;
        newMouseX = 0.0f;
        newMouseY = 0.0f;
    }

    public void setCursor(int x, int y) {
        oldMouseX = x;
        oldMouseY = y;
        newMouseX = x;
        newMouseY = y;
    }

    public Matrix4f getViewMatrix() {
        Matrix4f rotateX = new Matrix4f().rotate(rotation.x * (float) Math.PI / 180f, new Vector3f(1f, 0f, 0f));
        Matrix4f rotateY = new Matrix4f().rotate(rotation.y * (float) Math.PI / 180f, new Vector3f(0f, 1f, 0f));
        Matrix4f rotateZ = new Matrix4f().rotate(rotation.z * (float) Math.PI / 180f, new Vector3f(0f, 0f, 1f));

        Matrix4f rotation = MatrixMath.mul(rotateX, MatrixMath.mul(rotateZ, rotateY));

        Vector3f negPosition = new Vector3f(-position.x, -position.y, -position.z);
        Matrix4f translation = new Matrix4f().translate(negPosition);

        return MatrixMath.mul(translation, rotation);
    }

    public Vector3f getPosition() {
        return position;
    }

    public Vector3f getRotation() {
        return rotation;
    }

    public void update(Window window) {
        if (window.isKeyDown(GLFW.GLFW_KEY_W)) {
            position.x += Math.sin(Math.PI * rotation.y / 180) * -moveSpeed;
            position.z += Math.cos(Math.PI * rotation.y / 180) * moveSpeed;
        }

        if (window.isKeyDown(GLFW.GLFW_KEY_S)) {
            position.x -= Math.sin(Math.PI * rotation.y / 180) * -moveSpeed;
            position.z -= Math.cos(Math.PI * rotation.y / 180) * moveSpeed;
        }

        if (window.isKeyDown(GLFW.GLFW_KEY_D)) {
            position.x += Math.sin(Math.PI * (rotation.y - 90) / 180) * -moveSpeed;
            position.z += Math.cos(Math.PI * (rotation.y - 90) / 180) * moveSpeed;
        }

        if (window.isKeyDown(GLFW.GLFW_KEY_A)) {
            position.x -= Math.sin(Math.PI * (rotation.y - 90) / 180) * -moveSpeed;
            position.z -= Math.cos(Math.PI * (rotation.y - 90) / 180) * moveSpeed;
        }

        if (window.isKeyDown(GLFW.GLFW_KEY_SPACE)) {
        }

        if (window.isKeyDown(GLFW.GLFW_KEY_LEFT_SHIFT)) {
        }

        newMouseX = (float) window.getMouseX();
        newMouseY = (float) window.getMouseY();

        float dx = newMouseX - oldMouseX;
        float dy = newMouseY - oldMouseY;

        if (window.isMouseButtonDown(GLFW.GLFW_MOUSE_BUTTON_LEFT)) {
            rotation.y += dx * mouseSensitivity;
        }

        // unPos = unPos.rotateAxis(dy * mouseSensitivity, (float) Math.cos(Math.PI * rotation.y / 180), 0f, (float) Math.sin(Math.PI * rotation.y / 180));

        // rotation.x += (float) Math.cos(rotation.y * Math.PI / 180) * (dy * mouseSensitivity);
        // rotation.z += (float) Math.sin(rotation.y * Math.PI / 180) * (dy * mouseSensitivity);

        oldMouseX = newMouseX;
        oldMouseY = newMouseY;
    }
}
```

I don't think it's necessary to show you my Window class, as its functions are quite self-explanatory. The part at the bottom that I commented out was my approach to solving the problem; at first it seemed to work, but the rotation was slightly off.

I expect fluid up and down motion (that is, relative to the camera) but get a weird rolling motion instead.

Any help is greatly appreciated!

I fixed my problem. It's strange, but I had to multiply the Y-rotation matrix by the X-rotation matrix. That doesn't make sense to me, but it works. Thanks for your help!

Question:

In my game (a 3D game based on LWJGL) I walk in a voxel (block) world. The character goes up and down blocks quite fast, so I want the camera to follow the character smoothly. I tried interpolation, but the point I have to interpolate to changes all the time, because the character does not take the step at once (it takes about 5-8 frames). This leads to some shaking that doesn't look nice. Is there a way to do this better?

Greetings

```cameraYPosition = MathHelper.sinerp(cameraYCorrectionPoint, player.y, ((float)glfwGetTime() - cameraYCorrectionTime) * (1/cameraYCorrectionDuration));
```

This is the deciding line. `cameraYCorrectionPoint` is the point where the camera started to interpolate, while `player.y` is the position to interpolate towards (which can obviously change every frame). The last argument calculates the time passed and scales it so it ranges from 0 to 1.

This isn't really working, since the position can change again before the initial interpolation is done, resulting in ugly interpolation. So what can I do for a better approach?

Here is the rest of the code, but I don't think it will help much:

```
if (cameraYPosition == Float.MIN_VALUE) cameraYPosition = player.y;

if (player.y != cameraYPosition) {
    if (cameraYCorrectionPoint == Float.MIN_VALUE) {
        cameraYCorrectionPoint = cameraYPosition;
        cameraYCorrectionTime = (float) glfwGetTime();
    }

    if (cameraYCorrectionTime <= glfwGetTime()) {
        float absoluteDistance = cameraYCorrectionPoint - player.y;
        float relativeDistance = cameraYPosition - player.y;

        if (glfwGetTime() - cameraYCorrectionTime <= cameraYCorrectionDuration) {
            cameraYPosition = MathHelper.sinerp(cameraYCorrectionPoint, player.y, ((float) glfwGetTime() - cameraYCorrectionTime) * (1 / cameraYCorrectionDuration));
        } else {
            cameraYPosition = player.y;
        }

        lastPlayerY = player.y;

        if (absoluteDistance > 0 && cameraYPosition < player.y || absoluteDistance < 0 && cameraYPosition > player.y) {
            cameraYCorrectionDuration = cameraYBaseCorrectionDuration;

            if (relativeDistance < 0 && cameraYPosition - player.y > 0 || relativeDistance > 0 && cameraYPosition - player.y < 0) {
                cameraYPosition = player.y;
            } else {
                cameraYCorrectionPoint = cameraYPosition;
            }
        }
    }
} else {
    cameraYCorrectionDuration = cameraYBaseCorrectionDuration;
    cameraYCorrectionPoint = Float.MIN_VALUE;
}
```

And my interpolation code:

```
package main;

public class MathHelper {
    public static float hermite(float start, float end, float value) {
        return lerp(start, end, value * value * (3.0f - 2.0f * value));
    }

    public static float sinerp(float start, float end, float value) {
        return lerp(start, end, (float) Math.sin(value * Math.PI * 0.5f));
    }

    public static float coserp(float start, float end, float value) {
        return lerp(start, end, 1.0f - (float) Math.cos(value * Math.PI * 0.5f));
    }

    public static float lerp(float start, float end, float value) {
        return ((1.0f - value) * start) + (value * end);
    }
}
```

You can model the camera a bit differently. Split the position into a current position and a target position. When you want to move the camera, only move the target position. Then, each frame, update the current camera position to get closer to the target position. I have good experience with the following update:

```factor = f ^ t
currentCamera = factor * currentCamera + (1 - factor) * targetCamera
```

`f` is a factor which lets you choose how immediate the camera's reaction will be. Higher values result in a looser motion; a value of 0 makes the camera follow its target exactly. `t` is the time since the last update call. If `t` is measured in milliseconds, `f` should take values between 0.95 and 0.99.
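A minimal sketch of this update in Java (names are illustrative; `dt` here is in arbitrary frame units, and as noted above the right `f` depends on the unit of `dt`):

```java
/** Sketch of the exponential-smoothing camera follow described above:
 *  each frame the current value moves toward the target by factor = f^t. */
public class SmoothFollow {
    public static float step(float current, float target, float f, float dt) {
        float factor = (float) Math.pow(f, dt);
        return factor * current + (1f - factor) * target;
    }

    public static void main(String[] args) {
        float cameraY = 0f;
        float playerY = 10f; // target: only this moves when the player steps up
        for (int frame = 0; frame < 60; frame++) {
            cameraY = step(cameraY, playerY, 0.9f, 1f); // dt = 1 frame here
        }
        System.out.println(cameraY); // close to 10 after 60 frames
    }
}
```

Because the update always eases toward wherever the target currently is, the target may jump every frame (as during a 5-8 frame step animation) without restarting any interpolation, which removes the shaking described in the question.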

Question:

I need a line that is part of the user interface but always points to a specific place in 3D space.

To do this, I try:

```
double camY, camX;
camX = cameraX * -1;
if (camX > 90)
    camX = 180 - camX;
if (camX < -90)
    camX = -180 - camX;

camY = cameraY;
double camY2 = camY;
if (camY > 90)
    camY2 = 180 - camY;
if (camY < -90)
    camY2 = -180 - camY;

double x1;
double y1;
double x2 = x * (90.0 - Math.abs(camY)) / 90.0,
       y2 = (y * (90.0 - Math.abs(camY)) / 90.0);
if (vertical) {
    x1 = x2 + (y * (camY2 / 90.0) * ((90 - camX) / 90));
    y1 = (y2 * ((90 - camX) / 90)) - (x * (camY2 / 90.0));
} else {
    x1 = x2 + (y * (camY2 / 90.0));
    y1 = y2 - (x * (camY2 / 90.0));
    y1 = y1 * (camX / 90.0);
}
GL11.glVertex2d(x1, y1);
GL11.glVertex2d(toX, toY);
GL11.glVertex2d(toX, toY);
GL11.glVertex2d(max, toY);
```

Here `x` and `y` are the coordinates of the point in 3D space, `cameraX` and `cameraY` are the camera's rotation angles, and `toX` and `toY` are the destination point on the camera plane (user interface).

All this code runs before the camera (

```
GL11.glOrtho(-max, max, -1, 1, 10, -10);
glRotatef(cameraX, 1f, 0f, 0);
glRotatef(cameraY, 0f, 1f, 0);
```

) and before `GL11.glMatrixMode(GL11.GL_MODELVIEW);`. Therefore, it ignores the Z coordinate.

Initially, I had only the last 4 lines of code, but then when the camera was rotated, the entire line moved behind it. So I added the rest of the calculations.

This partially solves my problem.

Initial position:

Camera rotation on the y axis:

Small line deviations are already visible.

Camera rotation on the x and y axis:

As you can see in the screenshots, the red line still shifts when the camera rotates. How can I make it always stay at the point I need in space (the center of the red circle in the screenshots)?

You should:

1. Project the 3D point you want the line to end to the screen manually. For that

• Get the model-view and projection matrices with `glGetFloatv`, `GL_MODELVIEW_MATRIX` and `GL_PROJECTION_MATRIX`.
• Multiply your 3d point by the matrices, perform a perspective division, and convert to your viewport coordinates.
2. Draw a 2d line from the UI location you want it to begin to the projected 2d location you want it to end.
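The projection step can be sketched in plain Java with column-major `float[16]` matrices (names are illustrative); this mirrors what `gluProject` does:

```java
/** Sketch: project a 3D point to window coordinates by hand.
 *  Matrices are column-major float[16], as returned by glGetFloatv. */
public class ProjectPoint {
    /** Multiplies a column-major 4x4 matrix by a 4-component vector. */
    static float[] mulMatVec(float[] m, float[] v) {
        float[] o = new float[4];
        for (int r = 0; r < 4; r++)
            o[r] = m[r] * v[0] + m[4 + r] * v[1] + m[8 + r] * v[2] + m[12 + r] * v[3];
        return o;
    }

    /** viewport = {x, y, width, height}; returns {winX, winY}, or null if unprojectable. */
    public static float[] project(float[] modelview, float[] projection, int[] viewport,
                                  float x, float y, float z) {
        float[] eye = mulMatVec(modelview, new float[]{ x, y, z, 1f });
        float[] clip = mulMatVec(projection, eye);
        if (clip[3] == 0f) return null;
        float nx = clip[0] / clip[3], ny = clip[1] / clip[3]; // perspective division
        // NDC range [-1, 1] -> viewport pixel coordinates
        float winX = viewport[0] + (nx + 1f) * 0.5f * viewport[2];
        float winY = viewport[1] + (ny + 1f) * 0.5f * viewport[3];
        return new float[]{ winX, winY };
    }
}
```

The resulting `{winX, winY}` is the 2D anchor where the UI line should end, so the line stays glued to the 3D point regardless of camera rotation.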

Question:

I'm currently working on a small FPS project for testing, and I am familiar with OpenGL (LWJGL). My problem is that the rotation of the camera is not very smooth: it "jumps" from pixel to pixel, which is very obvious. How can I smooth it out? [Link to footage:] https://www.youtube.com/watch?v=6Hgt1hXCKKA&feature=youtu.be

Summary of my code: I'm storing the current mouse position in a Vector2f;

I'm increasing yaw and pitch by the relative movement of the camera (new position - old position);

I'm moving the mouse to the center of the window

I'm storing the current position (the center of the window) in the old-position Vector2f

One possible way is to treat the (delta) input of your input device (mouse, keyboard, whatever) not as absolute values for your new camera position or rotation angles, but as an impulse or force to move/rotate in a certain direction. You would then use integration over some time differential `dt` to update the camera position/rotation, with a damping/friction factor that reduces the translational or angular momentum of the camera so that it quickly comes to a stop. This would be a somewhat physical simulation.

Another possible approach is parametric interpolation: whenever you receive a (delta) input from your input device, you calculate a new "desired target position or rotation angle" from it, and then interpolate between the current and target state over time to reach that target.
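The first (impulse plus damping) idea can be sketched like this in Java; the gain and damping constants are arbitrary placeholders, and the class and method names are mine:

```java
/** Sketch of input smoothing via impulse + damping: mouse deltas add angular
 *  velocity, and each update integrates the velocity and applies friction. */
public class SmoothedYaw {
    private float yaw = 0f;          // degrees
    private float yawVelocity = 0f;  // degrees per second

    public void addMouseDelta(float dx) {
        yawVelocity += dx * 40f;     // treat the delta as an impulse (gain is arbitrary)
    }

    public void update(float dt) {   // dt in seconds
        yaw += yawVelocity * dt;                     // integrate
        yawVelocity *= (float) Math.pow(0.001, dt);  // strong per-second damping
    }

    public float yaw() { return yaw; }

    public float yawVelocity() { return yawVelocity; }
}
```

A single one-pixel mouse delta is then spread over several frames of gradually decaying rotation instead of producing one visible jump.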

Question:

(I am using the libGDX framework, which is basically just LWJGL (Java) with OpenGL for rendering.) Hi, I'm trying to render a laser beam; so far I've got this effect:

It's just a rectangle and then the whole effect is done in fragment Shader.

However, as it is a laser beam, I want the rectangle to face the camera, so the player always sees this red transparent "line" every time. And this is driving me crazy. I tried some billboarding, but what I want isn't really billboarding. I just want to rotate it on the Z axis so that the player always sees the whole line, that's all. No X and Y rotations.

As you can see, that's what I want. And it's not billboarding at all.

If it was billboarding, it would look like this: .

I also tried drawing a cylinder and basing the effect on gl_FragCoord, which was working fine, but the coordinates varied (sometimes the UVs were 0 and 1, sometimes 0 and 0.7) and it was not sampling the whole texture, so the effect was broken.

Thus I don't even know what to do now. I would really appreciate any help. Thanks in advance.

```attribute vec3 a_position;
attribute vec2 a_texCoord0;

uniform mat4 u_worldTrans; //model matrix
uniform mat4 u_view; //view matrix
uniform mat4 u_proj; // projection matrix

varying vec2 v_texCoord0;

void main() {
    v_texCoord0 = a_texCoord0;

    vec4 worldTrans = u_worldTrans * vec4(a_position, 1.0);

    gl_Position = u_proj * u_view * worldTrans;
}
```

```#ifdef GL_ES
precision mediump float;
#endif

varying vec2 v_texCoord0;

uniform sampler2D tex; // the texture I apply the red color onto; it's how I get the smooth (transparent) edges

void main() {
    vec4 texelColor = texture2D(tex, v_texCoord0); // sample the texture
    vec4 color = vec4(10.0, 0.0, 0.0, 1.0); // the red color

    // Make the whole texture red: where there is less transparency it should be
    // more red, and at the (more transparent) edges less red.
    float r = 0.15;
    if (texelColor.a > 0.5) r = 0.1;

    // Mix the two colors, depending on the alpha value of texelColor and on r.
    gl_FragColor = vec4(mix(color.rgb, texelColor.rgb, texelColor.a * r), texelColor.a);
}
```

The texture is just a white line opaque in the middle, but transparent at the edges of the texuture. (smooth transition)

If you use DecalBatch to draw your laser, you can do it this way. It's called axial billboarding or cylindrical billboarding, as opposed to the spherical billboarding you described.

The basic idea is that you calculate the direction the sprite would be oriented for spherical billboarding, and then you do a couple of cross products to get the component of that direction that is perpendicular to the axis.

Let's assume your laser sprite is aligned to point up and down. You would do this series of calculations on every frame that the camera or laser moves.

```
// reusable calculation vectors
final Vector3 axis = new Vector3();
final Vector3 look = new Vector3();
final Vector3 tmp = new Vector3();

void orientLaserDecal(Decal decal, float beamWidth, Vector3 endA, Vector3 endB, Camera camera) {
    axis.set(endB).sub(endA); // the axis direction

    decal.setDimensions(beamWidth, axis.len());

    axis.scl(0.5f);
    tmp.set(endA).add(axis); // the center point of the laser

    decal.setPosition(tmp);

    look.set(camera.position).sub(tmp); // Laser center to camera. This is the
    // look vector you'd use for spherical billboarding, so it needs to be adjusted.
    tmp.set(axis).crs(look); // Axis cross look gives you the right vector, the
    // direction the right edge of the sprite should point. This is the same for
    // spherical or cylindrical billboarding.
    look.set(tmp).crs(axis); // Right cross axis gives you an adjusted look vector
    // that is perpendicular to the axis, i.e. cylindrical billboarding.

    decal.setRotation(look.nor(), axis); // Note that setRotation requires the
    // direction vector to be normalized beforehand.
}
```

I didn't check to make sure the direction doesn't get flipped, because I draw it with back face culling turned off. So if you have culling on and don't see the sprite, that last cross product step might need to have its order reversed so the look vector points in the opposite direction.

Question:

I have a camera that is set up with `vecmath.lookatMatrix(eye, center, up)`. The movement works fine: forwards, backwards, right, and left all work. What does not seem to work is the rotation.

I am not really good at math, so I assume I may be missing some logic here, but I thought the rotation would work like this: on rotation around the Y axis I add/subtract a value to the X value of the center vector; on rotation around the X axis I add/subtract a value to the Y value of the center vector. For example, here is rotation to the right: `center = center.add(vecmath.vector(turnSpeed, 0, 0))`

This actually works, but with some strange behaviour. It looks like the higher the X/Y value of the center vector gets, the slower the rotation. I guess that through the addition/subtraction the center vector moves too far away or something similar; I would really like to know what is actually happening.

Actually, while writing this I just realized it can't work like this, because once I have moved around and rotated a bit and am, for example, in "mid air", the rotation would be wrong...

I really hope someone can help me here.

Rotating a vector for OpenGL should be done using matrices. Linear movement can be executed by simply adding vectors together, but for rotation it is not enough to change just one of the coordinates; if that were the case, how would you get from the (X,0,0) direction to (0,X,0)? Here is another tutorial, which is C++, but there are Java samples too. There is a bit of math behind all this; you seem to be familiar with vectors, and probably have a "feel" for them, which helps.

EDIT: if you are to use matrices in OpenGL properly, you'll need to familiarize yourself with the MVP concepts. You have something to display (the model) which is placed somewhere in your world (view), at which you are looking through a camera (projection).
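To make the contrast concrete, here is a minimal rotation about the Z axis written out as the matrix operation (class and method names are illustrative). No fixed per-coordinate addition could take (X,0,0) to (0,X,0) for every angle, but a rotation matrix does it naturally:

```java
/** Sketch: rotate a vector about the Z axis using the standard 2D rotation
 *  matrix applied to the x and y components. */
public class RotateDemo {
    public static float[] rotateZ(float[] v, float degrees) {
        double rad = Math.toRadians(degrees);
        float c = (float) Math.cos(rad), s = (float) Math.sin(rad);
        // | c -s  0 |   | x |
        // | s  c  0 | * | y |
        // | 0  0  1 |   | z |
        return new float[]{ c * v[0] - s * v[1], s * v[0] + c * v[1], v[2] };
    }
}
```

Rotating (3, 0, 0) by 90 degrees yields (0, 3, 0): the vector's length is preserved and only its direction changes, which is exactly what adding a constant to one coordinate cannot do (and why the question's rotation slowed down as the center vector grew).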

Question:

I am using a camera that has a yaw, a pitch, and a roll. When yaw == 0 the camera is looking down the -z axis (yaw == 90 is positive x); when pitch == 270 the camera is looking up (pitch == 0 is looking straight ahead); and when roll == 180 the camera is upside down.

The camera's yaw, pitch, and roll values are never less than zero or greater than 360; when a value passes either bound it is automatically wrapped around to the other side.

I have implemented 3DoF and it works quite nicely; however, when I implemented 6DoF, everything appears to work until the roll is around 90 or 270, at which point strange things happen to the up and right vectors (forward always seems to work, presumably because roll rotates around that axis?).

The scene I am rendering is just a bunch of blocks (in Minecraft-style chunks), and I am always able to move forward/backward and use the forward vector to target a block, so I know the forward vector is done.

Here is my initGL:

```
public void initGL() {
    GL11.glEnable(GL11.GL_TEXTURE_2D);
    GL11.glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
    GL11.glClearDepth(1.0);
    GL11.glEnable(GL11.GL_DEPTH_TEST);
    GL11.glDepthFunc(GL11.GL_LEQUAL);

    GL11.glMatrixMode(GL11.GL_PROJECTION);

    // fov is 45.0f; guard against a zero display height when computing the aspect ratio
    GLU.gluPerspective(fov,
            ((float) Display.getWidth()) / ((float) (Display.getHeight() != 0 ? Display.getHeight() : 1)),
            0.1f, 100.0f);

    GL11.glMatrixMode(GL11.GL_MODELVIEW);
    GL11.glHint(GL11.GL_PERSPECTIVE_CORRECTION_HINT, GL11.GL_NICEST);
}
```

Here is where I rotate and translate to my camera's view:

```
public final void lookThrough() {
    GL11.glRotatef(this.roll, 0.0f, 0.0f, 1.0f);
    GL11.glRotatef(this.pitch, 1.0f, 0.0f, 0.0f);
    GL11.glRotatef(this.yaw, 0.0f, 1.0f, 0.0f);
    GL11.glTranslatef(-this.position.x, -this.position.y, -this.position.z);
}
```

And here are my six degrees of freedom calculations:

```
public static final double ninetyRad    = Math.toRadians(90);
public static final double oneEightyRad = Math.toRadians(180);

public static final strictfp void updateLookVectorsIn6DoF(Vector3f yawPitchAndRoll, Vector3f forward, Vector3f up, Vector3f right) {
    // Angles in radians; assuming x = yaw, y = pitch, z = roll
    final float yaw   = yawPitchAndRoll.x;
    final float pitch = yawPitchAndRoll.y;
    final float roll  = yawPitchAndRoll.z;

    final float sinYaw = ((float) Math.sin(yaw));
    final float cosYaw = ((float) Math.cos(yaw));

    final float sinYaw90  = ((float) Math.sin(yaw + ninetyRad));
    final float cosYaw270 = ((float) Math.cos(yaw - ninetyRad));

    final float sinRoll = ((float) Math.sin(roll));
    final float cosRoll = ((float) Math.cos(roll));

    final float cosPitch90  = ((float) Math.cos(pitch + ninetyRad));
    final float sinPitch90  = ((float) Math.sin(pitch + ninetyRad));
    final float sinPitch270 = ((float) Math.sin(pitch - ninetyRad));

    // Forward: no roll term, because roll rotates around the forward (Z) axis.
    float x = sinYaw * ((float) Math.cos(pitch));
    float y = -((float) Math.sin(pitch));
    float z = cosYaw * ((float) Math.cos(pitch - oneEightyRad));
    forward.set(x, y, z);

    // cos(90) = 0, cos(180) = -1, cos(270) = 0, cos(0) = 1
    // sin(90) = 1, sin(180) = 0, sin(270) = -1, sin(0) = 0

    // Up: strange things occur when roll is near 90 or 270 and yaw is near 0 or 180
    x = -(sinYaw * cosPitch90) * cosRoll - (sinRoll * sinYaw90);
    y = -sinPitch270 * cosRoll;
    z = (cosYaw * cosPitch90) * cosRoll + (sinRoll * cosYaw270);
    up.set(x, y, z);

    // Right: strange things occur when roll is near 90 or 270 and pitch is near 90 or 270
    x = (cosRoll * sinYaw90) - (sinRoll * (sinYaw * cosPitch90));
    y = 0 - (sinRoll * sinPitch90); // this axis works fine
    z = (cosRoll * cosYaw270) + (sinRoll * (sinYaw * cosPitch90));
    right.set(x, y, z);
}
```

I did find a very similar question here, but it uses matrices and quaternions, and I don't want to go that route unless I absolutely have to (and I was careful to multiply roll, pitch, and yaw in the correct order): LWJGL - Problems implementing 'roll' in a 6DOF Camera using quaternions and a translation matrix
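For what it's worth, the reason the multiplication order matters is that rotations about different axes do not commute. A minimal plain-Java sketch (no LWJGL types; `rotX`/`rotY` are hypothetical helpers written out here) makes this visible:

```java
// Sketch: applying yaw then pitch gives a different direction than
// pitch then yaw, which is why the glRotatef/multiplication order
// in a 6DoF camera has to be fixed and consistent.
public class RotationOrder {
    // Rotate around the X axis (pitch) by angle a, in radians.
    static double[] rotX(double[] v, double a) {
        double c = Math.cos(a), s = Math.sin(a);
        return new double[]{ v[0], c * v[1] - s * v[2], s * v[1] + c * v[2] };
    }

    // Rotate around the Y axis (yaw) by angle a, in radians.
    static double[] rotY(double[] v, double a) {
        double c = Math.cos(a), s = Math.sin(a);
        return new double[]{ c * v[0] + s * v[2], v[1], -s * v[0] + c * v[2] };
    }

    public static void main(String[] args) {
        double yaw = Math.toRadians(45), pitch = Math.toRadians(30);
        double[] v = {0, 0, -1};                          // "forward" in OpenGL
        double[] yawFirst   = rotX(rotY(v, yaw), pitch);  // yaw, then pitch
        double[] pitchFirst = rotY(rotX(v, pitch), yaw);  // pitch, then yaw
        // The two results differ, so swapping the order silently
        // changes where the camera looks.
        System.out.println(java.util.Arrays.toString(yawFirst));
        System.out.println(java.util.Arrays.toString(pitchFirst));
    }
}
```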

So I finally got the hang of the meaning of cos and sin (but don't ask me to teach it) and was able to get this working!

Here is the new and improved code:

```
public static final double ninetyRad    = Math.toRadians(90);
public static final double oneEightyRad = Math.toRadians(180);

public static final strictfp void updateLookVectorsIn6DoF(Vector3f yawPitchAndRoll, Vector3f forward, Vector3f up, Vector3f right) {
    // Angles in radians; assuming x = yaw, y = pitch, z = roll
    final float yaw   = yawPitchAndRoll.x;
    final float pitch = yawPitchAndRoll.y;
    final float roll  = yawPitchAndRoll.z;

    final float sinYaw = ((float) Math.sin(yaw));
    final float cosYaw = ((float) Math.cos(yaw));

    final float sinYaw90  = ((float) Math.sin(yaw + ninetyRad));
    final float cosYaw90  = ((float) Math.cos(yaw + ninetyRad));
    final float cosYaw180 = ((float) Math.cos(yaw + oneEightyRad));
    // These two were missing from the posted snippet; cos/sin(yaw + 270) == cos/sin(yaw - 90)
    final float cosYaw270 = ((float) Math.cos(yaw - ninetyRad));
    final float sinYaw270 = ((float) Math.sin(yaw - ninetyRad));

    final float sinRoll    = ((float) Math.sin(roll));
    final float cosRoll    = ((float) Math.cos(roll));
    final float cosRoll180 = ((float) Math.cos(roll + oneEightyRad));

    final float cosPitch90  = ((float) Math.cos(pitch + ninetyRad));
    final float sinPitch90  = ((float) Math.sin(pitch + ninetyRad));
    final float sinPitch270 = ((float) Math.sin(pitch - ninetyRad));

    // Forward: no roll term, because roll rotates around the forward (Z) axis.
    float x = sinYaw * ((float) Math.cos(pitch));
    float y = -((float) Math.sin(pitch));
    float z = cosYaw * ((float) Math.cos(pitch - oneEightyRad));
    forward.set(x, y, z);

    // Multiply in this order: roll, pitch, yaw
    // cos(90) = 0, cos(180) = -1, cos(270) = 0, cos(0) = 1
    // sin(90) = 1, sin(180) = 0, sin(270) = -1, sin(0) = 0

    // hmm... gimbal lock, eh? No!

    // Up:
    x = (cosRoll180 * cosPitch90 * sinYaw) - (sinRoll * cosYaw180);
    y = -sinPitch270 * cosRoll;
    z = (cosRoll * cosPitch90 * cosYaw) + (sinRoll * sinYaw);
    up.set(x, y, z);

    // Right:
    x = (cosRoll * sinYaw90) - (sinRoll * cosPitch90 * cosYaw90);
    y = 0 - (sinRoll * sinPitch90); // this axis works fine
    z = (cosRoll * cosYaw270) + (sinRoll * cosPitch90 * sinYaw270);
    right.set(x, y, z);
}
```
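A way to sanity-check vectors like these (a sketch, not the poster's code; it assumes the same conventions as above: yaw around Y, pitch around X, roll around Z, angles in radians) is to build the combined rotation matrix M = Rz(roll) · Rx(pitch) · Ry(yaw), matching the glRotatef order in `lookThrough()`, and read the camera axes straight from its rows:

```java
// Sketch: derive right/up/forward from the composed rotation matrix
// rather than hand-expanding sin/cos products term by term.
public class LookVectors {
    /** Returns {right, up, forward} in world space. */
    static double[][] lookVectors(double yaw, double pitch, double roll) {
        double cy = Math.cos(yaw),   sy = Math.sin(yaw);
        double cp = Math.cos(pitch), sp = Math.sin(pitch);
        double cr = Math.cos(roll),  sr = Math.sin(roll);
        // Rows of M = Rz(roll) * Rx(pitch) * Ry(yaw). Since M is a pure
        // rotation, its rows are the camera axes expressed in world space;
        // forward is the negated third row (camera looks down -Z).
        double[] right   = { cr * cy - sr * sp * sy, -sr * cp, cr * sy + sr * sp * cy };
        double[] up      = { sr * cy + cr * sp * sy,  cr * cp, sr * sy - cr * sp * cy };
        double[] forward = { cp * sy, -sp, -cp * cy };
        return new double[][]{ right, up, forward };
    }

    static double dot(double[] a, double[] b) {
        return a[0] * b[0] + a[1] * b[1] + a[2] * b[2];
    }

    public static void main(String[] args) {
        double[][] v = lookVectors(Math.toRadians(30), Math.toRadians(45), Math.toRadians(60));
        // The three vectors should be mutually perpendicular unit vectors
        // at ANY roll -- a quick check that catches sign slips like the
        // "strange things near roll 90/270" above.
        System.out.println(dot(v[0], v[1]) + " " + dot(v[0], v[2]) + " " + dot(v[1], v[2]));
    }
}
```

As far as I can tell, expanding these rows reproduces the corrected formulas above (e.g. forward = (cos(pitch)·sin(yaw), -sin(pitch), -cos(pitch)·cos(yaw)) matches the question's forward vector), but with far fewer places for a sign to go wrong.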

Question:

I am trying to rotate a first-person view "camera" around its own coordinates; instead, it is getting rotated around the origin. Here is my current code for the camera translation and rotation.

```
if (Keyboard.isKeyDown(Keyboard.KEY_W)) {
    xMod -= 0.0025f * (float) delta * (float) Math.sin(Math.toRadians(camera.rotation.y));
    zMod += 0.0025f * (float) delta * (float) Math.cos(Math.toRadians(camera.rotation.y));
}
if (Keyboard.isKeyDown(Keyboard.KEY_S)) {
    xMod += 0.0025f * (float) delta * (float) Math.sin(Math.toRadians(camera.rotation.y));
    zMod -= 0.0025f * (float) delta * (float) Math.cos(Math.toRadians(camera.rotation.y));
}
if (Keyboard.isKeyDown(Keyboard.KEY_A)) {
    xMod -= 0.0025f * (float) delta * (float) Math.sin(Math.toRadians(camera.rotation.y - 90));
    zMod += 0.0025f * (float) delta * (float) Math.cos(Math.toRadians(camera.rotation.y - 90));
}
if (Keyboard.isKeyDown(Keyboard.KEY_D)) {
    xMod -= 0.0025f * (float) delta * (float) Math.sin(Math.toRadians(camera.rotation.y + 90));
    zMod += 0.0025f * (float) delta * (float) Math.cos(Math.toRadians(camera.rotation.y + 90));
}

if (Keyboard.isKeyDown(Keyboard.KEY_ESCAPE)) {
    Mouse.setGrabbed(false);
}

if (Mouse.isButtonDown(0)) {
    Mouse.setGrabbed(true);
}

if (Mouse.isGrabbed()) {
    camera.rotation.y += (Mouse.getDX() * 0.005f) * delta;
    camera.rotation.x += (Mouse.getDY() * -0.005f) * delta;
}
if (camera.rotation.x >= 90f) {
    camera.rotation.x = 90f;
} else if (camera.rotation.x <= -90f) {
    camera.rotation.x = -90f;
}

if (Mouse.isGrabbed()) {
    camera.position.x += xMod;
    camera.position.z += zMod;
}

camera.reset();

Matrix4f.translate(camera.rotation, camera.matrix(Camera.VIEWMATRIX), camera.matrix(Camera.VIEWMATRIX));
Matrix4f.scale(camera.scale, camera.matrix(Camera.VIEWMATRIX), camera.matrix(Camera.VIEWMATRIX));
```

camera.reset() does this...

```
public void reset() {
    viewMatrix = new Matrix4f();
}
```

essentially resetting the view matrix.

Also, `camera.rotation` is a `Vector3f`, and `camera.matrix(...)` returns a matrix, either `Camera.VIEWMATRIX` or `Camera.PROJECTIONMATRIX`.

```
Matrix4f.translate(camera.rotation, camera.matrix(Camera.VIEWMATRIX), camera.matrix(Camera.VIEWMATRIX));
```

does not really make sense. I'll just assume you meant `camera.position` here (that would be wrong too, but I'm coming to that later).

With matrix math, `(A * B)^-1` is the same as `B^-1 * A^-1`. So when you want to define the camera that way, you have to apply the transformations in the reverse order (of the reverse order, ending up in the order in which you write things down), but with each transformation inverted: the rotations with negated angles, followed by a translation with the negated position.
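A minimal sketch of that inverse (plain Java with hypothetical names, and a yaw-only camera for brevity): the view transform subtracts the camera position first and then rotates by the negated angle, so the camera's own position always lands at the view-space origin instead of everything orbiting the world origin:

```java
// Sketch: world-to-view as the inverse of the camera's own placement.
// If the camera is placed by translate(pos) * rotY(yaw), the view
// transform is rotY(-yaw) * translate(-pos).
public class ViewTransform {
    static double[] worldToView(double[] p, double[] pos, double yaw) {
        // Undo the translation first...
        double x = p[0] - pos[0], y = p[1] - pos[1], z = p[2] - pos[2];
        // ...then undo the rotation, with the negated angle.
        double c = Math.cos(-yaw), s = Math.sin(-yaw);
        return new double[]{ c * x + s * z, y, -s * x + c * z };
    }

    public static void main(String[] args) {
        double[] camPos = {3, 1, -2};
        // The camera's own position must map to (0, 0, 0) in view space,
        // whatever the yaw -- the hallmark of rotating around the camera
        // rather than around the world origin.
        double[] origin = worldToView(camPos, camPos, Math.toRadians(40));
        System.out.println(origin[0] + " " + origin[1] + " " + origin[2]);
    }
}
```

Done with matrices, the same thing reads as "rotate by -angles, then translate by -position", exactly the order described above.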