Hot questions for Using Lightweight Java Game Library in 2d

Question:

After two hours of googling (here, here, here, here, and here, among a ton of others I won't bother listing), I thought I had finally learned the theory of turning 3D coordinates into 2D coordinates. But it isn't working. The idea is to translate the 3D coordinates of a ship into 2D coordinates on the screen in order to render the username of the player controlling that ship.

However, the text is rendering in the wrong location:

The text is "Test || 2DXCoordinate || 2DZCoordinate".

Here is my getScreenCoords() - Which converts the 3D coordinates to 2D.

public static int[] getScreenCoords(double x, double y, double z) {
    FloatBuffer screenCoords = BufferUtils.createFloatBuffer(4);
    IntBuffer viewport = BufferUtils.createIntBuffer(16);
    FloatBuffer modelView = BufferUtils.createFloatBuffer(16);
    FloatBuffer projection = BufferUtils.createFloatBuffer(16);
    GL11.glGetFloat(GL11.GL_MODELVIEW_MATRIX, modelView);
    GL11.glGetFloat(GL11.GL_PROJECTION_MATRIX, projection);
    GL11.glGetInteger(GL11.GL_VIEWPORT, viewport);
    boolean result = GLU.gluProject((float) x, (float) y, (float) z, modelView, projection, viewport, screenCoords);
    if (result) {
        return new int[] { (int) screenCoords.get(0), (int) screenCoords.get(1) };
    }
    return null;
}

screenCoords.get(0) is returning a perfect X coordinate. However, screenCoords.get(1) is going higher or lower depending on how far away I am from the ship. After many hours of debugging, I have narrowed it down to this line being incorrect:

GLU.gluProject((float) x, (float) y, (float) z, modelView, projection, viewport, screenCoords);

However, I have no idea what is wrong. The X coordinate of the ship is fine, so why not the Y?

According to BDL's answer, I am supplying the "wrong matrix" to gluProject(). But I don't see how that is possible, since I call the method right after I render my ship (which is obviously in whatever matrix draws the ship).

I just can't fathom what is wrong.

Note: BDL's answer is perfectly adequate except that it does not explain why the Y coordinates are incorrect.

Note: This question used to be much longer and much more vague. I have posted my narrowed-down question above after hours of debugging.


Answer:

You have to use the same projection matrix in gluProject that you use for rendering your ship. In your case the ship is rendered using a perspective projection, but when you call gluProject an orthographic projection is used.

General theory about coordinate systems in OpenGL

In most cases the geometry of a model in your scene (e.g. the ship) is given in a model coordinate system. This is the space your vertex coordinates live in. When placing the model in the scene, we apply the model matrix to each vertex to get the coordinates the ship has in the scene. This coordinate system is called world space. When viewing the scene from a given viewpoint with a given viewing direction, another transformation is needed that transforms the scene such that the viewpoint ends up at the origin (0,0,0) and the view direction points along the negative z-axis. This is the view coordinate system. The last step is to transform view coordinates into normalized device coordinates (NDC), which is done via a projection matrix.

In total we get the transformation of a vertex to the screen as:

 v_screen = Projection * View * Model * v_model

In ancient OpenGL (as you are using it), View and Model are stored together in the ModelView matrix.

(I have skipped some details here, such as the perspective divide, but this should be sufficient to understand the problem.)
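
For instance, with the fixed-function pipeline the combined ModelView matrix is typically built like this (just a sketch; the camera and ship variables are hypothetical):

// View part: the inverse of the camera placement.
GL11.glMatrixMode(GL11.GL_MODELVIEW);
GL11.glLoadIdentity();
GL11.glRotatef(-cameraPitch, 1, 0, 0);
GL11.glRotatef(-cameraYaw, 0, 1, 0);
GL11.glTranslatef(-cameraX, -cameraY, -cameraZ);

// Model part: placing the ship in the world.
GL11.glTranslatef(shipX, shipY, shipZ);
GL11.glRotatef(shipHeading, 0, 1, 0);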

Your problem

You already have the world-space position (x,y,z) of your ship, so the transformation by Model has already happened. What is left is

v_screen = Projection * View * v_worldspace

From this we see that, in your case, the ModelView matrix passed to gluProject has to be exactly the View matrix.

I can't tell you where to get the view matrix from, since I don't know that part of your code.
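
One way to do this is to grab the ModelView matrix right after the camera transform has been applied, before any per-ship model transform, and hand that snapshot (together with the perspective projection) to gluProject. A minimal sketch, where applyCamera() and the ship coordinates are hypothetical placeholders:

FloatBuffer viewMatrix = BufferUtils.createFloatBuffer(16);
FloatBuffer perspProjection = BufferUtils.createFloatBuffer(16);
IntBuffer viewport = BufferUtils.createIntBuffer(16);
FloatBuffer screenCoords = BufferUtils.createFloatBuffer(3);

GL11.glMatrixMode(GL11.GL_MODELVIEW);
GL11.glLoadIdentity();
applyCamera(); // camera rotation/translation only, no ship transform yet

// Snapshot the pure View matrix and the perspective Projection matrix now.
GL11.glGetFloat(GL11.GL_MODELVIEW_MATRIX, viewMatrix);
GL11.glGetFloat(GL11.GL_PROJECTION_MATRIX, perspProjection);
GL11.glGetInteger(GL11.GL_VIEWPORT, viewport);

// ... render the ship with its own model transform, switch to ortho for the text, etc. ...

// Project the ship's world-space position using the snapshots, not whatever is bound now.
boolean ok = GLU.gluProject((float) shipX, (float) shipY, (float) shipZ,
        viewMatrix, perspProjection, viewport, screenCoords);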

Question:

Similar to the game Factorio, I'm trying to create "3D" terrain, but of course in 2D. Factorio seems to do this very well, creating terrain that looks like this:

You can see the edges of the terrain, and it is very clearly curved. In my own 2D game I've been trying to think of how to do the same thing, but all the approaches I can come up with seem slow or CPU intensive. Currently my terrain looks like this:

It is simply 2D quads, textured and drawn on screen; each quad is 16x16 (except the water, which is technically a background, but that's not important right now). How could I even begin to change my terrain to look more like Factorio or other "2.5D" games? Do they simply use different textures and check where a tile is relative to the tiles around it, or do they take a different approach?

Thanks in advance for your help!


Answer:

The examples you provided are still 2D textures (technically). But since the textures themselves are drawn with a 'fancy 3D' look, they appear to be angled 3D/2D.

So your best bet would be to upgrade your textures (and add shadows to entities for extra depth).

Edit:

The edges you asked about are probably laid out by checking whether a 'tile' is an edge, and if so an edge texture is drawn on top of the background, while the actual tile itself is also a flat image (just like the water). Add some shadow afterwards and the 3D illusion is complete.
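
A rough sketch of that idea (the grid size, isWater() check and drawTexture() helper are hypothetical placeholders): for every ground tile, look at its four neighbours and overlay an edge texture on each side that borders water.

// For each ground tile, draw the flat base and then overlay edge strips
// on every side that touches water.
for (int x = 0; x < worldWidth; x++) {
    for (int y = 0; y < worldHeight; y++) {
        if (isWater(x, y)) continue;            // water is just the background
        drawTexture("ground", x * 16, y * 16);  // flat 16x16 base tile

        if (isWater(x, y - 1)) drawTexture("edge_top",    x * 16, y * 16);
        if (isWater(x, y + 1)) drawTexture("edge_bottom", x * 16, y * 16);
        if (isWater(x - 1, y)) drawTexture("edge_left",   x * 16, y * 16);
        if (isWater(x + 1, y)) drawTexture("edge_right",  x * 16, y * 16);
    }
}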

I hope this answers your question; otherwise feel free to ask for clarification.

Question:

I am currently converting the 3D coordinates of an object to 2D coordinates, and then drawing 2D text over them (at the moment, the object name):

public static int[] getScreenCoords(double x, double y, double z) {
    FloatBuffer screenCoords = BufferUtils.createFloatBuffer(4);
    IntBuffer viewport = BufferUtils.createIntBuffer(16);
    FloatBuffer modelView = BufferUtils.createFloatBuffer(16);
    FloatBuffer projection = BufferUtils.createFloatBuffer(16);
    GL11.glGetFloat(GL11.GL_MODELVIEW_MATRIX, modelView);
    GL11.glGetFloat(GL11.GL_PROJECTION_MATRIX, projection);
    GL11.glGetInteger(GL11.GL_VIEWPORT, viewport);
    boolean result = GLU.gluProject((float) (x), (float) y, (float) (z), modelView, projection, viewport, screenCoords);
    if (result) {
        return new int[] { (int) screenCoords.get(0), (int) screenCoords.get(1)};
    }
    return null;
}

This works correctly. Names are successfully placed above objects.

The problem is that if I look in the opposite direction, I see "ghost" names on the other side:

How can I stop this problem? Is there some way I can detect if I'm looking away from them and not render them?


Answer:

Check the sign of the screen's z-value screenCoords.get(2) (look/front is in negative z-direction):

...
boolean result = GLU.gluProject((float) (x), (float) y, (float) (z), modelView, projection, viewport, screenCoords);
if (result && screenCoords.get(2) < 0) {
    return new int[] { (int) screenCoords.get(0), (int) screenCoords.get(1)};
}
return null;

Question:

I'm trying to make a texture from an array of integers using the code from these two classes. When I bind the texture I just get black.

public class PTexture {

private int id;
private int width;
private int height;

public PTexture(int id, int width, int height)
{
    this.id = id;
    this.width = width;
    this.height = height;
}

public Vector2f[] getRectPortionCoords(int x, int y, int w, int h)
{
    Vector2f[] res = new Vector2f[4];

    res[0] = new Vector2f((float)x / w, (float)y / height);
    res[1] = new Vector2f((float)(x + w) / width,(float)y / height);
    res[2] = new Vector2f((float)(x + w) / width, (float)(y + h) / height);
    res[3] = new Vector2f((float)x / w, (float)(y + h) / height);

    return res;
}

public void bind()
{
    glBindTexture(GL_TEXTURE_2D, id);
}

public int getId() {
    return id;
}

public int getWidth() {
    return width;
}

public int getHeight() {
    return height;
}

public void setId(int id) {
    this.id = id;
}

}

public class TerrainBlend extends PTexture {

private int[] pixels;

public TerrainBlend(int id, int width, int height) {
    super(id, width, height);
    pixels = new int[width * height];
}

public void genPixelsFromHeight(float[][] hMap, Vector3f colorHeights)
{
    for (int y = 0; y < getHeight(); y++)
    {
        for (int x = 0; x < getWidth(); x++)
        {
            if (hMap[x][y] >= colorHeights.getX())
            {
                genPixel(x, y, 0xFF0000FF);
                if (hMap[x][y] >= colorHeights.getY())
                {
                    genPixel(x, y, 0x00FF00FF);
                    if (hMap[x][y] >= colorHeights.getZ())
                    {
                        genPixel(x, y, 0x0000FFFF);
                    }
                }
            }
        }
    }

    genTexture();
}

private void genPixel(int x, int y, int color)
{
    pixels[x + y * getWidth()] = color;
}

public void genTexture()
{   
    IntBuffer iBuffer = BufferUtils.createIntBuffer(getWidth() * getHeight() * 4);
    for (int i = 0; i < pixels.length; i++)
    {
        iBuffer.put(pixels[i]);
    }
    iBuffer.position(0);

    setId(glGenTextures());
    glBindTexture(GL_TEXTURE_2D, getId());
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, getWidth(), getHeight(), 0, GL_RGBA, GL_UNSIGNED_INT, iBuffer);
}

}

I've made sure the values in the array are correct, and they are, so I'm assuming the OpenGL code is wrong in some way.


Answer:

You have a couple of problems:

  • The type you pass to the glTexImage2D() will not work the way you probably intended:

    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, getWidth(), getHeight(), 0,
                 GL_RGBA, GL_UNSIGNED_INT, iBuffer);
    

    Using GL_UNSIGNED_INT as the type together with the GL_RGBA format means that one GLuint value from the buffer will be used for each component of the texture. That is, four values will be consumed per texel, one each for R, G, B, and A.

    Based on how you build the values, it looks like you have the RGBA components for each texel packed in one value. To have the data interpreted that way, you need to specify a format that specifies packed values:

    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, getWidth(), getHeight(), 0,
                 GL_RGBA, GL_UNSIGNED_INT_8_8_8_8_REV, iBuffer);
    
  • You need to specify texture parameters. In particular, the default sampling state assumes that the texture has mipmaps, which your texture does not. At the very least, you need a call that sets the minification filter to not use mipmaps:

    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
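
    For reference, a complete minimal sampling setup for this texture could look like the following (just a sketch; only the MIN_FILTER line is strictly required to fix the mipmap issue, and GL_REPEAT merely makes the default wrap mode explicit):

    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);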
    

Question:

I have been trying for hours to get a Texture in LWJGL to stretch to a quad.

Here is the code I am using for the quad:

private static void renderLoad() {
    glClear(GL11.GL_COLOR_BUFFER_BIT | GL11.GL_DEPTH_BUFFER_BIT);

    texture.bind();

    glPushMatrix();{
    glBegin(GL_QUADS);
    {
        glTexCoord2f(0, 1);
        glVertex2f(0, 0); //Upper-left

        glTexCoord2f(1, 1);
        glVertex2f(Display.getWidth(), 0); //Upper-right

        glTexCoord2f(1, 0);
        glVertex2f(Display.getWidth(), Display.getHeight()); //Bottom-right

        glTexCoord2f(0, 0);
        glVertex2f(0, Display.getHeight()); //Bottom-left
    }
    glEnd();
    }glPopMatrix();
}

This is what the display looks like when I run it: http://gyazo.com/376ddb0979c55226d2f63c26215a1e12

I am trying to make the image expand to the size of the window. The quad is the size of the window, but the texture does not seem to stretch.

Here is what it looks like if I do not use a texture and I simple make the quad a color: http://gyazo.com/65f21fe3efa2d3948de69b55d5c85424

If it helps, here is my main loop:

glMatrixMode(GL_PROJECTION);
        glViewport(0, 0, displaySizeX, displaySizeY);
        glLoadIdentity();
        glOrtho(0, displaySizeX, 0, displaySizeY, 1, -1);
        glMatrixMode(GL_MODELVIEW);
        glEnable(GL_TEXTURE_2D);

        texture = loadLoadingImage();

        //This is the main loop for the game.
        while(!Display.isCloseRequested()){
            delta = getDelta();
            updateFPS();
            if(Display.wasResized()){
                displaySizeX = Display.getWidth();
                displaySizeY = Display.getHeight();
                glViewport(0, 0, displaySizeX, displaySizeY);
                glMatrixMode(GL_PROJECTION);
                glLoadIdentity();
                glOrtho(0, displaySizeX, 0, displaySizeY, -1, 1);
            }
            render();
            checkInput();
            Display.update();
            Display.sync(sync);
        }

        cleanUp();
        return true;

How do I make the image stretch to the quad?


Answer:

public void stretch() {
        Color.white.bind();
        texture.bind();

        GL11.glBegin(GL11.GL_QUADS);
        GL11.glTexCoord2f(0,0);
        GL11.glVertex2f(100,100);
        GL11.glTexCoord2f(1,0);
        GL11.glVertex2f(100+texture.getTextureWidth(),100);
        GL11.glTexCoord2f(1,1);
        GL11.glVertex2f(100+texture.getTextureWidth(),100+texture.getTextureHeight());
        GL11.glTexCoord2f(0,1);
        GL11.glVertex2f(100,100+texture.getTextureHeight());

        GL11.glEnd(); // all the 0's were originally 100 but it was off centered
    }

texture = TextureLoader.getTexture("PNG",ResourceLoader.getResourceAsStream("res/texture.png"));

Try something like this. This is usually how I do it.
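
If the goal is to fill the window rather than draw at the texture's pixel size, a variant like this might work (a sketch, assuming a slick-util Texture, whose getWidth()/getHeight() return the fraction of the padded power-of-two texture that the image actually covers):

// Stretch the image over the whole window, compensating for the padding
// slick-util adds around non-power-of-two images.
texture.bind();
GL11.glBegin(GL11.GL_QUADS);
GL11.glTexCoord2f(0, 0);
GL11.glVertex2f(0, 0);
GL11.glTexCoord2f(texture.getWidth(), 0);
GL11.glVertex2f(Display.getWidth(), 0);
GL11.glTexCoord2f(texture.getWidth(), texture.getHeight());
GL11.glVertex2f(Display.getWidth(), Display.getHeight());
GL11.glTexCoord2f(0, texture.getHeight());
GL11.glVertex2f(0, Display.getHeight());
GL11.glEnd();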

Question:

I have an OpenGL texture, and I would like to change the RGBA values of its pixels at run time, doing the modification on the CPU side. I would like to create a function that changes a pixel at selected coordinates in the texture to a selected RGBA value.

I have tried the following:

glTexSubImage2D(GL_TEXTURE_2D,0,x,y,1,1,GL_RGBA,GL_UNSIGNED_BYTE,data);

where x and y are the coordinates of the modified pixel and data is an int array containing red, green, blue, and alpha. However, I am not sure if I have used the correct parameters, because the texture does not change when I use this. I want a function that, using glTexSubImage2D, changes a pixel's color in a texture at specified coordinates to a specified color.


Answer:

You have to create a direct buffer to pass the data to glTexSubImage2D.

I recommend creating a ByteBuffer, something like this:

// assuming 'data' here is a byte[] holding the R, G, B, A values of the pixel
ByteBuffer buffer = ByteBuffer.allocateDirect(data.length);
buffer.put(data);
buffer.flip();
glTexSubImage2D(GL_TEXTURE_2D,0,x,y,1,1,GL_RGBA,GL_UNSIGNED_BYTE,buffer);

If data is already a direct buffer but is an IntBuffer, then the 8th parameter of glTexSubImage2D, which specifies the data type of a single color channel, has to be GL_UNSIGNED_INT or GL_INT:

glTexSubImage2D(GL_TEXTURE_2D,0,x,y,1,1,GL_RGBA,GL_UNSIGNED_INT,data);
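
Putting this together, a small helper could look like the following (just a sketch; it assumes the texture is already bound, the color is given as separate 0-255 channel values, and it uses LWJGL's BufferUtils to get a direct buffer):

public static void setTexturePixel(int x, int y, int r, int g, int b, int a) {
    // Pack the four channel values into a 4-byte direct buffer and upload them.
    ByteBuffer pixel = BufferUtils.createByteBuffer(4);
    pixel.put((byte) r).put((byte) g).put((byte) b).put((byte) a);
    pixel.flip();
    glTexSubImage2D(GL_TEXTURE_2D, 0, x, y, 1, 1, GL_RGBA, GL_UNSIGNED_BYTE, pixel);
}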

Question:

I am trying to make a 2D game in LWJGL. I am having a problem with terrain generation: I currently have an algorithm to generate terrain, but it is always random and I can never get the same world again. I would like an algorithm that generates the x and y coordinates based on a given number (a seed).

My current world generation looks like this:

     final float STEP_MAX = 1f;
     final float STEP_CHANGE = 1;
     final int HEIGHT_MAX = 100;

     double height = HEIGHT_MAX;
     double slope = STEP_MAX;

     for (int x = -WORLDSIZE; x < WORLDSIZE; x++) {
          height += slope;
          slope += (Math.random() * STEP_CHANGE) * 2 - STEP_CHANGE;

          if (slope > STEP_MAX)  slope = STEP_MAX;
          if (slope < -STEP_MAX) slope = -STEP_MAX;

          if (height > HEIGHT_MAX) { 
              height = HEIGHT_MAX;
              slope *= -1;
          }
          if (height < 0) { 
              height = 0;
              slope *= -1;
          }
          Tile newTile = new Tile(x*25,(int)height*25,25,25,TileType.Grass);
          tiles.add(newTile);

Thank you in advance for your help.


Answer:

If you create your random number generator yourself (rather than letting Math.random() do so for you), you can specify a seed:

Random random = new Random(yourSeed);
random.nextDouble();

The Random class also has many other useful methods you might want to look at.

More info: https://docs.oracle.com/javase/8/docs/api/java/util/Random.html
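
Applied to the generation loop from the question, only the random source changes (a sketch; worldSeed is whatever number you pick to identify the world):

// The same terrain loop, but reproducible: the same seed gives the same world.
Random random = new Random(worldSeed);

for (int x = -WORLDSIZE; x < WORLDSIZE; x++) {
    height += slope;
    slope += (random.nextDouble() * STEP_CHANGE) * 2 - STEP_CHANGE;
    // ... clamping and tile creation exactly as before ...
}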

Question:

I was wondering how I could create 2D lights like here:

https://www.youtube.com/watch?v=mVlYsGOkkyM

And here:

https://www.youtube.com/watch?v=nSf1MpsWKig

I'm not currently interested in the shadows. I tried some things but they don't seem to work, so all I currently have is a fragment and a vertex shader with hardly anything in them.


Answer:

Lighting is one of the central problems of computer graphics, and there are several commonly used ways of achieving lights:

The simpler but limited way is called "forward shading". The general idea is that you give all the information about the lighting (ambient light, light positions and colors, etc.) to the shader that renders your geometry, and the lighting is computed directly on each surface you render. The limitation is that you can only pass a fixed number of lights to the shader.

The other way is called "deferred shading" and it is commonly used in modern game engines. Instead of lighting the geometry as you render it, you only collect the relevant data for each pixel of the geometry (position, color, normal, etc.) and store it in a framebuffer. Then you can use that data to render as many lights as you want. OGLdev has a nice tutorial on deferred shading, but if you are a beginner you probably want to avoid it, as it is quite difficult to set up and is slow on old hardware. http://ogldev.atspace.co.uk/www/tutorial35/tutorial35.html

Also, the general lighting formula in GLSL is:

// Vector from the current pixel to the light
vec3 toLight = (lightpos - pixelpos);

// This computes how much is the pixel lit based on where it faces
float brightness = clamp(dot(normalize(toLight), pixelnormal), 0.0, 1.0);

// If it faces towards the light it is lit fully, if it is perpendicular
// to the direction towards the light then it is not lit at all.

// This reduces the brightness based on the distance from the light and the light's radius
brightness *= clamp(1.0 - (length(toLight) / lightradius), 0.0, 1.0);
// The final color of the pixel.
vec3 finalcolor = pixelcolor * lightcolor * brightness;
// If you have multiple lights multiply the pixel's color by the combined color of all lights
// like:
finalcolor = pixelcolor * (lightcolor1 * brightness1 + lightcolor2 * brightness2);

// Note that some things are clamped to avoid going into negative values
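
If you want to try the same math out on the CPU first for a purely 2D scene, where every pixel's normal effectively faces the viewer, a small Java sketch of the distance-falloff part could look like this (the names are made up for illustration):

// Brightness contribution of a single 2D point light at (lx, ly).
static float lightBrightness(float px, float py, float lx, float ly, float lightRadius) {
    float dx = lx - px, dy = ly - py;
    float dist = (float) Math.sqrt(dx * dx + dy * dy);
    // In flat 2D the facing term is taken as 1, so only distance attenuates.
    float attenuation = 1.0f - (dist / lightRadius);
    return Math.max(0.0f, Math.min(1.0f, attenuation));
}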

Question:

I am planning to use OpenGL to render a video stream.

The first thing I do after receiving the first frame of the video is to allocate a direct byte buffer and put all the frame fragments into it. The ByteBuffer is allocated only once.

directBuffer = ByteBuffer.allocateDirect(frameSize * fragmentCount);

When all the frame fragments are in place, I pass the ByteBuffer to the OpenGL renderer:

public ByteBuffer getBuffer() {
    buffer.rewind();
    fragments.stream().forEach((frameFragment) -> {
        for (byte byteFragment : frameFragment.getFrameData()) {
            buffer.put(byteFragment);
        }
    });
    buffer.flip();
    return buffer;
}

A blocking queue in the main scene loop waits for a frame to be ready, and then the scene is rendered:

ByteBuffer frame = framesQueue.take();

Afterwards I clear the scene, set up the viewport, and so on:

            glClear(GL_COLOR_BUFFER_BIT);
            glColor3f(1, 1, 1);
            glMatrixMode(GL_PROJECTION);
            glPushMatrix();
            glLoadIdentity();
            glOrtho(-480, 480, -270, 270, -1, 1);
            glPushMatrix();
            glViewport(0, 0, 768, 576);

When that is done, I'm ready to draw a textured quad onto the scene.

    glBindTexture(GL_TEXTURE_2D, glGenTextures());
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_BORDER);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_BORDER);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE, 768, 576, 0, GL_LUMINANCE, GL_UNSIGNED_BYTE, buffer);
    glBegin(GL_QUADS);
    {
        glTexCoord2f(0.0f, 0.0f);
        glVertex2f(0.0f, 0.0f);

        glTexCoord2f(1.0f, 0.0f);
        glVertex2f(768, 0.0f);

        glTexCoord2f(1.0f, 1.0f);
        glVertex2f(768, 576);

        glTexCoord2f(0.0f, 1.0f);
        glVertex2f(0.0f, 576);
    }
    glEnd();

The program runs, the video is pretty smooth, and it has reasonably low latency (which was the main concern).

The problem is that the call

        glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE, 768, 576, 0, GL_LUMINANCE, GL_UNSIGNED_BYTE, buffer);

is causing a memory leak.

The Java heap space seems fine, but the memory usage of the Java process keeps growing indefinitely.

As a test, I commented out this call:

   glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE, 768, 576, 0, GL_LUMINANCE, GL_UNSIGNED_BYTE, buffer);

and the memory leak did not occur. I also tried the glDrawPixels method, which also helped, but I think using textures is the way to go here, not the deprecated glDrawPixels.

How can I solve the memory leak? Alternatively, what other efficient ways are there to display a new texture on the scene every 40 ms? Latency is critical.


Answer:

This call seemed to be the problem:

glBindTexture(GL_TEXTURE_2D, glGenTextures());

Since I'm just using a single texture, the call can be replaced with

glBindTexture(GL_TEXTURE_2D, 0);

That prevents OpenGL from creating a NEW texture on every call.
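
Another option that may be worth trying is to allocate the texture once at startup and then only update its contents each frame with glTexSubImage2D, which avoids re-specifying the storage every 40 ms. A sketch, where videoTextureId would be a field created during initialization:

// One-time setup:
int videoTextureId = glGenTextures();
glBindTexture(GL_TEXTURE_2D, videoTextureId);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE, 768, 576, 0,
             GL_LUMINANCE, GL_UNSIGNED_BYTE, buffer); // first upload allocates the storage

// Every frame afterwards: re-upload the pixels into the existing texture.
glBindTexture(GL_TEXTURE_2D, videoTextureId);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 768, 576,
                GL_LUMINANCE, GL_UNSIGNED_BYTE, buffer);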

Question:

I am trying to move a projectile in the direction indicated by the mouse position on the screen. I have already converted the mouse coordinates into in-game coordinates; however, I can't figure out how to correctly move the projectile in the proper direction. I am trying to use a slope to move the projectile, but it doesn't seem to grab the correct slope, so the projectiles end up flying in completely wrong directions. Here are some bits of the code I am using. Any help would be greatly appreciated, as I am a little over my head on this.

NOTE: The projectile does NOT follow the mouse. It should save the coordinates and then head in that direction; note that it can also travel past the given coordinates at the same rate.

Entity Creation

int[] mousePos = MouseManager.getCalculatedMouseCoordinates();
                float deltaX = mousePos[0] - GameManager.x;
                float deltaY = mousePos[1] - GameManager.y;
                float m = deltaY/deltaX;
                System.out.println(m);
                GameManager.currentWorld.addEntity(new EntityProjectile(GameManager.x, GameManager.y, 30, m, 50, "fireball"));

Projectile Class

    package UnNamedRpg.Player.Entity;

public class EntityProjectile {

    private double x, y;
    private int entityID = -1;
    private int speed;
    private double headerX, headerY;
    private int renderHeading;
    private double range, currentRange = 0;
    private String texture;
    private double factor = -1;
    public EntityProjectile(double startX, double startY, int speed, double headerX, double headerY, double range, String texture){
        setX(startX);
        setY(startY);
        setSpeed(speed);
        setHeaderX(headerX);
        setHeaderY(headerY);
        setTexture(texture);
        setRange(range);
    }

    public void doTick(){
        double vx = this.x - this.headerX;
        double vy = this.y - this.headerY;
        if(this.factor == -1){
            double length = Math.sqrt((vx*vx) + (vy*vy));
            double factor = this.speed / length;
            this.factor = factor;
        }
        vx *= factor;
        vy *= factor;
        this.x = vx;
        this.y = vy;
    }

    public int getSpeed() {
        return speed;
    }

    public void setSpeed(int speed) {
        this.speed = speed;
    }

    public int getRenderHeading() {
        return renderHeading;
    }

    public void setRenderHeading(int renderHeading) {
        this.renderHeading = renderHeading;
    }

    public int getEntityID() {
        return entityID;
    }

    public void setEntityID(int entityID) {
        this.entityID = entityID;
    }

    public double getX() {
        return x;
    }

    public void setX(double x) {
        this.x = x;
    }

    public double getY() {
        return y;
    }

    public void setY(double y) {
        this.y = y;
    }

    public String getTexture() {
        return texture;
    }

    public void setTexture(String texture) {
        this.texture = texture;
    }

    public double getRange() {
        return range;
    }

    public void setRange(double range) {
        this.range = range;
    }

    public double getCurrentRange() {
        return currentRange;
    }

    public void setCurrentRange(double currentRange) {
        this.currentRange = currentRange;
    }

    public double getHeaderX() {
        return headerX;
    }

    public void setHeaderX(double headerX) {
        this.headerX = headerX;
    }

    public double getHeaderY() {
        return headerY;
    }

    public void setHeaderY(double headerY) {
        this.headerY = headerY;
    }

    public double getFactor() {
        return factor;
    }

    public void setFactor(double factor) {
        this.factor = factor;
    }
}

Update Position Method

--Now called in the EntityProjectile class every tick instead of happening in the world tick.


Answer:

Moving towards a given point is relatively simple with some basic vector math. The vector you want to move along is calculated simply by coordinate subtraction:

vx = mouseX - objectX
vy = mouseY - objectY

But you probably don't want to move your object all the way there in one go, so you need to scale the vector to a desired length (which equals the speed per game tick). The current length of the vector is obtained with the Pythagorean theorem: sqrt(vx * vx + vy * vy). To scale the vector to a given length, just multiply both components by the required factor:

double targetLength = 5.0; // chosen arbitrarily
double length = Math.sqrt(vx * vx + vy * vy);
double factor = targetLength / length;
vx *= factor;
vy *= factor;

There you have your speed components x,y to be used as delta per game tick. The 5.0 is the "speed" at which the object will move per tick.

EDIT: @Cyphereion Regarding the length of the vector: geometrically speaking, that is the hypotenuse of the right triangle formed by vx and vy, see https://en.wikipedia.org/wiki/Pythagorean_theorem (considered common knowledge).

Once you have that, you just need to adjust the length of each component by figuring out a scaling factor that makes the vector's length come out as the desired "speed". The original values of the components (vx, vy) represent a vector (see: https://en.wikipedia.org/wiki/Euclidean_vector#Representations) encoding the direction to move in.

Scaling the vector's length adjusts the speed at which your object moves when you apply the vector's components as a delta to its position (which is just vector addition). I had swapped the division length/targetLength around initially (now fixed), so the speed variable had a reversed meaning (larger = slower instead of larger = faster).
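
Applied to the EntityProjectile from the question, a corrected doTick() could look like this (a sketch; it keeps the question's field names, assumes headerX/headerY hold the saved target coordinates, and adds two hypothetical fields dirX/dirY so the projectile keeps flying past the target at a constant rate):

public void doTick() {
    if (this.factor == -1) {
        // Compute the direction from the start position towards the target only once.
        double dx = this.headerX - this.x;
        double dy = this.headerY - this.y;
        double length = Math.sqrt(dx * dx + dy * dy);
        this.factor = this.speed / length;
        this.dirX = dx * this.factor; // per-tick movement delta
        this.dirY = dy * this.factor;
    }
    // Add the delta instead of replacing the position with it.
    this.x += this.dirX;
    this.y += this.dirY;
}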

Question:

I'm trying to make a 2D collision detection system for my game in Java/LWJGL/OpenGL.

My problem is that glReadPixels() is acting strangely and I don't know what I am doing wrong.

The problem is that it usually gives back the correct RGBA values of the pixel, but sometimes it gives back negative numbers or colors that aren't on my screen. (For alpha I always get -1.)

What can cause this problem?

My code:

    int size = 10;

    ByteBuffer pixels = BufferUtils.createByteBuffer(width * height * 4);

    glReadPixels(100, 500, size, size, GL_RGBA, GL_UNSIGNED_BYTE, pixels);

I'm wondering whether I'm using the wrong parameters (GL_RGBA, GL_UNSIGNED_BYTE)?

What should I use?


Answer:

If someone has the same problem: for me, adding this helped:

int red = (pixels.get(0) & 0xFF);

It masks off the bits I don't need: Java's byte type is signed, so & 0xFF converts the raw byte back to an unsigned value in the range 0-255. Now I get the correct color every time.
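
The same masking can be applied to every channel when you need the full color (a sketch, reading the first pixel of the buffer from the question):

// Java bytes are signed, so mask each channel back into the 0-255 range.
int red   = pixels.get(0) & 0xFF;
int green = pixels.get(1) & 0xFF;
int blue  = pixels.get(2) & 0xFF;
int alpha = pixels.get(3) & 0xFF;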

Question:

OK, that OpenGL state machine is kidding me! I'm serious! Just a few days ago everything worked, with immediate mode and even VBO mode, but not today. Today I'm seeing a white quad, because I rewrote 80% of my old code. So I need your help. Here is my GL call tracer output:

[17.02.2015 17:45:47] [--------GAME_STARTED--------]
[17.02.2015 17:45:49] [GLTrace] glGetBoolean(GL_COLOR_ARRAY) -> false
[17.02.2015 17:45:49] [GLTrace] glGetBoolean(GL_TEXTURE_2D) -> false
[17.02.2015 17:45:49] [GLTrace] glGetBoolean(GL_TEXTURE_COORD_ARRAY) -> false
[17.02.2015 17:45:49] [GLTrace] glGetBoolean(GL_VERTEX_ARRAY) -> false
[17.02.2015 17:45:49] [GLTrace] glGetBoolean(GL_INDEX_ARRAY) -> false
[17.02.2015 17:45:49] [GLTrace] glGetBoolean(GL_NORMAL_ARRAY) -> false
[17.02.2015 17:45:49] [GLTrace] glGetBoolean(GL_NORMALIZE) -> false
[17.02.2015 17:45:49] [GLTrace] glGetInteger(GL_MATRIX_MODE) -> GL_MODELVIEW
[17.02.2015 17:45:49] [GLTrace] glGetBoolean(GL_DEPTH_TEST) -> false
[17.02.2015 17:45:49] [GLTrace] glGetBoolean(GL_ALPHA_TEST) -> false
[17.02.2015 17:45:49] [GLTrace] glGetBoolean(GL_STENCIL_TEST) -> false
[17.02.2015 17:45:49] [GLTrace] glGetInteger(GL_DEPTH_FUNC) -> GL_LESS
[17.02.2015 17:45:49] [GLTrace] glGetInteger(GL_CULL_FACE_MODE) -> GL_BACK
[17.02.2015 17:45:49] [GLTrace] glGetBoolean(GL_BLEND) -> false
[17.02.2015 17:45:49] [GLTrace] glGetInteger(GL_ARRAY_BUFFER_BINDING) -> 0
[17.02.2015 17:45:49] [GLTrace] glGetInteger(GL_ELEMENT_ARRAY_BUFFER_BINDING) -> 0
[17.02.2015 17:45:49] [GLTrace] glGetInteger(GL_TEXTURE_BINDING_2D) -> GL_CURRENT_BIT
[17.02.2015 17:45:49] [GLTrace] glEnable(GL_DEPTH_TEST) -> DONE
[17.02.2015 17:45:49] [GLTrace] glDisable(GL_CULL_FACE) -> DONE
[17.02.2015 17:45:49] [GLTrace] glCullFace(Off) -> DONE
[17.02.2015 17:45:49] [GLTrace] glDepthFunc(LessOrEqual) -> DONE
[17.02.2015 17:45:49] [GLTrace] glBindBuffer(VertexArray, -1) -> DONE
[17.02.2015 17:45:49] [GLTrace] glBindBuffer(ElementArray, -1) -> DONE
[17.02.2015 17:45:49] [GLTrace] glBindTexture(Texture2D, -1) -> DONE
[17.02.2015 17:45:49] [GLTrace] glViewport(0, 0, 800, 600) -> DONE
[17.02.2015 17:45:49] [GLTrace] glGetInteger(GL_MAX_TEXTURE_SIZE, java.nio.DirectIntBufferU[pos=0 lim=16 cap=16]) -> DONE
[17.02.2015 17:45:49] [GLTrace] glGenTextures() -> 2
[17.02.2015 17:45:49] [GLTrace] glBindTexture(Texture2D, 2) -> DONE
[17.02.2015 17:45:49] [GLTrace] glTexParameteri(Texture2D, GL_TEXTURE_BASE_LEVEL, 0) -> DONE
[17.02.2015 17:45:49] [GLTrace] glTexParameteri(Texture2D, GL_TEXTURE_MAX_LEVEL, 0) -> DONE
[17.02.2015 17:45:49] [GLTrace] [Setting wrap mode: Clamp]
[17.02.2015 17:45:49] [GLTrace] glTexParameteri(Texture2D, GL_TEXTURE_WRAP_S, GL_CLAMP) -> DONE
[17.02.2015 17:45:49] [GLTrace] glTexParameteri(Texture2D, GL_TEXTURE_WRAP_T, GL_CLAMP) -> DONE
[17.02.2015 17:45:49] [GLTrace]  -> DONE
[17.02.2015 17:45:49] [GLTrace] [Setting filter mode: Nearest]
[17.02.2015 17:45:49] [GLTrace] glTexParameteri(Texture2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST) -> DONE
[17.02.2015 17:45:49] [GLTrace] glTexParameteri(Texture2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST) -> DONE
[17.02.2015 17:45:49] [GLTrace]  -> DONE
[17.02.2015 17:45:49] [GLTrace] glTexImage2D(Texture2D, 0, RGBA, 256, 256, 0, RGBA, UnsignedByte, java.nio.DirectByteBuffer[pos=0 lim=262144 cap=262144]) -> DONE
[17.02.2015 17:45:49] [GLTrace] glClear(ColorAndDepth) -> DONE
[17.02.2015 17:45:49] [GLTrace] glLoadIdentity() -> DONE
[17.02.2015 17:45:49] [GLTrace] glMatrixMode(Projection) -> DONE
[17.02.2015 17:45:49] [GLTrace] glLoadIdentity() -> DONE
[17.02.2015 17:45:49] [GLTrace] gluPerspective(70.0, 1.3333334, 0.001, 5000.0) -> DONE
[17.02.2015 17:45:49] [GLTrace] glMatrixMode(ModelView) -> DONE
[17.02.2015 17:45:49] [GLTrace] glPushMatrix() -> DONE
[17.02.2015 17:45:49] [GLTrace] glBegin(Quads) -> DONE
[17.02.2015 17:45:49] [GLTrace] glTexCoord2f(0.0, 0.0) -> DONE
[17.02.2015 17:45:49] [GLTrace] glVertex3f(0.0, 0.0) -> DONE
[17.02.2015 17:45:49] [GLTrace] glTexCoord2f(1.0, 0.0) -> DONE
[17.02.2015 17:45:49] [GLTrace] glVertex3f(1.0, 0.0) -> DONE
[17.02.2015 17:45:49] [GLTrace] glTexCoord2f(1.0, 1.0) -> DONE
[17.02.2015 17:45:49] [GLTrace] glVertex3f(1.0, 1.0) -> DONE
[17.02.2015 17:45:49] [GLTrace] glTexCoord2f(0.0, 1.0) -> DONE
[17.02.2015 17:45:49] [GLTrace] glVertex3f(0.0, 1.0) -> DONE
[17.02.2015 17:45:49] [GLTrace] glEnd() -> DONE
[17.02.2015 17:45:49] [GLTrace] glPopMatrix() -> DONE
[17.02.2015 17:45:49] [GLTrace] glLoadIdentity() -> DONE
[17.02.2015 17:45:49] [GLTrace] glMatrixMode(Projection) -> DONE
[17.02.2015 17:45:49] [GLTrace] gluOrtho2D(0.0, 800.0, 600.0, 0.0) -> DONE
[17.02.2015 17:45:49] [GLTrace] glMatrixMode(ModelView) -> DONE
[17.02.2015 17:45:49] [GLTrace] glTranslatef(0.375, 0.375, 0.0) -> DONE
[17.02.2015 17:45:49] [GLTrace] glDisable(GL_DEPTH_TEST) -> DONE
[17.02.2015 17:45:49] [GLTrace] glBindTexture(Texture2D, -1) -> DONE
[17.02.2015 17:45:49] [---------GAME_ENDED---------]

This is my immediate-mode quad rendering - all white! Why is that? I'm pretty sure slick-util loaded the image correctly; the image is valid (it was tested just two days ago). There may be some changes I made (a global rendering system refactoring)... I'm not using slick-util's Texture and TextureImpl. Also, I'm not using mipmaps.

This is my implementation of that piece of code:

public class Texture
{

    protected int width, height, texWidth, texHeight, depth;
    protected boolean alpha;
    protected WrapMode wrapMode = WrapMode.Clamp;
    protected FilterMode filterMode = FilterMode.Nearest;
    protected TextureBuffer buffer;
    protected PixelFormat dstPixelFormat = PixelFormat.RGBA, srcPixelFormat;

    public Texture(LoadableImageData imageData, WrapMode wrapMode, FilterMode filterMode)
    {
        this.width = imageData.getWidth();
        this.height = imageData.getHeight();
        this.texWidth = imageData.getTexWidth();
        this.texHeight = imageData.getTexHeight();
        this.depth = imageData.getDepth();
        this.wrapMode = wrapMode;
        this.filterMode = filterMode;
        this.alpha = depth == 32;
        this.srcPixelFormat = alpha ? PixelFormat.RGBA : PixelFormat.RGB;

        buffer = (TextureBuffer) BufferManager.create(BufferType.Texture);
        BufferManager.setup(this, imageData.getImageBufferData(), srcPixelFormat.getSize());
    }

    public int getWidth()
    {
        return width;
    }

    public int getHeight()
    {
        return height;
    }

    public TextureBuffer getBuffer()
    {
        return buffer;
    }

    public WrapMode getWrapMode()
    {
        return wrapMode;
    }

    public FilterMode getFilterMode()
    {
        return filterMode;
    }

    public PixelFormat getDstPixelFormat()
    {
        return dstPixelFormat;
    }

    public PixelFormat getSrcPixelFormat()
    {
        return srcPixelFormat;
    }
}

This is the reworked texture loader:

public class TextureLoader
{

    private static IntBuffer maxResolutionBuffer;

    public static LoadableImageData loadImage(String resourceName)
    {
        LoadableImageData imageData = ImageDataFactory.getImageDataFor(resourceName);
        ByteBuffer data = null;
        try {
            data = imageData.loadImage(new BufferedInputStream(new FileInputStream(new File("res", resourceName))),
                                       false, null);
        } catch (FileNotFoundException ex) {
            FaultManager.process("Can't find image!", ex, true);
        } catch (IOException ex) {
            FaultManager.process("Can't load image!", ex, true);
        }
        return imageData;
    }

    public static Texture getTexture(LoadableImageData imageData, WrapMode wrap, FilterMode filter) throws IOException
    {
        if (!checkTextureResolution(imageData.getTexWidth(), imageData.getTexHeight())) {
            throw new IOException("Attempt to allocate a texture too big for the current hardware");
        }
        return new Texture(imageData, wrap, filter);
    }

    public static boolean checkTextureResolution(int texWidth, int texHeight)
    {
        if (maxResolutionBuffer == null) {
            maxResolutionBuffer = BufferUtils.createIntBuffer(16);
            GLProxy.getProperty(GL11.GL_MAX_TEXTURE_SIZE, GLProxy.GLParamType.Integer, maxResolutionBuffer);
        }
        int max = maxResolutionBuffer.get(0);
        if ((texWidth > max) || (texHeight > max)) {
            return false;
        }
        return true;
    }
}

In my game code I'm just using:

        try {
            LoadableImageData imageData = TextureLoader.loadImage("test.png");
            texture = TextureLoader.getTexture(imageData, WrapMode.Clamp, FilterMode.Nearest);
        } catch (IOException ex) {
            FaultManager.process("Can't load texture test.png!", ex, true);
        }

All the other classes implement their own stuff, and I'm pretty sure you can see the relevant calls in the GLTrace output above.


Answer:

OK, this now works with both immediate-mode rendering and buffer objects. Immediate-mode rendering was fixed by enabling GL_TEXTURE_2D (my own fault) and disabling GL_VERTEX_ARRAY and GL_INDEX_ARRAY. The VBO path started working after that as well. I think it was a state bug on my side, because the textures and arrays were enabled by default before my refactoring. I'm glad to say that Stack Overflow still has some fresh minds! Thanks to @Jean-SimonBrochu!

THE MAIN PITFALL OF OpenGL: it is built around state programming. This means the OpenGL context (the one you create with Display) will make you cry every time you miss one call or pass wrong data. First, if the mistake is not critical, things can appear to work without even setting an error state. Second, your JVM can be brought down by JNI because of a crash in the native code (C++/Asm).

Here is my short list of advice on how to handle this:

  1. Control everything with OOP. (Implement classes that manage the processes corresponding to the GL calls you make.)
  2. Track the GL state. (Implement a data/option container class that holds the abstract or current OpenGL state.)
  3. Use glGetError every time you are unsure what is going on. Call it right after the suspicious OpenGL call (see the sketch after this list).
  4. Always build an understandable system. ALWAYS! Even if the project is only for yourself; otherwise you waste the time of people who try to help you understand your work.
  5. Implement every part of the work you can reach yourself. Doing so shows you how the whole system works and helps you adapt your mental model of the system to it.
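
For point 3, a tiny helper along these lines can be dropped in after any suspicious call (a sketch; GLU.gluErrorString just turns the error code into a readable message):

// Print any pending OpenGL errors, labelled with the call site that was checked.
public static void checkGLError(String where) {
    int error;
    while ((error = GL11.glGetError()) != GL11.GL_NO_ERROR) {
        System.err.println("OpenGL error after " + where + ": " + GLU.gluErrorString(error));
    }
}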

TEXTURES NOT WORKING? CHECK THIS!

  1. Check that you enable GL_TEXTURE_2D (or GL_TEXTURE in really old versions). You MUST be sure you aren't disabling it at any point later!
  2. Check all the GL state options corresponding to the rendering mode you chose. If you are using VBOs, you should also enable GL_VERTEX_ARRAY and whatever other client states you use.
  3. Check that your texture coordinates (texCoords) are set properly and that the texture coordinate array state is enabled.
  4. Check that you either generate mipmaps or do not use them at all. By default, the OpenGL state is set to use mipmaps; you can change this by choosing the right texture filter mode (GL_NEAREST/GL_LINEAR versus GL_NEAREST_MIPMAP_NEAREST/GL_LINEAR_MIPMAP_LINEAR).
  5. If your image uses an alpha channel, check that you enable GL_BLEND and set the blend function to alpha blending via GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA.
  6. Check that the normals on your mesh are correct (if you use them).
  7. Finally, check your incoming data (the image). That is harder, but if you follow point 5 of the advice above it should be fairly easy.

Thanks. I'm trying to be useful, and I ended up with more upvotes for the answer than downvotes for the question :)

This was my problem in the GLState class:

-   public boolean depthTest = true;
+   public boolean depthTest = false;
-   public boolean vertexArray = true, indexArray = true, colorArray = false, textureArray = false, normalArray = false;
+   public boolean vertexArray, indexArray, colorArray, textureArray, normalArray;

static {
+       DEFAULT_3D.depthTest = true;
+       DEFAULT_3D.vertexArray = true;
+       DEFAULT_3D.indexArray = true;
+
+       DEBUG_3D.depthTest = true;
+       DEBUG_3D.vertexArray = true;
+       DEBUG_3D.indexArray = true;
        DEBUG_3D.faceCullingMode = FaceCullingMode.Off;
}

P.S. The explanation above is for LWJGL 2.9.1.