Hot questions for using the Lightweight Java Game Library (LWJGL) in Java 8

Question:

Note to future visitors: If you use the ftransform() method, you MUST bind the vertex data to attribute 0.
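In terms of the loader code shown further down, a minimal sketch of that fix:

storeDataInAttributeList(0 /*attribute*/, 3, positions); // positions must sit at generic attribute 0 for ftransform()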

I am attempting to create a 3D terrain, which renders successfully, like this:

However, when I switch to Java 8, the game instead looks like this:

Long story short, the texturing is completely messed up, and I have absolutely no idea why. The terrain colour appears to be the same colour as the top-left pixel of the texture.

I compile my shaders like this:

private static int loadShader(String file, int type){
    StringBuilder shaderSource = new StringBuilder();
    try{
        BufferedReader reader = new BufferedReader(new FileReader(file));
        String line;
        while((line = reader.readLine())!=null){
            shaderSource.append(line).append("\n");
        }
        reader.close();
    }catch(IOException e){
        e.printStackTrace();
        System.exit(-1);
    }
    int shaderID = GL20.glCreateShader(type);
    GL20.glShaderSource(shaderID, shaderSource);
    GL20.glCompileShader(shaderID);
    if(GL20.glGetShaderi(shaderID, GL20.GL_COMPILE_STATUS) == GL11.GL_FALSE){
        System.out.println(GL20.glGetShaderInfoLog(shaderID, 500));
        System.err.println("Could not compile shader!");
        System.exit(-1);
    }
    return shaderID;
}

The shaders always compile successfully under both Java versions.

After attaching the shader, I then bind the shader attributes:

@Override
protected void bindAttributes() {
    int loc = 0;
    super.bindAttribute(GL20.glGetAttribLocation(loc, "position"), "position");
    super.bindAttribute(GL20.glGetAttribLocation(loc, "textureCoords"), "textureCoords");
    super.bindAttribute(GL20.glGetAttribLocation(loc, "normal"), "normal");
}

protected void bindAttribute(int attribute, String variableName){
    GL20.glBindAttribLocation(programID, attribute, variableName);
}

For some reason the "attribute" argument always ended up being the same number, but changing it does not appear to affect anything anywhere.

I then simply forward these coordinates on to the fragment shader. The vertex shader:

#version 130

in vec3 position;
in vec2 textureCoords;
in vec3 normal;

out vec2 pass_textureCoords;
out vec3 surfaceNormal;
out vec3 toLightVector;

uniform mat4 transformationMatrix;
uniform vec3 lightPosition;

void main(void){
    gl_Position = ftransform(); //I prefer the fixed-function pipeline. I may change some time. However, I don't believe it to be the cause of my issue.
    pass_textureCoords = textureCoords;
    surfaceNormal = (transformationMatrix * vec4(normal, 0.0)).xyz;
    toLightVector = (vec3(500, 50000, 500)) - (transformationMatrix * vec4(position, 1.0)).xyz;
}

And the fragment shader:

#version 130

in vec2 pass_textureCoords;
in vec3 surfaceNormal;
in vec3 toLightVector;

out vec4 out_Color;

uniform sampler2D textureSampler;

uniform vec3 lightColour;

void main(void){
    vec3 unitNormal = normalize(surfaceNormal);
    vec3 unitLightVector = normalize(toLightVector);

    float nDot1 = dot(unitNormal, unitLightVector);
    float brightness = max(nDot1, 0.2);
    brightness = brightness + 0.5;
    vec3 diffuse = brightness * vec3(1, 1, 1);
    vec4 text = texture(textureSampler, pass_textureCoords);
    vec4 textureColor = vec4(diffuse, 1.0) * text;
    if(textureColor.a<0.01){
        discard;
    }
    out_Color = vec4(textureColor.r, textureColor.g, textureColor.b, textureColor.a);

}

I am clueless as to what the issue is. I do not have much experience with the internals of GLSL and shaders yet, and I do not even know when the problem first appeared, because I only noticed it after I started sending the game to others (I run Java 7).

I have tested the game using Jon Skeet's method of changing Java versions and ran the exact same JAR file side by side, once with Java 8 and once with Java 7. Java 8 is definitely the deciding factor causing the issue.

Update: This code -

System.out.println("Err1: " + GL11.glGetError());
    vertexShaderID = loadShader(vertexFile,GL20.GL_VERTEX_SHADER);
    fragmentShaderID = loadShader(fragmentFile,GL20.GL_FRAGMENT_SHADER);
    programID = GL20.glCreateProgram();
    System.out.println("Err2: " + GL11.glGetError());
    GL20.glAttachShader(programID, vertexShaderID);
    GL20.glAttachShader(programID, fragmentShaderID);
    System.out.println("Err3: " + GL11.glGetError());
    bindAttributes();
    System.out.println("Err4: " + GL11.glGetError());
    GL20.glLinkProgram(programID);
    GL20.glValidateProgram(programID);
    System.out.println("Err5: " + GL11.glGetError());
    start();
    System.out.println("Err6: " + GL11.glGetError());
    System.out.println("Comiled shader!");

Prints out the following result:

Compiling shader!
Err1: 0
Error: 
Error: 
Err2: 0
Err3: 0
Err4: 1282 // 1282 is GL_INVALID_OPERATION
Err5: 0
Err6: 0
Compiled shader!

UPDATE: In light of the first answer I received, I have modified my code to this:

@Override
protected void bindAttributes() {
    super.bindAttribute(1, "position");
    super.bindAttribute(2, "textureCoords");
    super.bindAttribute(3, "normal");
}

And now, this code does not print out any errors:

public ShaderProgram(String vertexFile,String fragmentFile){
    System.out.println("Comiling shader!");
    System.out.println("Err1: " + GL11.glGetError());
    vertexShaderID = loadShader(vertexFile,GL20.GL_VERTEX_SHADER);
    fragmentShaderID = loadShader(fragmentFile,GL20.GL_FRAGMENT_SHADER);
    programID = GL20.glCreateProgram();
    System.out.println("Err2: " + GL11.glGetError());
    GL20.glAttachShader(programID, vertexShaderID);
    GL20.glAttachShader(programID, fragmentShaderID);
    System.out.println("Err3: " + GL11.glGetError());
    bindAttributes();
    System.out.println("Err4: " + GL11.glGetError());
    GL20.glLinkProgram(programID);
    GL20.glValidateProgram(programID);
    System.out.println("Err5: " + GL11.glGetError());
    start();
    System.out.println("Err6: " + GL11.glGetError());
    System.out.println("Comiled shader!");
}

This prints:

Compiling shader!
Err1: 0
Error: 
Error: 
Err2: 0
Err3: 0
Err4: 0
Err5: 0
Err6: 0
Compiled shader!
Made StaticShader!
Made shaders!

Also, I am definitely binding the VAO data to the correct attributes:

public RawModel loadToVAO(float[] positions, float[] textureCoords, float[] normals, int[] indices, ArrayList<Vector3f> vertices){
    int vaoID = createVAO();
    bindIndicesBuffer(indices);
    storeDataInAttributeList(1 /*attribute*/, 3, positions);
    storeDataInAttributeList(2 /*attribute*/, 2, textureCoords);
    storeDataInAttributeList(3 /*attribute*/, 3, normals);
    unbindVAO();
    return new RawModel(vaoID, indices.length, vertices);
}

However, it now simply crashes whenever the game tries to render anything. Here's the crash report.

Here is my rendering code:

GL30.glBindVertexArray(downgrade.getVaoID());
GL20.glEnableVertexAttribArray(0);
GL20.glEnableVertexAttribArray(1);
GL20.glEnableVertexAttribArray(2);
GL20.glEnableVertexAttribArray(3);
glBindTexture(GL_TEXTURE_2D, textureID);
GL13.glActiveTexture(GL13.GL_TEXTURE0);
glTranslatef((float) x, 0f, (float) z);
glScalef(1, 4, 1);
glDrawElements(GL_TRIANGLES, model.getVertexCount(), GL11.GL_UNSIGNED_INT, 0);

For those wondering, attribute "0" is missing because attribute 0 is used by the fixed-function pipeline, and binding it throws a GL_INVALID_VALUE.


Answer:

I doubt that the bindAttributes() function would work reliably in any Java version. Maybe you were just fortunate that the attribute locations ended up being what you expected them to be.

Looking at this code:

protected void bindAttributes() {
    int loc = 0;
    super.bindAttribute(GL20.glGetAttribLocation(loc, "position"), "position");
    super.bindAttribute(GL20.glGetAttribLocation(loc, "textureCoords"), "textureCoords");
    super.bindAttribute(GL20.glGetAttribLocation(loc, "normal"), "normal");
}

protected void bindAttribute(int attribute, String variableName){
    GL20.glBindAttribLocation(programID, attribute, variableName);
}

You're calling glGetAttribLocation() with 0 as the first argument. The first argument needs to be the id of a shader program. So these calls will result in an error.

Taking a step back, I think you misunderstood what these calls do:

  1. glBindAttribLocation() can be used to set the attribute locations before the shader program is linked.
  2. glGetAttribLocation() can be used to get the (automatically assigned) attribute locations after the shader program was linked.

You normally use one of these two, not both. If you want to specify the locations, use the first approach. If you want to let the shader compiler/linker assign locations, and get the resulting locations, you use the second approach.

Since you call bindAttributes() before glLinkProgram(), it looks like you're going for approach 1, so you should only be calling glBindAttribLocation(). If you wanted to assign location values sequentially, for example, you could use something like:

int loc = 0;
GL20.glBindAttribLocation(programID, loc++, "position");
GL20.glBindAttribLocation(programID, loc++, "textureCoords");
GL20.glBindAttribLocation(programID, loc++, "normal");

The actual locations you specify have to match what you're using elsewhere in your code, particularly as the first arguments to glEnableVertexAttribArray() and glVertexAttribPointer().
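For contrast, a minimal sketch of approach 2, where you query the locations instead of assigning them (the *Loc variable names here are mine):

GL20.glLinkProgram(programID);
// Query the locations the linker assigned; only valid after linking.
int positionLoc = GL20.glGetAttribLocation(programID, "position");
int texCoordsLoc = GL20.glGetAttribLocation(programID, "textureCoords");
int normalLoc = GL20.glGetAttribLocation(programID, "normal");
// Use these values in glEnableVertexAttribArray()/glVertexAttribPointer().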

Question:

I am trying to use a CompletableFuture<T> to respond to an LWJGL OpenGL context being created. This is done by calling the open method on LWJGLGameWindow. Here is the code in question:

  @Override
  public CompletableFuture<?> open() {
    CompletableFuture<Void> future = new CompletableFuture<>();

    scheduledExecutorService.schedule(() -> {
      future.completeExceptionally(new TimeoutException("Could not establish contact with LWJGL"));
    }, 2000, TimeUnit.MILLISECONDS);

    scheduledExecutorService.execute(() -> {
      try {
        display.setDisplayMode(new DisplayMode(defaultWidth, defaultHeight));
        display.create();
        future.complete(null);
      } catch (LWJGLException e) {
        future.completeExceptionally(e);
      }
    });
    return future;
  }

The idea is to defer the creation of a display onto a scheduled executor service. This is set up as a single-threaded scheduled executor service, because OpenGL contexts are thread-bound. If it takes too long to make contact with LWJGL, the returned future completes exceptionally instead.

The problem is that in unit tests this works absolutely swimmingly. However, when I debug the program, any call to any of the display methods results in a real exception being thrown by LWJGL (because my native library for LWJGL is not linked; it is still thrown as an LWJGLException, though). For some reason this exception is not picked up by the try-catch in the code above; instead it is swallowed, and the future never completes exceptionally.

So somewhere along the line, my exception is being swallowed in this code.

NB: display is simply an interface around LWJGL's Display; no fancy magic going on there. scheduledExecutorService is a single-threaded scheduled executor.

I also appreciate that submit() and schedule() on scheduledExecutorService both return a Future<T>, but that lacks the composition I want from CompletableFuture<T>. I'd like to keep using CompletableFuture if at all possible.
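For reference, a minimal sketch of the kind of composition I mean (the runAsync wiring and callbacks here are illustrative, not my actual code):

CompletableFuture<Void> opened = CompletableFuture.runAsync(() -> {
    // create the display on the (single) GL thread
}, scheduledExecutorService);

opened.thenRun(() -> System.out.println("Display ready"))
      .exceptionally(t -> { t.printStackTrace(); return null; });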


Answer:

The code actually works exactly as it should. The real problem is that the error I was expecting, java.lang.UnsatisfiedLinkError, is not an Exception but an Error. Amending the code to catch Throwable solves the issue.
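A minimal sketch of the amended task, assuming the same open() method as in the question:

scheduledExecutorService.execute(() -> {
  try {
    display.setDisplayMode(new DisplayMode(defaultWidth, defaultHeight));
    display.create();
    future.complete(null);
  } catch (Throwable t) { // Throwable also covers Errors such as UnsatisfiedLinkError
    future.completeExceptionally(t);
  }
});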

Question:

I've been trying to render an 8x8 texture. I've used code from two tutorials, but the texture doesn't render correctly. For now I have this initialization code:

int shaderProgram,fragmentShader,vertexShader,texture,elementBuffer,vertexBuffer, vertexArray;

public Texture2D(String texturePath_, String vertexShader_,String fragmentShader_)
{
    vertexArray=GL30.glGenVertexArrays();
    GL30.glBindVertexArray(vertexArray);
    String[] vertexshader=Utilities.loadShaderFile(vertexShader_,getClass());
    String[] fragmentshader=Utilities.loadShaderFile(fragmentShader_,getClass());
    if(vertexshader==null)
        throw new NullPointerException("The vertex shader is null");
    if(fragmentshader==null)
        throw new NullPointerException("The fragment shader is null");
    vertexShader=GL20.glCreateShader(GL20.GL_VERTEX_SHADER);
    GL20.glShaderSource(vertexShader,vertexshader);
    GL20.glCompileShader(vertexShader);
    Utilities.showShaderCompileLog(vertexShader);

    fragmentShader=GL20.glCreateShader(GL20.GL_FRAGMENT_SHADER);
    GL20.glShaderSource(fragmentShader,fragmentshader);
    GL20.glCompileShader(fragmentShader);
    Utilities.showShaderCompileLog(fragmentShader);

    shaderProgram= GL20.glCreateProgram();
    GL20.glAttachShader(shaderProgram,fragmentShader);
    GL20.glAttachShader(shaderProgram,vertexShader);

    GL30.glBindFragDataLocation(shaderProgram,0,"fragcolor");
    GL20.glLinkProgram(shaderProgram);
    GL20.glUseProgram(shaderProgram);
    texture= GL11.glGenTextures();
    GL11.glBindTexture(GL11.GL_TEXTURE_2D,texture);
    GL11.glTexParameteri(GL11.GL_TEXTURE_2D,GL11.GL_TEXTURE_WRAP_S, GL13.GL_CLAMP_TO_BORDER);
    GL11.glTexParameteri(GL11.GL_TEXTURE_2D,GL11.GL_TEXTURE_WRAP_T,GL13.GL_CLAMP_TO_BORDER);
    GL11.glTexParameteri(GL11.GL_TEXTURE_2D,GL11.GL_TEXTURE_MIN_FILTER,GL11.GL_LINEAR);
    GL11.glTexParameteri(GL11.GL_TEXTURE_2D,GL11.GL_TEXTURE_MAG_FILTER,GL11.GL_LINEAR);

    ByteBuffer image;
    FloatBuffer verteces;
    IntBuffer imagewidth,imageheight, positions,imagechannels;
    try(MemoryStack memoryStack=MemoryStack.stackPush())
    {
        imageheight=memoryStack.mallocInt(1);
        imagewidth=memoryStack.mallocInt(1);
        positions=memoryStack.mallocInt(6);
        imagechannels=memoryStack.mallocInt(1);
        image= STBImage.stbi_load(texturePath_,imagewidth,imageheight,imagechannels,0);
        if(image==null) throw new NullPointerException("Failed to load image");
        verteces=memoryStack.mallocFloat(28);
    }
    positions.put(0).put(1).put(2).put(2).put(3).put(0).flip();
    int width=imagewidth.get();
    int height=imageheight.get();
    GL11.glTexImage2D(GL11.GL_TEXTURE_2D,0,GL11.GL_RGBA,width,height,0,GL11.GL_RGBA,GL11.GL_UNSIGNED_BYTE,image);


    elementBuffer=GL15.glGenBuffers();
    GL15.glBindBuffer(GL15.GL_ELEMENT_ARRAY_BUFFER,elementBuffer);
    GL15.glBufferData(GL15.GL_ELEMENT_ARRAY_BUFFER,positions,GL15.GL_STATIC_DRAW);
    float x1=0f, x2=1f;
    float y1=1f,y2=-1f;

    verteces.put(x1).put(y1).put(1).put(1).put(1).put(0).put(0);
    verteces.put(x1).put(y2).put(1).put(1).put(1).put(1).put(0);
    verteces.put(x2).put(y2).put(1).put(1).put(1).put(1).put(1);
    verteces.put(x2).put(y1).put(1).put(1).put(1).put(0).put(1).flip();

    vertexBuffer=GL15.glGenBuffers();
    GL15.glBindBuffer(GL15.GL_ARRAY_BUFFER,vertexBuffer);
    GL15.glBufferData(GL15.GL_ARRAY_BUFFER,verteces,GL15.GL_STATIC_DRAW);
    int uniform=GL20.glGetUniformLocation(shaderProgram,"texture_image");
    GL20.glUniform1i(uniform,0);
    int position=GL20.glGetAttribLocation(shaderProgram,"position");
    GL20.glEnableVertexAttribArray(position);
    GL20.glVertexAttribPointer(position,2,GL11.GL_FLOAT,false,0,0);
    int color=GL20.glGetAttribLocation(shaderProgram,"color");
    GL20.glEnableVertexAttribArray(color);
    GL20.glVertexAttribPointer(color,3,GL11.GL_FLOAT,false,7*Float.BYTES, 2 * Float.BYTES);
    int textureST=GL20.glGetAttribLocation(shaderProgram,"textureCoord");
    GL20.glEnableVertexAttribArray(textureST);
    GL20.glVertexAttribPointer(textureST,3,GL11.GL_FLOAT,false,7*Float.BYTES,  5 * Float.BYTES);
    Utilities.showErrors(1);
}

The result is:

But I'd like the texture to occupy the whole area. The shaders compile fine, and there are no GL errors. If I change the values to the ones from the tutorial:

verteces.put(-1f).put(1f).put(1).put(1).put(1).put(0).put(0);
verteces.put(1f).put(1f).put(1).put(1).put(1).put(1).put(0);
verteces.put(1f).put(-1f).put(1).put(1).put(1).put(1).put(1);
verteces.put(-1f).put(-1f).put(1).put(1).put(1).put(0).put(1).flip();

I get:

The tutorials: https://open.gl/textures and https://github.com/SilverTiger/lwjgl3-tutorial/wiki/Textures

I'm using a 3.0 profile with #version 300 es shaders. The texture's format is PNG.


Answer:

The vertex attribute layout:

GL20.glVertexAttribPointer(position,2,GL11.GL_FLOAT,false,0,0);
GL20.glVertexAttribPointer(color,3,GL11.GL_FLOAT,false,7*Float.BYTES, 2 * Float.BYTES);
GL20.glVertexAttribPointer(textureST,3,GL11.GL_FLOAT,false,7*Float.BYTES,  5 * Float.BYTES);

doesn't look correct. There are multiple problems with it:

  • The texture coordinates try to read 3 floats from the array. In combination with the stride, your last vertex will read outside the VBO. Most probably the texture coordinates should only read 2 floats.
  • The total number of floats used (2+3+3=8) does not fit the data, which provides only 7 floats per vertex. This is solved once the texture coordinates read only two floats.
  • The stride of the positions looks wrong. 0 means that the positions are tightly packed, so they use the first 8 floats in the VBO. If you look at those values: {-1, 1, 1, 1, 1, 0, 0, 1}, that is exactly the geometry you see. It was just luck that it worked in the first place. Solution: change the position layout to GL20.glVertexAttribPointer(position,2,GL11.GL_FLOAT,false,7*Float.BYTES,0); (see the combined sketch below).
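Putting all three fixes together, a minimal sketch of the corrected layout (assuming the 7-floats-per-vertex data from the question: 2 position, 3 color, 2 texture-coordinate floats):

int stride = 7 * Float.BYTES; // one vertex = 2 pos + 3 color + 2 texcoord floats
GL20.glVertexAttribPointer(position, 2, GL11.GL_FLOAT, false, stride, 0);
GL20.glVertexAttribPointer(color, 3, GL11.GL_FLOAT, false, stride, 2 * Float.BYTES);
GL20.glVertexAttribPointer(textureST, 2, GL11.GL_FLOAT, false, stride, 5 * Float.BYTES);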