Hot questions on using the Lightweight Java Game Library (LWJGL) with OpenGL 3

Question:

I am currently trying to create an OpenGL 3.3 context with LWJGL 3 on my mid-2014 MacBook Pro. My source code to initialize the window looks like this:

if (!glfwInit()) {
    Logger.addError("GLFW", "init failed");
}

glfwDefaultWindowHints();

glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);           
glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 3);
glfwWindowHint(GLFW_OPENGL_CORE_PROFILE, GL_TRUE);
glfwWindowHint(GLFW_OPENGL_FORWARD_COMPAT, GL_TRUE);

// glfwWindowHint(GLFW_SAMPLES, 2);

ID = glfwCreateWindow(width, height, title, 0, 0);
if (ID == 0) {
    Logger.addError("GLFW", "window creation failed");
}

Sadly, GLFW fails to create the window for any version higher than 2.1, which is the version glGetString(GL_VERSION) returns when I leave out the window hints. I read through all of the "duplicate" questions, but as you can see I already request the core profile and forward compatibility. Moreover, I have installed Xcode and the newest version of the operating system. Do you have any other suggestions, or did I misunderstand something horribly? Thanks in advance.

Note that there is no "GLFW_OPENGL_PROFILE" flag in LWJGL 3 as far as I know, so I can't copy the code from the official GLFW getting-started page 1:1. Setting the "GLFW_OPENGL_CORE_PROFILE" flag to true worked on Windows, though, so that can't be the error causing trouble.


Answer:

The way you are setting the core profile window hint is incorrect. Instead use:

glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);

From the GLFW documentation:

GLFW_OPENGL_PROFILE specifies which OpenGL profile to create the context for. Possible values are one of GLFW_OPENGL_CORE_PROFILE or GLFW_OPENGL_COMPAT_PROFILE, or GLFW_OPENGL_ANY_PROFILE to not request a specific profile. If requesting an OpenGL version below 3.2, GLFW_OPENGL_ANY_PROFILE must be used. If OpenGL ES is requested, this hint is ignored.
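
For reference, this is the question's hint block with the corrected profile hint (a minimal sketch reusing the question's width, height, title, and ID variables, not a complete program):

glfwDefaultWindowHints();

glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 3);
// The profile hint takes a profile constant as its value, not a boolean
glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);
// Forward compatibility is required on macOS for 3.2+ core contexts
glfwWindowHint(GLFW_OPENGL_FORWARD_COMPAT, GL_TRUE);

ID = glfwCreateWindow(width, height, title, 0, 0);
if (ID == 0) {
    Logger.addError("GLFW", "window creation failed");
}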

Question:

So I started to learn LWJGL recently and quickly realized that I need to improve my OpenGL knowledge to continue. I followed the tutorial here and tried to implement the same code using Java and LWJGL (the code in the tutorial is in C++). I successfully managed to replicate the OpenGL calls from the tutorial using the LWJGL API, but when I start my program I see only a black screen and nothing else: no triangle, nothing. No errors in the console either.

My Window.java class where I perform the OpenGL rendering (never mind the Spring annotations; I'm just using an IoC container to manage objects, and the run() method is called after the Spring context is created):

import com.gitlab.cvazer.dnd.map.desktop.ashlesy.Orchestrator;
import com.gitlab.cvazer.dnd.map.desktop.render.Camera;
import com.gitlab.cvazer.dnd.map.desktop.util.Shaders;
import lombok.Getter;
import lombok.extern.slf4j.Slf4j;
import org.lwjgl.Version;
import org.lwjgl.glfw.GLFWErrorCallback;
import org.lwjgl.opengl.GL;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Component;

import java.io.IOException;
import java.util.Objects;

import static org.lwjgl.glfw.Callbacks.glfwFreeCallbacks;
import static org.lwjgl.glfw.GLFW.*;
import static org.lwjgl.opengl.GL30.*;
import static org.lwjgl.system.MemoryUtil.NULL;

@Slf4j
@Component
public class Window {
    private @Autowired Shaders shaders;
    private @Getter long window;

    //START HERE!
    public void run() {
        new Thread(() -> {
            log.info("Hello LWJGL " + Version.getVersion() + "!");
            init();
            try {
                loop();
            } catch (IOException e) {
                e.printStackTrace();
            }
            glfwFreeCallbacks(window);
            glfwDestroyWindow(window);
            glfwTerminate();
            Objects.requireNonNull(glfwSetErrorCallback(null)).free();
        }).start();
    }

    private void init() {
        GLFWErrorCallback.createPrint(System.err).set();
        if ( !glfwInit() ) throw new IllegalStateException("Unable to initialize GLFW");
        glfwDefaultWindowHints();
        glfwWindowHint(GLFW_VISIBLE, GLFW_FALSE);
        glfwWindowHint(GLFW_RESIZABLE, GLFW_TRUE);
        glfwWindowHint(GLFW_SAMPLES, 8);
        glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
        glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 3);
        glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);
        window = glfwCreateWindow(800, 600, "Hello World!", NULL, NULL);
        if ( window == NULL ) throw new RuntimeException("Failed to create the GLFW window");
        callbacks();
        glfwMakeContextCurrent(window);
        glfwSwapInterval(1);
        glfwShowWindow(window);
    }

    private void loop() throws IOException {
        GL.createCapabilities();

        int vertexArray = glGenVertexArrays();
        glBindVertexArray(vertexArray);

        int vertexBuffer = glGenBuffers();
        glBindBuffer(GL_ARRAY_BUFFER, vertexBuffer);

        float[] data = {-1.0f, -1.0f, 0.0f,
                1.0f, -1.0f, 0.0f,
                0.0f,  1.0f, 0.0f};

        glBufferData(GL_ARRAY_BUFFER, data, GL_STATIC_DRAW);

        int program = shaders.loadShaders("vertex.glsl", "fragment.glsl");

        glClearColor(0.0f, 0.0f, 0.0f, 0.0f);

        long lastTime = System.currentTimeMillis();
        while ( !glfwWindowShouldClose(window) ) {
            long delta = System.currentTimeMillis() - lastTime;
            lastTime = System.currentTimeMillis();

            glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
            glUseProgram(program);

            glEnableVertexAttribArray(0);
            glBindBuffer(GL_ARRAY_BUFFER, vertexBuffer);
            glVertexAttribPointer(0,3,GL_FLOAT, false, 0, vertexBuffer);
            glDrawArrays(GL_TRIANGLES, 0, 3);
            glDisableVertexAttribArray(0);

            glfwSwapBuffers(window); // swap the color buffers
            glfwPollEvents();
        }
    }
}

Shaders.java class:

import lombok.extern.slf4j.Slf4j;
import org.springframework.stereotype.Service;

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.stream.Collectors;

import static org.lwjgl.opengl.GL30.*;

@Slf4j
@Service
public class Shaders {
    public int loadShaders(String vertexFilePath, String fragmentFilePath) throws IOException {
        int vertexShader = glCreateShader(GL_VERTEX_SHADER);
        int fragmentShader = glCreateShader(GL_FRAGMENT_SHADER);

        String vertexCode = Files.lines(Paths.get(vertexFilePath))
                .collect(Collectors.joining("\n"));

        String fragmentCode = Files.lines(Paths.get(fragmentFilePath))
                .collect(Collectors.joining("\n"));

        compileShader(vertexShader, vertexCode);
        compileShader(fragmentShader, fragmentCode);

        int program = glCreateProgram();
        glAttachShader(program, vertexShader);
        glAttachShader(program, fragmentShader);
        glLinkProgram(program);

        glDetachShader(program, vertexShader);
        glDetachShader(program, fragmentShader);

        glDeleteShader(vertexShader);
        glDeleteShader(fragmentShader);

        return program;
    }

    private void compileShader(int shader, String code){
        glShaderSource(shader, code);
        glCompileShader(shader);
        String slog = glGetShaderInfoLog(shader);
        if (slog.contentEquals("")) return;
        log.info(slog);
    }
}

vertex.glsl file:

#version 330 core
layout(location = 0) in vec3 vertexPosition_modelspace;
void main(){
  gl_Position.xyz = vertexPosition_modelspace;
  gl_Position.w = 1.0;
}

fragment.glsl file:

#version 330 core
out vec3 color;
void main(){
  color = vec3(1,0,0);
}

I followed part 1 (coding the OpenGL calls) and part 2 (coding the GLSL shaders and loading them) of this tutorial, but adding the shaders didn't fix my problem.

I don't think that googling further will give me answers, since almost all online tutorials on LWJGL use OpenGL 1 for rendering.

How can I make this work?


Answer:

The issue is the line

glVertexAttribPointer(0,3,GL_FLOAT, false, 0, vertexBuffer);

If a named buffer object is bound then the last parameter of glVertexAttribPointer is treated as a byte offset into the buffer object's data store.

With glVertexAttribPointer you do not pass the vertex buffer as a parameter at all. The function associates the vertex attribute with whatever buffer object is currently bound to the GL_ARRAY_BUFFER target.

It has to be:

glBindBuffer(GL_ARRAY_BUFFER, vertexBuffer);
glVertexAttribPointer(0, 3, GL_FLOAT, false, 0, 0);

See also Java Code Examples for org.lwjgl.opengl.GL20.glVertexAttribPointer()
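
Since the attribute-to-buffer association is recorded in the VAO that is bound at the time of the glVertexAttribPointer call, the attribute setup can also be done once at initialization; a minimal sketch using the names from the question (not the tutorial's exact code):

// One-time setup: the bound VAO records that attribute 0 reads from vertexBuffer
glBindVertexArray(vertexArray);
glBindBuffer(GL_ARRAY_BUFFER, vertexBuffer);
glVertexAttribPointer(0, 3, GL_FLOAT, false, 0, 0); // last argument is a byte offset, not a buffer name
glEnableVertexAttribArray(0);

// Per frame: rebinding the VAO restores the association before drawing
glBindVertexArray(vertexArray);
glDrawArrays(GL_TRIANGLES, 0, 3);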

Question:

So I'm currently trying to replace my old texture atlas stitcher with a 2D texture array to make life simpler with anisotropic filtering and greedy meshing later on.

I'm loading the PNG files with stb, and I know that the buffers are filled properly, because if I export every single layer of the soon-to-be atlas right before uploading it, it is the correct PNG file.

My setup works like this:

I load every single texture in my jar file with stb and create an object from it that stores the width, height, layer, and pixel data.

When every texture is loaded, I look for the biggest texture and scale every smaller texture up to that size, because I know that 2D texture arrays only work if every single layer has the same size.

Then I initialize the 2D texture array like this:

public void init(int layerCount, boolean supportsAlpha, int textureSize) {
    this.textureId = glGenTextures();
    this.maxLayer = layerCount;

    int internalFormat = supportsAlpha ? GL_RGBA8 : GL_RGB8;
    this.format = supportsAlpha ? GL_RGBA : GL_RGB;

    glBindTexture(GL_TEXTURE_2D_ARRAY, this.textureId);
    glTexImage3D(GL_TEXTURE_2D_ARRAY, 0, internalFormat, textureSize, textureSize, layerCount, 0, this.format, GL_UNSIGNED_BYTE, 0);
}

After that I go through my map of textureLayer objects and upload every single one of them like this:

public void upload(ITextureLayer textureLayer) {
    if (textureLayer.getLayer() >= this.maxLayer) {
        LOGGER.error("Tried uploading a texture with a too big layer.");
        return;
    } else if (this.textureId == 0) {
        LOGGER.error("Tried uploading texture layer to uninitialized texture array.");
        return;
    }

    glBindTexture(GL_TEXTURE_2D_ARRAY, this.textureId);

    // Tell openGL how to unpack the RGBA bytes
    glPixelStorei(GL_UNPACK_ALIGNMENT, 1);


    // Tell openGL to not blur the texture when it is stretched
    glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

    // Upload the texture data
    glTexSubImage3D(GL_TEXTURE_2D_ARRAY, 0, 0, 0, textureLayer.getLayer(), textureLayer.getWidth(), textureLayer.getHeight(), 0, this.format, GL_UNSIGNED_BYTE, textureLayer.getPixels());

    int errorCode = glGetError();
    if (errorCode != 0) LOGGER.error("Error while uploading texture layer {} to graphics card. {}", textureLayer.getLayer(), GLHelper.errorToString(errorCode));
}

The error code for every single one of my layers is 0, so I assume that everything went well. But when I debug the game with RenderDoc I can see that on every single layer every bit is 0 and therefore it's just a transparent texture with the correct width and height.

I can't figure out what I'm doing wrong, since OpenGL tells me everything went well. It is important to me that I only use OpenGL 3.3 and lower, since I want the game to be playable on older PCs as well, so pre-allocating memory with glTexStorage3D is not an option.


Answer:

The 8th parameter of glTexSubImage3D (depth) should be 1. Note that the size of a single layer is textureLayer.getWidth() x textureLayer.getHeight() x 1:

glTexSubImage3D(
    GL_TEXTURE_2D_ARRAY, 0, 0, 0, textureLayer.getLayer(),
    textureLayer.getWidth(), textureLayer.getHeight(), 1,    // depth is 1
    this.format, GL_UNSIGNED_BYTE, textureLayer.getPixels());

It is not an error to pass a width, height, or depth of 0 to glTexSubImage3D, but the call then has no effect on the texture object's data store.
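
Putting the allocation from the question and the corrected upload together, a minimal sketch of the flow on OpenGL 3.3 (layerIndex, layerWidth, layerHeight, and pixels are placeholder names, and an RGBA format is assumed):

// Allocate storage for the whole array once; the data pointer may be null
glBindTexture(GL_TEXTURE_2D_ARRAY, textureId);
glTexImage3D(GL_TEXTURE_2D_ARRAY, 0, GL_RGBA8, textureSize, textureSize, layerCount,
        0, GL_RGBA, GL_UNSIGNED_BYTE, (java.nio.ByteBuffer) null);

// Fill one layer at a time: zoffset selects the layer, depth stays 1
glTexSubImage3D(GL_TEXTURE_2D_ARRAY, 0,
        0, 0, layerIndex,
        layerWidth, layerHeight, 1,
        GL_RGBA, GL_UNSIGNED_BYTE, pixels);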

Question:

I have been trying to follow the code from this tutorial on OpenGL 3+ textures, but my result ends up black instead of showing the texture.

I am using stb_image to load the image the texture uses into a direct ByteBuffer, and I can guarantee that the RGB data in the buffer is, at least, not uniform, so it can't be that.

I usually do not like to dump code, but I don't see much else I can do at this point. Here's my java code and shaders:

GL is an interface pointing to all the GL## functionality in LWJGL 3 [1].

ShaderProgram wraps all the shader-specific stuff into a nice black box that generates a shader program from the attached shaders on the first call of use(GL) and subsequently reuses that program. This works just fine for rendering a coloured triangle, so I rule out any errors in there.

Util.checkError(GL, boolean) checks for any OpenGL errors that have accumulated since its last execution and throws a runtime exception if the boolean is not set (if it is set, it silently writes to the log instead).

The rendering code, update(GL, long), is run once every frame.

private static final ResourceAPI res = API.get(ResourceAPI.class);

Image lwjgl32;

ShaderProgram prog = new ShaderProgram();
int vbo, vao, ebo;
int texture;

@Override
public void init(GL gl) {

    try {
        prog.attach(res.get("shaders/texDemo.vert", ShaderSource.class));
        prog.attach(res.get("shaders/texDemo.frag", ShaderSource.class));
        lwjgl32 = res.get("textures/lwjgl32.png", Image.class);
    } catch(ResourceException e) {
        throw new RuntimeException(e);
    }

    float[] vertices = {
        // positions          // colors           // texture coords
         0.5f,  0.5f, 0.0f,   1.0f, 0.0f, 0.0f,   1.0f, 1.0f, // top right
         0.5f, -0.5f, 0.0f,   0.0f, 1.0f, 0.0f,   1.0f, 0.0f, // bottom right
        -0.5f, -0.5f, 0.0f,   0.0f, 0.0f, 1.0f,   0.0f, 0.0f, // bottom left
        -0.5f,  0.5f, 0.0f,   1.0f, 1.0f, 0.0f,   0.0f, 1.0f  // top left 
    };

    int[] indices = {
        0, 1, 3, // first triangle
        1, 2, 3  // second triangle
    };

    vao = gl.glGenVertexArrays();
    vbo = gl.glGenBuffers();
    ebo = gl.glGenBuffers();

    gl.glBindVertexArray(vao);

    gl.glBindBuffer(GL.GL_ARRAY_BUFFER, vbo);
    gl.glBufferData(GL.GL_ARRAY_BUFFER, vertices, GL.GL_STATIC_DRAW);

    gl.glBindBuffer(GL.GL_ELEMENT_ARRAY_BUFFER, ebo);
    gl.glBufferData(GL.GL_ELEMENT_ARRAY_BUFFER, indices, GL.GL_STATIC_DRAW);

    gl.glVertexAttribPointer(0, 3, GL.GL_FLOAT, false, 8 * Float.BYTES, 0);
    gl.glEnableVertexAttribArray(0);

    gl.glVertexAttribPointer(1, 3, GL.GL_FLOAT, false, 8 * Float.BYTES, 3 * Float.BYTES);
    gl.glEnableVertexAttribArray(0);

    gl.glVertexAttribPointer(2, 2, GL.GL_FLOAT, false, 8 * Float.BYTES, 6 * Float.BYTES);
    gl.glEnableVertexAttribArray(0);

    texture = gl.glGenTextures();
    gl.glBindTexture(GL.GL_TEXTURE_2D, texture);

    gl.glTexParameteri(GL.GL_TEXTURE_2D, GL.GL_TEXTURE_MIN_FILTER, GL.GL_LINEAR);
    gl.glTexParameteri(GL.GL_TEXTURE_2D, GL.GL_TEXTURE_MAG_FILTER, GL.GL_LINEAR);

    gl.glTexParameteri(GL.GL_TEXTURE_2D, GL.GL_TEXTURE_WRAP_S, GL.GL_REPEAT);
    gl.glTexParameteri(GL.GL_TEXTURE_2D, GL.GL_TEXTURE_WRAP_T, GL.GL_REPEAT);

    gl.glTexImage2D(GL.GL_TEXTURE_2D, 0, GL.GL_RGB8, lwjgl32.getWidth(), lwjgl32.getHeight(), 0, GL.GL_RGB, GL.GL_UNSIGNED_BYTE, lwjgl32.getImageData());
    gl.glGenerateMipmap(GL.GL_TEXTURE_2D);

    prog.use(gl);
    gl.glUniform1i(gl.glGetUniformLocation(prog.getId(gl), "texture"), 0);

    Util.checkError(gl, false);
}

@Override
protected void update(GL gl, long deltaFrame) {
    gl.glClearColor(0.2f, 0.3f, 0.3f, 1.0f);
    gl.glClear(GL.GL_COLOR_BUFFER_BIT);

    gl.glActiveTexture(GL.GL_TEXTURE0);
    gl.glBindTexture(GL.GL_TEXTURE_2D, texture);

    prog.use(gl);
    gl.glBindVertexArray(vao);
    gl.glDrawElements(GL.GL_TRIANGLES, 6, GL.GL_UNSIGNED_INT, 0);
}

@Override
public void clean(GL gl) {
    gl.glDeleteVertexArrays(vao);
    gl.glDeleteBuffers(vbo);
    gl.glDeleteBuffers(ebo);

    ShaderProgram.clearUse(gl);
    prog.dispose(gl);
}

Vertex shader

#version 330 core

layout (location = 0) in vec3 in_position;
layout (location = 1) in vec3 in_color;
layout (location = 2) in vec2 in_texCoord;

out vec3 color;
out vec2 texCoord;

void main() {
    gl_Position = vec4(in_position, 1.0);

    color = in_color;
    texCoord = vec2(in_texCoord.x, in_texCoord.y);
}

Fragment shader

#version 330 core

out vec4 frag_colour;

in vec3 color;
in vec2 texCoord;

uniform sampler2D texture;

void main() {
    frag_colour = texture(texture, texCoord) * vec4(color, 1.0);
}

[1] I wrapped LWJGL 3's GL## static classes into a single interface and implementation so I can have a bunch of stateful methods that do things such as identifying the context that is being rendered to, etc. I also did my best to remove non-core functionality from the interface so I don't even get tempted to use deprecated stuff.


Answer:

You only ever enable the vertex attribute with index 0, but you do so three times.

Adapt your code like this:

gl.glVertexAttribPointer(0, 3, GL.GL_FLOAT, false, 8 * Float.BYTES, 0);
gl.glEnableVertexAttribArray(0);

gl.glVertexAttribPointer(1, 3, GL.GL_FLOAT, false, 8 * Float.BYTES, 3 * Float.BYTES);
gl.glEnableVertexAttribArray(1); // <-------

gl.glVertexAttribPointer(2, 2, GL.GL_FLOAT, false, 8 * Float.BYTES, 6 * Float.BYTES);
gl.glEnableVertexAttribArray(2); // <------

Question:

I have a very simple LWJGL program:

package com.github.fabioticconi;

import org.lwjgl.glfw.GLFWErrorCallback;
import org.lwjgl.opengl.GL;
import org.lwjgl.system.MemoryUtil;

import java.nio.FloatBuffer;

import static org.lwjgl.glfw.GLFW.*;
import static org.lwjgl.opengl.GL11.*;
import static org.lwjgl.opengl.GL15.*;
import static org.lwjgl.opengl.GL20.*;
import static org.lwjgl.opengl.GL30.glBindVertexArray;
import static org.lwjgl.opengl.GL30.glGenVertexArrays;
import static org.lwjgl.system.MemoryUtil.NULL;

/**
 * Author: Fabio Ticconi
 * Date: 10/03/18
 */
public class Main
{
    public static void main(final String[] args)
    {
        // Setup an error callback. The default implementation
        // will print the error message in System.err.
        GLFWErrorCallback.createPrint(System.err).set();

        // Initialise GLFW
        if (!glfwInit())
            throw new IllegalStateException("Unable to initialize GLFW");

        glfwWindowHint(GLFW_RESIZABLE, GL_FALSE);

        // IF I ENABLE THE BELOW, I DON'T SEE THE TRIANGLE!
        // glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
        // glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 2);
        // glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);
        // glfwWindowHint(GLFW_OPENGL_FORWARD_COMPAT, GL_TRUE);

        // Open a window and create its OpenGL context
        final long window = glfwCreateWindow(1024, 768, "Test", NULL, NULL);

        if (window == NULL)
            throw new RuntimeException("Failed to create the GLFW window");

        glfwMakeContextCurrent(window);

        // This line is critical for LWJGL's interoperation with GLFW's
        // OpenGL context, or any context that is managed externally.
        // LWJGL detects the context that is current in the current thread,
        // creates the GLCapabilities instance and makes the OpenGL
        // bindings available for use.
        GL.createCapabilities();

        // Ensure we can capture the escape key being pressed below
        glfwSetInputMode(window, GLFW_STICKY_KEYS, GL_TRUE);

        final float[] vertices = new float[] { 0.0f, 0.5f, 0.0f, -0.5f, -0.5f, 0.0f, 0.5f, -0.5f, 0.0f };

        int         vaoId;
        final int   vboId;
        FloatBuffer verticesBuffer = null;
        try
        {
            verticesBuffer = MemoryUtil.memAllocFloat(vertices.length);
            verticesBuffer.put(vertices).flip();

            // Create the VAO and bind to it
            vaoId = glGenVertexArrays();
            glBindVertexArray(vaoId);

            // Create the VBO and bint to it
            vboId = glGenBuffers();
            glBindBuffer(GL_ARRAY_BUFFER, vboId);
            glBufferData(GL_ARRAY_BUFFER, verticesBuffer, GL_STATIC_DRAW);
            // Define structure of the data
            glVertexAttribPointer(0, 3, GL_FLOAT, false, 0, 0);

            // Unbind the VBO
            glBindBuffer(GL_ARRAY_BUFFER, 0);

            // Unbind the VAO
            glBindVertexArray(0);
        } finally
        {
            if (verticesBuffer != null)
            {
                MemoryUtil.memFree(verticesBuffer);
            }
        }

        do
        {
            // Swap buffers
            glfwSwapBuffers(window);
            glfwPollEvents();

            glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

            // Bind to the VAO
            glBindVertexArray(vaoId);
            glEnableVertexAttribArray(0);

            // Draw the vertices
            glDrawArrays(GL_TRIANGLES, 0, 3);

            // Restore state
            glDisableVertexAttribArray(0);
            glBindVertexArray(0);

        } // Check if the ESC key was pressed or the window was closed
        while (glfwGetKey(window, GLFW_KEY_ESCAPE) != GLFW_PRESS && !glfwWindowShouldClose(window));
    }
}

When I run it, I see a white triangle, which is what I expected to see.

If, however, I uncomment these lines:

// glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
// glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 2);
// glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);
// glfwWindowHint(GLFW_OPENGL_FORWARD_COMPAT, GL_TRUE);

I don't see the triangle, just the black window. Experimenting a bit, it looks like whenever I set a context version of 3 or more, unless I use GLFW_OPENGL_COMPAT_PROFILE, it won't display the triangle.

I'm on Linux, with nvidia proprietary drivers apparently supporting up to OpenGL 4.5:

$ glxinfo | grep -i open
OpenGL vendor string: NVIDIA Corporation
OpenGL renderer string: GeForce GTX 970M/PCIe/SSE2
OpenGL core profile version string: 4.5.0 NVIDIA 384.111
OpenGL core profile shading language version string: 4.50 NVIDIA
OpenGL core profile context flags: (none)
OpenGL core profile profile mask: core profile
OpenGL core profile extensions:
OpenGL version string: 4.5.0 NVIDIA 384.111
OpenGL shading language version string: 4.50 NVIDIA
OpenGL context flags: (none)
OpenGL profile mask: (none)
OpenGL extensions:
OpenGL ES profile version string: OpenGL ES 3.2 NVIDIA 384.111
OpenGL ES profile shading language version string: OpenGL ES GLSL ES 3.20
OpenGL ES profile extensions:

so I don't really get why LWJGL seems to fail silently with OpenGL 3 or 4.

Those lines are apparently needed to support macOS (according to all the OpenGL and LWJGL tutorials I've come across), and in general I'd like to know what's going on.

Any clues?


Answer:

In an OpenGL core profile it is mandatory to supply your own shaders; there is no fixed-function fallback that is used if you don't provide one.

I'm also not convinced that this really fails silently, since you never query glGetError(). Most probably the glDrawArrays call generates an error.
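
As a starting point, here is a minimal sketch of a core-profile shader pair plus a glGetError check, written against the static GL imports already used in the question; the shader sources and variable names are illustrative, not part of the original program:

String vertSrc =
        "#version 150 core\n" +
        "in vec3 position;\n" +
        "void main() { gl_Position = vec4(position, 1.0); }\n";
String fragSrc =
        "#version 150 core\n" +
        "out vec4 fragColor;\n" +
        "void main() { fragColor = vec4(1.0); }\n"; // solid white

int vs = glCreateShader(GL_VERTEX_SHADER);
glShaderSource(vs, vertSrc);
glCompileShader(vs);

int fs = glCreateShader(GL_FRAGMENT_SHADER);
glShaderSource(fs, fragSrc);
glCompileShader(fs);

int program = glCreateProgram();
glAttachShader(program, vs);
glAttachShader(program, fs);
glBindAttribLocation(program, 0, "position"); // matches glVertexAttribPointer(0, ...)
glLinkProgram(program);
if (glGetProgrami(program, GL_LINK_STATUS) != GL_TRUE)
    throw new RuntimeException(glGetProgramInfoLog(program));

glUseProgram(program); // before glDrawArrays in the render loop

int err = glGetError(); // after drawing; GL_NO_ERROR (0) means nothing was recorded
if (err != GL_NO_ERROR)
    System.err.println("GL error: 0x" + Integer.toHexString(err));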

Question:

I'm having trouble moving my entities in an OpenGL context: when I place an entity, its position seems correct, but as soon as the entity starts to move everything goes wrong and collisions don't work. I'm new to OpenGL, and I suspect my world matrix or model matrix is wrong.

Here's the code of the vertex shader:

#version 330 core

layout (location=0) in vec3 position;

out vec3 extColor;

uniform mat4 projectionMatrix;
uniform mat4 modelMatrix;
uniform vec3 inColor;
void main()
{
    gl_Position = projectionMatrix * modelMatrix  * vec4(position, 1.0);
    extColor = inColor;
}

Here is the class that computes most of the matrices:

public class Transformations {
    private Matrix4f projectionMatrix;
    private  Matrix4f modelMatrix;

    public Transformations() {
        projectionMatrix = new Matrix4f();
        modelMatrix = new Matrix4f();
    }

    public final Matrix4f getOrthoMatrix(float width, float height, float zNear, float zFar) {
        projectionMatrix.identity();
        projectionMatrix.ortho(0.0f, width, 0.0f, height, zNear, zFar);
        return projectionMatrix;
    }

    public Matrix4f getModelMatrix(Vector3f offset, float angleZ, float scale) {
        modelMatrix.identity().translation(offset).rotate(angleZ, 0, 0, 0).scale(scale);
        return modelMatrix;
    }
}

Here's the test for collisions:

public boolean isIn(Pos p) {
    return (p.getX() >= this.pos.getX() &&
            p.getX() <= this.pos.getX() + DIMENSION)
            && (p.getY() >= this.pos.getY() &&
            p.getY() <= this.pos.getY() + DIMENSION);
}

Also, there's a link to the github project: https://github.com/ShiroUsagi-san/opengl-engine.

I'm really new to OpenGL 3, so I may have made some really big mistakes.

I'm also running i3 as my window manager; I don't really know whether that could cause this kind of issue.


Answer:

I fixed the issue after thinking about how OpenGL and VBOs work: I was baking each entity's world position into its mesh data, so I had to change the line

Mesh fourmiMesh = MeshBuilder.buildRect(this.position.getX(), this.position.getY(), 10, 10);

to

Mesh fourmiMesh = MeshBuilder.buildRect(0, 0, 10, 10);

It was a confusion between the vertex positions stored in the VBO and the positions in my world. I hope this misunderstanding helps others understand.
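
In other words, the vertex data in the VBO stays in local space around the origin, and the world position only enters through the model matrix. A small self-contained JOML sketch of that idea (the position value is made up for illustration):

import org.joml.Matrix4f;
import org.joml.Vector3f;

public class ModelMatrixSketch {
    public static void main(String[] args) {
        // The mesh is authored around the origin; only the model matrix
        // knows where the entity lives in the world.
        Vector3f entityPos = new Vector3f(120.0f, 80.0f, 0.0f); // hypothetical entity position
        Matrix4f model = new Matrix4f().translation(entityPos).scale(1.0f);

        // A local-space vertex at the origin ends up at the entity's world position.
        Vector3f corner = new Vector3f(0.0f, 0.0f, 0.0f);
        model.transformPosition(corner); // corner is now (120, 80, 0)
        System.out.println(corner);
    }
}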

Question:

I'm new to OpenGL and am trying to draw two squares with different textures. I'm using LWJGL 3 as the interface to OpenGL, but I believe the OpenGL calls should look familiar to people who use OpenGL in other languages. My main loop looks like this:

    while (glfwWindowShouldClose(windowId) == GLFW_FALSE) {
        glClear(GL_COLOR_BUFFER_BIT);

        glUseProgram(shaderProgramId);
        glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ebo);

        // DRAW TEXTURE 1
        specifyVertexAttributes(shaderProgramId);
        glBindTexture(GL_TEXTURE_2D, texture1.getId());
        glBindBuffer(GL_ARRAY_BUFFER, vbo1);
        glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_INT, 0);

        // DRAW TEXTURE 2
        specifyVertexAttributes(shaderProgramId);
        glBindTexture(GL_TEXTURE_2D, texture2.getId());
        glBindBuffer(GL_ARRAY_BUFFER, vbo2);
        glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_INT, 0);

        glfwSwapBuffers(windowId);

        glfwPollEvents();
    }

When I comment out the code that draws texture 2, texture 1 draws in the correct place:

When I comment out the code that draws texture 1, texture 2 draws in the correct place:

But when I try to draw both of the textures, they switch places:

I realize the code snippet above is probably not sufficient to diagnose this issue. I've created a standalone Java class that contains all the OpenGL calls I'm making in order to draw these textures: StandaloneMultiTextureExample. The repo which contains that file also builds with Gradle. It should be really easy for anyone willing to help to check out the repo and run that example class.

Edit: Copy of StandaloneMultiTextureExample.java (without imports)

public class StandaloneMultiTextureExample {

    private final GLFWErrorCallback errorCallback = new LoggingErrorCallback();
    private final GLFWKeyCallback keyCallback = new ApplicationClosingKeyCallback();

    public void run() {
        if ( glfwInit() != GLFW_TRUE ) {
            throw new IllegalStateException("Unable to initialize GLFW");
        }

        glfwSetErrorCallback(errorCallback);
        int width = 225;
        int height = 200;
        long windowId = createWindow(width, height);
        glfwSetKeyCallback(windowId, keyCallback);

        glfwShowWindow(windowId);
        GL.createCapabilities();

        Texture texture1 = createTexture("multiTextureExample/texture1.png");
        Texture texture2 = createTexture("multiTextureExample/texture2.png");

        int shaderProgramId = createShaderProgram(
                "multiTextureExample/textureShader.vert",
                "multiTextureExample/textureShader.frag");

        IntBuffer elements = BufferUtils.createIntBuffer(2 * 3);
        elements.put(0).put(1).put(2);
        elements.put(2).put(3).put(0);
        elements.flip();

        int ebo = glGenBuffers();
        glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ebo);
        glBufferData(GL_ELEMENT_ARRAY_BUFFER, elements, GL_STATIC_DRAW);

        float x1 = 0;
        float y1 = 0;
        float x2 = x1 + texture1.getWidth();
        float y2 = y1 + texture1.getHeight();
        float x3 = 25;
        float x4 = x3 + texture2.getWidth();

        FloatBuffer texture1Vertices = BufferUtils.createFloatBuffer(4 * 7);
        texture1Vertices.put(x1).put(y1).put(1).put(1).put(1).put(0).put(0);
        texture1Vertices.put(x2).put(y1).put(1).put(1).put(1).put(1).put(0);
        texture1Vertices.put(x2).put(y2).put(1).put(1).put(1).put(1).put(1);
        texture1Vertices.put(x1).put(y2).put(1).put(1).put(1).put(0).put(1);
        texture1Vertices.flip();

        FloatBuffer texture2Vertices = BufferUtils.createFloatBuffer(4 * 7);
        texture2Vertices.put(x3).put(y1).put(1).put(1).put(1).put(0).put(0);
        texture2Vertices.put(x4).put(y1).put(1).put(1).put(1).put(1).put(0);
        texture2Vertices.put(x4).put(y2).put(1).put(1).put(1).put(1).put(1);
        texture2Vertices.put(x3).put(y2).put(1).put(1).put(1).put(0).put(1);
        texture2Vertices.flip();

        int vbo1 = glGenBuffers();
        glBindBuffer(GL_ARRAY_BUFFER, vbo1);
        glBufferData(GL_ARRAY_BUFFER, texture1Vertices, GL_STATIC_DRAW);

        int vbo2 = glGenBuffers();
        glBindBuffer(GL_ARRAY_BUFFER, vbo2);
        glBufferData(GL_ARRAY_BUFFER, texture2Vertices, GL_STATIC_DRAW);

        specifyUniformVariables(windowId, shaderProgramId);

        glClearColor(0.5f, 0.5f, 0.5f, 1.0f);

        while (glfwWindowShouldClose(windowId) == GLFW_FALSE) {
            glClear(GL_COLOR_BUFFER_BIT);

            glUseProgram(shaderProgramId);
            glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ebo);

            specifyVertexAttributes(shaderProgramId);
            glBindTexture(GL_TEXTURE_2D, texture1.getId());
            glBindBuffer(GL_ARRAY_BUFFER, vbo1);
            glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_INT, 0);

            specifyVertexAttributes(shaderProgramId);
            glBindTexture(GL_TEXTURE_2D, texture2.getId());
            glBindBuffer(GL_ARRAY_BUFFER, vbo2);
            glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_INT, 0);

            glfwSwapBuffers(windowId);

            glfwPollEvents();
        }
    }

    private void specifyUniformVariables(long windowId, int shaderProgramId) {
        int uniModel = getUniform(shaderProgramId, "model");
        FloatBuffer model = BufferUtils.createFloatBuffer(16);
        new Matrix4f().get(model);
        glUniformMatrix4fv(uniModel, false, model);

        FloatBuffer view = BufferUtils.createFloatBuffer(16);
        new Matrix4f().get(view);
        int uniView = getUniform(shaderProgramId, "view");
        glUniformMatrix4fv(uniView, false, view);

        WindowSize windowSize = getWindowSize(windowId);
        int uniProjection = getUniform(shaderProgramId, "projection");
        FloatBuffer projection = BufferUtils.createFloatBuffer(16);
        new Matrix4f().ortho2D(0, windowSize.getWidth(), 0, windowSize.getHeight()).get(projection);
        glUniformMatrix4fv(uniProjection, false, projection);
    }

    private void specifyVertexAttributes(int shaderProgramId) {
        int stride = 7 * Float.BYTES;

        int posAttrib = getAttribute(shaderProgramId, "position");
        glEnableVertexAttribArray(posAttrib);
        glVertexAttribPointer(posAttrib, 2, GL_FLOAT, false, stride, 0);

        int colAttrib = getAttribute(shaderProgramId, "color");
        glEnableVertexAttribArray(colAttrib);
        glVertexAttribPointer(colAttrib, 3, GL_FLOAT, false, stride, 2 * Float.BYTES);

        int texAttrib = getAttribute(shaderProgramId, "texcoord");
        glEnableVertexAttribArray(texAttrib);
        glVertexAttribPointer(texAttrib, 2, GL_FLOAT, false, stride, 5 * Float.BYTES);
    }

    public long createWindow(int width, int height) {
        glfwDefaultWindowHints();
        glfwWindowHint(GLFW_VISIBLE, GLFW_FALSE);
        glfwWindowHint(GLFW_RESIZABLE, GLFW_TRUE);

        glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
        glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 0);

        long windowId = glfwCreateWindow(width, height, "Hello World!", NULL, NULL);
        if (windowId == NULL) {
            throw new RuntimeException("Failed to create the GLFW window");
        }

        // Get the resolution of the primary monitor
        GLFWVidMode vidmode = glfwGetVideoMode(glfwGetPrimaryMonitor());
        // Center our window
        glfwSetWindowPos(
            windowId,
            (vidmode.width() - width) / 2,
            (vidmode.height() - height) / 2
        );

        // Make the OpenGL context current
        glfwMakeContextCurrent(windowId);
        // Enable v-sync
        glfwSwapInterval(1);

        return windowId;
    }

    public WindowSize getWindowSize(long windowId) {
        IntBuffer width = BufferUtils.createIntBuffer(1);
        IntBuffer height = BufferUtils.createIntBuffer(1);
        GLFW.glfwGetFramebufferSize(windowId, width, height);
        return new WindowSize(width.get(), height.get());
    }

    public Texture createTexture(String textureResource) {
        int textureId = glGenTextures();
        glBindTexture(GL_TEXTURE_2D, textureId);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_BORDER);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_BORDER);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

        ARGBImage image = new ImageService().loadClasspathImage(textureResource);
        glTexImage2D(
                GL_TEXTURE_2D,
                0,
                GL_RGBA8,
                image.getWidth(),
                image.getHeight(),
                0,
                GL_RGBA,
                GL_UNSIGNED_BYTE,
                image.getContents());

        return new Texture(textureId, image.getWidth(), image.getHeight());
    }

    public int createShaderProgram(String vertexResource, String fragmentResource) {
        int vertexShader = glCreateShader(GL_VERTEX_SHADER);
        String vertexSource = getClasspathResource(vertexResource);
        glShaderSource(vertexShader, vertexSource);
        glCompileShader(vertexShader);
        validateShaderCompilation(vertexShader);

        int fragmentShader = glCreateShader(GL_FRAGMENT_SHADER);
        String fragmentSource = getClasspathResource(fragmentResource);
        glShaderSource(fragmentShader, fragmentSource);
        glCompileShader(fragmentShader);
        validateShaderCompilation(fragmentShader);

        int shaderProgramId = glCreateProgram();
        glAttachShader(shaderProgramId, vertexShader);
        glAttachShader(shaderProgramId, fragmentShader);
        glLinkProgram(shaderProgramId);
        validateShaderProgram(shaderProgramId);
        glUseProgram(shaderProgramId);

        return shaderProgramId;
    }

    private static String getClasspathResource(String resourceName) {
        URL url = Resources.getResource(resourceName);
        try {
            return Resources.toString(url, Charsets.UTF_8);
        } catch (IOException e) {
            throw Throwables.propagate(e);
        }
    }

    private static void validateShaderCompilation(int shader) {
        int status = glGetShaderi(shader, GL_COMPILE_STATUS);
        if (status != GL_TRUE) {
            throw new RuntimeException(glGetShaderInfoLog(shader));
        }
    }

    private static void validateShaderProgram(int shaderProgram) {
        int status = glGetProgrami(shaderProgram, GL_LINK_STATUS);
        if (status != GL_TRUE) {
            throw new RuntimeException(glGetProgramInfoLog(shaderProgram));
        }
    }

    public int getUniform(int shaderProgramId, String uniformName) {
        int location = glGetUniformLocation(shaderProgramId, uniformName);
        if (location == -1) {
            throw new IllegalArgumentException("Could not find uniform: "
                    + uniformName + " for shaderProgramId: " + shaderProgramId);
        } else {
            return location;
        }
    }

    public int getAttribute(int shaderProgramId, String attribute) {
        int location = glGetAttribLocation(shaderProgramId, attribute);
        if (location == -1) {
            throw new IllegalArgumentException("Could not find attribute: "
                    + attribute + " for shaderProgramId: " + shaderProgramId);
        } else {
            return location;
        }
    }

    public static void main(String[] args) {
        new StandaloneMultiTextureExample().run();
    }

}

Answer:

You're setting up your vertex arrays incorrectly, by binding the vertex buffer at the wrong time.

When you're using vertex array objects (VAOs) to render [1], the only time the GL_ARRAY_BUFFER binding is read is in the glVertexAttribPointer call. Calling glVertexAttribPointer takes the name of the currently bound GL_ARRAY_BUFFER object and associates it with the attribute and the VAO; after that, the GL_ARRAY_BUFFER binding doesn't matter at all, and binding another array buffer will not modify the VAO in any way.

In your code, you call specifyVertexAttributes to set up your VAO before you call glBindBuffer. This means that the array buffer glVertexAttribPointer saves is the previously used one. In your first two examples, where you only ever bind one array buffer at any point in time, it "works" because the buffer bound in the previous frame persists and is read in the next frame; if you paused your program on the very first frame, it would likely be black.

The solution in your case is simple; move the glBindBuffer call above the specifyVertexAttributes call, so that your glVertexAttribPointer calls read the proper buffer.
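
A sketch of the reordered draw calls inside the loop, using the names from the question:

// DRAW TEXTURE 1: bind the vertex buffer first, then point the attributes at it
glBindTexture(GL_TEXTURE_2D, texture1.getId());
glBindBuffer(GL_ARRAY_BUFFER, vbo1);
specifyVertexAttributes(shaderProgramId);   // glVertexAttribPointer now records vbo1
glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_INT, 0);

// DRAW TEXTURE 2: same pattern with vbo2
glBindTexture(GL_TEXTURE_2D, texture2.getId());
glBindBuffer(GL_ARRAY_BUFFER, vbo2);
specifyVertexAttributes(shaderProgramId);
glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_INT, 0);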

Note that this does not apply with the GL_ELEMENT_ARRAY_BUFFER binding; the binding is saved in the VAO whenever you bind a new one.

[1] You're technically using the default VAO, which is only supported in a compatibility context, though it's easy to create and bind a single global VAO that's used all the time.