Hot questions about using the Lightweight Java Game Library (LWJGL) in libGDX

Question:

After updating the NVIDIA drivers to 378.49 on an EVGA GTX 1080 FTW, I started getting this exception when using libGDX:

Exception in thread "LWJGL Application" com.badlogic.gdx.utils.GdxRuntimeException: OpenGL is not supported by the video driver.
    at com.badlogic.gdx.backends.lwjgl.LwjglGraphics.createDisplayPixelFormat(LwjglGraphics.java:229)
    at com.badlogic.gdx.backends.lwjgl.LwjglGraphics.setupDisplay(LwjglGraphics.java:174)
    at com.badlogic.gdx.backends.lwjgl.LwjglApplication.mainLoop(LwjglApplication.java:138)
    at com.badlogic.gdx.backends.lwjgl.LwjglApplication$1.run(LwjglApplication.java:120)
Caused by: org.lwjgl.LWJGLException: Pixel format not accelerated
    at org.lwjgl.opengl.WindowsPeerInfo.nChoosePixelFormat(Native Method)
    at org.lwjgl.opengl.WindowsPeerInfo.choosePixelFormat(WindowsPeerInfo.java:52)
    at org.lwjgl.opengl.WindowsDisplay.createWindow(WindowsDisplay.java:253)
    at org.lwjgl.opengl.Display.createWindow(Display.java:306)
    at org.lwjgl.opengl.Display.create(Display.java:848)
    at org.lwjgl.opengl.Display.create(Display.java:757)
    at com.badlogic.gdx.backends.lwjgl.LwjglGraphics.createDisplayPixelFormat(LwjglGraphics.java:220)
    ... 3 more

OpenGL Extensions Viewer shows that OpenGL version 4.5 is available on my GPU.

I've tried forcing the JRE executables to run on my NVIDIA GPU (they were in fact already running on it, but I wanted to make sure).

Other OpenGL-based apps run fine. I've also tried running a compiled libGDX game from Steam, and it seems to run just fine.

I've tried different JREs with different Java versions, performed a clean driver reinstall, and rebooted several times.

The exception appears in both Android Studio and IntelliJ.

config.allowSoftwareMode = true; doesn't work (and shouldn't): Windows only supports software rendering for OpenGL 1.1, while libGDX requires 2.0.


Answer:

I had the same issue:

Exception in thread "LWJGL Application" com.badlogic.gdx.utils.GdxRuntimeException: OpenGL is not supported by the video driver. 
I downloaded and installed the previous NVIDIA driver, 376.33, and that solved the issue (Windows 10, 64-bit).

Question:

I have an undecorated window that needs to be centered. It is created with this configuration:

Lwjgl3ApplicationConfiguration configuration = new Lwjgl3ApplicationConfiguration();
configuration.setIdleFPS(60);
configuration.setBackBufferConfig(8, 8, 8, 8, 16, 0, 0);
configuration.setWindowedMode(1920, 1080);
configuration.setTitle("Title");
configuration.setDecorated(false);
configuration.setResizable(false);

Later, in the app, the user can change the window size through an options menu, using presets derived from a specific aspect ratio. The resize is done with this call:

Gdx.graphics.setWindowedMode(width, height)

This seems to keep the window anchored at its original top-left corner position (which can be anywhere on the screen), but I want it centered on the monitor, or at least a way to move the window to any desired position at will.

The question: how can I keep the window created by libGDX's Lwjgl3Application centered when changing the window size with setWindowedMode()?


Answer:

@Tenfour04 pointed out, in response to the old answer below, that you can get the Lwjgl3Window instance with

Lwjgl3Window window = ((Lwjgl3Graphics)Gdx.graphics).getWindow();

You can then use that to set the position, for example during a resize event:

window.setPosition(x, y);
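A minimal sketch of one way to combine those calls to both resize and re-center the window (the helper class and method name here are just illustrative; it assumes the lwjgl3 backend and uses getDisplayMode() to obtain the resolution of the monitor the window is on):

import com.badlogic.gdx.Gdx;
import com.badlogic.gdx.Graphics;
import com.badlogic.gdx.backends.lwjgl3.Lwjgl3Graphics;
import com.badlogic.gdx.backends.lwjgl3.Lwjgl3Window;

public final class WindowUtils {
    // Resizes the window and re-centers it on the current monitor.
    public static void setWindowedModeCentered(int width, int height) {
        Lwjgl3Window window = ((Lwjgl3Graphics) Gdx.graphics).getWindow();
        Graphics.DisplayMode mode = Gdx.graphics.getDisplayMode(); // monitor resolution
        Gdx.graphics.setWindowedMode(width, height);
        window.setPosition((mode.width - width) / 2, (mode.height - height) / 2);
    }
}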
Old answer:

I originally solved this using reflection:

public void setWindowSize(int width, int height) {
  try {
    Lwjgl3Application app = (Lwjgl3Application) Gdx.app;
    Field windowField = Lwjgl3Application.class.getDeclaredField("currentWindow");
    if (windowField.trySetAccessible()) {
      Lwjgl3Window window = (Lwjgl3Window) windowField.get(app);
      Gdx.graphics.setWindowedMode(width, height);
      // Can use the graphics size because the window has no decorations
      window.setPosition(Gdx.graphics.getWidth() / 2 - width / 2,
                         Gdx.graphics.getHeight() / 2 - height / 2);
    }
  } catch (NoSuchFieldException | IllegalAccessException e) {
    throw new RuntimeException(e);
  }
}

Warning: even though this works, it is not a good solution. The field is kept private for a reason, and because it is not exposed through the API it can change in any update, leaving you with a mess.

That being said, I'm posting this solution for people as desperate as I was, and because I'm not sure there is a proper alternative yet. I will eagerly await a better solution, though.

Question:

Is there any way to use libGDX (the version does not matter) without OpenGL ES? I would like to target desktop PCs only and work with OpenGL 2.0 (or higher, if possible). libGDX supports OpenGL 2.0 through 4.5, but it only seems to offer raw LWJGL calls to do so. I would like to use the whole of libGDX for OpenGL 2.0 without any custom LWJGL calls.


Answer:

It is not entirely clear what exactly you want to do, but to answer your question: you don't have to do anything. libGDX already uses OpenGL on the desktop. It only uses OpenGL ES on Android and iOS, and WebGL for HTML.

Of course, because libGDX targets all those platforms, its classes are implemented using only the functionality available in both OpenGL and OpenGL ES. If you only want to target desktop, you can access the OpenGL methods directly in your desktop project; in that case you don't need a core project or any project other than the desktop one. How you access them depends on the backend you are using (lwjgl, lwjgl3 or jglfw); consult the documentation of those frameworks for more information. For example, you could directly call org.lwjgl.opengl.GL12.glTexSubImage3D or any other OpenGL method you need.
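As a small illustration (a sketch assuming the lwjgl backend, i.e. LWJGL 2 on the desktop; the class and method names are just for this example), here is a desktop-only GL call that is not available through Gdx.gl because it does not exist in OpenGL ES:

import org.lwjgl.opengl.GL11;

public final class DesktopOnlyGl {
    // Runs the given draw calls in wireframe mode, then restores filled polygons.
    // glPolygonMode exists in desktop OpenGL but not in OpenGL ES, so it must be
    // called through LWJGL directly rather than through the Gdx.gl interfaces.
    public static void renderWireframe(Runnable drawCalls) {
        GL11.glPolygonMode(GL11.GL_FRONT_AND_BACK, GL11.GL_LINE);
        drawCalls.run();
        GL11.glPolygonMode(GL11.GL_FRONT_AND_BACK, GL11.GL_FILL);
    }
}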

The config of your launcher defines which version of OpenGL is used. If you want to specify it, set config.useGL30 to true and then choose the exact version using config.gles30ContextMajorVersion and config.gles30ContextMinorVersion, which default to version 3.2. If you don't set useGL30 to true, no specific context is requested, which means the driver will be more forgiving; the result is comparable with (but not equal to) roughly OpenGL ES 2.
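For example, a desktop launcher requesting a specific context might look like this (a sketch assuming the lwjgl backend; MyGame stands in for your own ApplicationListener):

import com.badlogic.gdx.backends.lwjgl.LwjglApplication;
import com.badlogic.gdx.backends.lwjgl.LwjglApplicationConfiguration;

public class DesktopLauncher {
    public static void main(String[] args) {
        LwjglApplicationConfiguration config = new LwjglApplicationConfiguration();
        config.useGL30 = true;                // request a GL30-style core context
        config.gles30ContextMajorVersion = 4; // e.g. ask for an OpenGL 4.3 context
        config.gles30ContextMinorVersion = 3;
        new LwjglApplication(new MyGame(), config); // MyGame is a placeholder for your game class
    }
}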

Question:

Using libGDX, I've written a very simple (and my first) fragment shader, which takes two textures: the first is the image to draw to the screen and the second is an alpha-transparency mask. Here is the fragment shader:

#ifdef GL_ES
    precision mediump float;
#endif

varying vec4 v_color;
varying vec2 v_texCoords;
uniform sampler2D u_texture;
uniform sampler2D u_mask;

void main() {
    vec4 texColor = texture2D(u_texture, v_texCoords);
    vec4 maskColor = texture2D(u_mask, v_texCoords);
    gl_FragColor = vec4(
        vec3(v_color * texColor),
        maskColor.a
    );
}

u_texture is the image to draw and u_mask is the texture with the transparency information.

However, what I really want to do is utilise the Sprite and TextureAtlas classes to refer to a couple of TextureRegion instances for the shader. Here is my rendering code:

shaderBatch.setProjectionMatrix(camera.combined);
shaderBatch.begin();

// ... some housekeeping code ...

Sprite overlapAlphaMask = maskTextureAtlas.createSprite("mask", 4);
Sprite overlapSprite = spriteTextureAtlas.createSprite("some-tile", 8);
// Or some other index in the texture atlas

overlapAlphaMask.getTexture().bind(1);
alphaMaskShader.setUniformi("u_mask", 1);

overlapSprite.getTexture().bind(0);
alphaMaskShader.setUniformi("u_texture", 0);

shaderBatch.draw(overlapSprite, worldX, worldY, 1.0f, 1.0f);

Although this at least runs and renders something, it picks up the wrong texture region from the maskTextureAtlas. My guess is there's more to do here, as the shader has no knowledge of the overlapAlphaMask sprite: how to draw it, what its texture coordinates are, and so on.

I'm assuming the SpriteBatch.draw() method takes care of picking up the correct information from the overlapSprite that's passed in, so I expect the vec2 v_texCoords in the shader is set correctly for drawing it, but those coordinates are wrong for the second texture / sampler2D uniform. This is my first attempt at using shaders, so I'm sure I'm missing something basic!

--- Update ---

So far my googling suggests I may need to set up something more via the vertex shader. I'm using this (default?) libGDX vertex shader:

attribute vec4 a_position;
attribute vec4 a_color;
attribute vec2 a_texCoord0;

uniform mat4 u_projTrans;

varying vec4 v_color;
varying vec2 v_texCoords;

void main() {
    v_color = a_color;
    v_texCoords = a_texCoord0;
    gl_Position = u_projTrans * a_position;
}

Answer:

As Dietrich Epp pointed out, what I need is to send the extra texture coordinates to the vertex and fragment shaders through the vertex buffer. I've achieved this by writing my own implementation of the SpriteBatch class, which has two textures (not TextureRegions) set at a time and batches drawing with a pair of slightly extended shaders, as follows:

Vertex Shader

attribute vec4 a_position;
attribute vec4 a_color;
attribute vec2 a_texCoord0;
attribute vec2 a_texCoord1;

uniform mat4 u_projTrans;

varying vec4 v_color;
varying vec2 v_texCoords0;
varying vec2 v_texCoords1;

void main() {
    v_color = a_color;
    v_texCoords0 = a_texCoord0;
    v_texCoords1 = a_texCoord1;
    gl_Position = u_projTrans * a_position;
}

This is the default vertex shader, but with the single attribute vec2 a_texCoord0 replaced by two sets of texture coordinates and matching varying vec2 v_texCoords0 / v_texCoords1 values passed along to the fragment shader. The extra attribute vec2 is then supplied through the vertex buffer.

Fragment Shader

#ifdef GL_ES
    precision mediump float;
#endif

varying vec4 v_color;
varying vec2 v_texCoords0;
varying vec2 v_texCoords1;
uniform sampler2D u_texture0;
uniform sampler2D u_texture1;

void main() {
    vec4 texColor = texture2D(u_texture0, v_texCoords0);
    vec4 maskColor = texture2D(u_texture1, v_texCoords1);
    gl_FragColor = vec4(
        vec3(v_color * texColor),
        maskColor.a
    );
}

Similarly, the fragment shader now receives two sampler2D textures to sample from, and a pair of texture co-ordinates to refer to for each.

The key to the changes in the SpriteBatch class is extending the Mesh definition in the constructor:

    mesh = new Mesh(Mesh.VertexDataType.VertexArray, false, size * 4, size * 6,
            new VertexAttribute(VertexAttributes.Usage.Position, 2, ShaderProgram.POSITION_ATTRIBUTE),
            new VertexAttribute(VertexAttributes.Usage.ColorPacked, 4, ShaderProgram.COLOR_ATTRIBUTE),
            new VertexAttribute(VertexAttributes.Usage.TextureCoordinates, 2, ShaderProgram.TEXCOORD_ATTRIBUTE + "0"),
            new VertexAttribute(VertexAttributes.Usage.TextureCoordinates, 2, ShaderProgram.TEXCOORD_ATTRIBUTE + "1"));

The final line above is the new addition: a second pair of coordinates for a_texCoord1. The draw() method is passed two Sprite or TextureRegion instances and is extended slightly to pass the two extra floats for each vertex into the vertex array (a sketch of that packing follows the uniform setup below). Finally, a slight addition to the shader setup binds both textures and sets the uniforms like so:

    shader.setUniformi("u_texture0", 0);
    shader.setUniformi("u_texture1", 1);

And it works! My temporary solution had been to draw the two sprites on top of each other, writing the transparency information into the display buffer, but because SpriteBatch flushes for each texture change and my images are split across several textures, that resulted in an unacceptable loss of performance, which this solution has fixed :)

Question:

I am in the process of implementing a lens glow effect for my engine.

However, attempting to use an occlusion query only returns true when the fragments in question are completely occluded.

Perhaps the problem lies in that I am manually writing to the z-value of each vertex, since I am using a logarithmic depth buffer. However, I am not sure why this would affect occlusion testing.

Here are the relevant code snippets:

public class Query implements Disposable{
    private final int id;
    private final int type;

    private boolean inUse = false;

    public Query(int type){
        this.type = type;
        int[] arr = new int[1];
        Gdx.gl30.glGenQueries(1,arr,0);
        id = arr[0];
    }

    public void start(){
        Gdx.gl30.glBeginQuery(type, id);
        inUse = true;
    }

    public void end(){
        Gdx.gl30.glEndQuery(type);
    }

    public boolean isResultReady(){
        IntBuffer result = BufferUtils.newIntBuffer(1);
        Gdx.gl30.glGetQueryObjectuiv(id,Gdx.gl30.GL_QUERY_RESULT_AVAILABLE, result);
        return result.get(0) == Gdx.gl.GL_TRUE;
    }

    public int getResult(){
        inUse = false;
        IntBuffer result = BufferUtils.newIntBuffer(1);
        Gdx.gl30.glGetQueryObjectuiv(id, Gdx.gl30.GL_QUERY_RESULT, result);
        return result.get(0);
    }

    public boolean isInUse(){
        return inUse;
    }

    @Override
    public void dispose() {
        Gdx.gl30.glDeleteQueries(1, new int[]{id},0);
    }
}

Here is the method where I do the actual test:

private void doOcclusionTest(Camera cam){
    if(query.isResultReady()){
        int visibleSamples = query.getResult();
        System.out.println(visibleSamples);
    }

    // Offset the test point from the glow's position towards the camera by getSize() * 10
    temp4.set(cam.getPosition());
    temp4.sub(position);
    temp4.normalize();
    temp4.mul(getSize()*10);
    temp4.add(position);
    occlusionTestPoint.setPosition(temp4.x,temp4.y,temp4.z);

    // Render the test point between glBeginQuery/glEndQuery with depth testing enabled
    if(!query.isInUse()) {
        query.start();
        Gdx.gl.glEnable(Gdx.gl.GL_DEPTH_TEST);
        occlusionTestPoint.render(renderer.getPointShader(), cam);
        query.end();
    }
}

My vertex shader for a point, with logarithmic depth buffer calculations included:

#version 330 core
layout (location = 0) in vec3 aPos;

uniform mat4 modelView;
uniform mat4 projection;
uniform float og_farPlaneDistance;
uniform float u_logarithmicDepthConstant;

vec4 modelToClipCoordinates(vec4 position, mat4 modelViewPerspectiveMatrix, float depthConstant, float farPlaneDistance){
    vec4 clip = modelViewPerspectiveMatrix * position;

    clip.z = ((2.0 * log(depthConstant * clip.z + 1.0) / log(depthConstant * farPlaneDistance + 1.0)) - 1.0) * clip.w;
    return clip;
}

void main()
{
    gl_Position = modelToClipCoordinates(vec4(aPos, 1.0), projection * modelView, u_logarithmicDepthConstant, og_farPlaneDistance);
}

Fragment shader for a point:

#version 330 core

uniform vec4 color;

void main() {
    gl_FragColor = color;
}

Since I am only testing occlusion for a single point, I know the alternative would be simply to check the depth value at that pixel after everything is rendered. However, I am unsure how I would calculate the logarithmic z-value of a point on the CPU.


Answer:

I have found a solution to my problem. It is a workaround that is only practical for single points, not for entire models, but here it goes:

Firstly, you must calculate the z-value of your point and the pixel coordinate where it lies. Calculating the z-value should be straightforward; however, in my case I was using a logarithmic depth buffer, so I had to make a few extra calculations for the z-value.

Here is my method to get the coordinates in normalized device coordinates, including the z-value (temp4f can be any Vector4f):

public Vector4f worldSpaceToDeviceCoords(Vector4f pos){
    temp4f.set(pos);
    Matrix4f projection = transformation.getProjectionMatrix(FOV, screenWidth,screenHeight,1f,MAXVIEWDISTANCE);
    Matrix4f view = transformation.getViewMatrix(camera);
    view.transform(temp4f); //Multiply the point vector by the view matrix
    projection.transform(temp4f); //Multiply the point vector by the projection matrix


    temp4f.x = ((temp4f.x / temp4f.w) + 1) / 2f; //Convert x coordinate to range between 0 to 1
    temp4f.y = ((temp4f.y / temp4f.w) + 1) / 2f; //Convert y coordinate to range between 0 to 1

    //Logarithmic depth buffer z-value calculation (Get rid of this if not using a logarithmic depth buffer)
    temp4f.z = ((2.0f * (float)Math.log(LOGDEPTHCONSTANT * temp4f.z + 1.0f) /
            (float)Math.log(LOGDEPTHCONSTANT * MAXVIEWDISTANCE + 1.0f)) - 1.0f) * temp4f.w;

    temp4f.z /= temp4f.w; //Perform perspective division on the z-value
    temp4f.z = (temp4f.z + 1)/2f; //Transform z coordinate into range 0 to 1

    return temp4f;
}

And this other method is used to get the coordinates of the pixel on the screen (temp2f can be any Vector2f):

public Vector2f projectPoint(Vector3f position){
    temp4f.set(worldSpaceToDeviceCoords(temp4f.set(position.x,position.y,position.z, 1)));
    temp4f.x*=screenWidth;
    temp4f.y*=screenHeight;

    //If the point is not visible, return null
    if (temp4f.w < 0){
        return null;
    }

    return temp2f.set(temp4f.x,temp4f.y);
}

Finally, a method to get the stored depth value at a given pixel (outBuff is any direct FloatBuffer):

public float getFramebufferDepthComponent(int x, int y){
    Gdx.gl.glReadPixels(x,y,1,1,Gdx.gl.GL_DEPTH_COMPONENT,Gdx.gl.GL_FLOAT,outBuff);
    return outBuff.get(0);
}

So with these methods, what you need to do to find out whether a certain point is occluded is this (a sketch combining the methods follows the note below):

  1. Check at what pixel the point lies(second method)
  2. Retrieve the current stored z-value at that pixel(third method)
  3. Get the calculated z-value of the point(first method)
  4. If the calculated z-value is lower than the stored z-value, then the point is visible

Please note that you should draw everything in the scene before sampling the depth buffer, otherwise the extracted depth buffer value will not reflect all that is rendered.
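Putting this together, here is a rough sketch of how the three methods above could be combined, assuming it lives in the same class so the shared temp vectors are in scope; the method name and the small DEPTH_BIAS tolerance are assumptions added for illustration:

// Illustrative combination of the methods above (not from the original code).
// DEPTH_BIAS is an assumed small tolerance to reduce flicker from precision error.
private static final float DEPTH_BIAS = 0.0001f;

public boolean isPointVisible(Vector3f worldPos) {
    Vector2f pixel = projectPoint(worldPos);          // step 1: pixel the point lands on
    if (pixel == null) {
        return false;                                 // point is behind the camera
    }
    float storedDepth = getFramebufferDepthComponent( // step 2: depth already in the buffer
            (int) pixel.x, (int) pixel.y);
    float pointDepth = worldSpaceToDeviceCoords(      // step 3: the point's own depth value
            temp4f.set(worldPos.x, worldPos.y, worldPos.z, 1)).z;
    return pointDepth < storedDepth + DEPTH_BIAS;     // step 4: nearer than stored depth => visible
}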