Hot questions for using the Lightweight Java Game Library (LWJGL) on Windows

Question:

I'm using Java 8 and LWJGL to make a game engine with GLFW and OpenGL. I have a generic IndexedVAO class with all my VAO code in it to simplify things. Here are the relevant parts:

Constructor

    GL30.glBindVertexArray(vertexArrayObject);
    GL15.glBindBuffer(GL15.GL_ARRAY_BUFFER, bufferObject);
    GL15.glBindBuffer(GL15.GL_ELEMENT_ARRAY_BUFFER, indexBufferObject);
    for(VertexAttribPointer prr : format.parts) {
        GL20.glEnableVertexAttribArray(prr.index);
        GL20.glVertexAttribPointer(prr.index, prr.size, prr.type,
            prr.normalized, prr.stride, prr.ptr);
    }
    GL15.glBindBuffer(GL15.GL_ARRAY_BUFFER, 0);
    GL30.glBindVertexArray(0);

Upload Function

    data.flip();
    index.flip();
    this.numberOfIndices = index.limit() / 2; // 2 bytes per unsigned-short index
    GL15.glBindBuffer(GL15.GL_ARRAY_BUFFER, bufferObject);
    GL15.glBufferData(GL15.GL_ARRAY_BUFFER, data, bufferUse);
    GL15.glBindBuffer(GL15.GL_ARRAY_BUFFER, 0);
    GL15.glBindBuffer(GL15.GL_ELEMENT_ARRAY_BUFFER, indexBufferObject);
    GL15.glBufferData(GL15.GL_ELEMENT_ARRAY_BUFFER, index, bufferUse);
    GL15.glBindBuffer(GL15.GL_ELEMENT_ARRAY_BUFFER, 0);

Draw Function

    GL30.glBindVertexArray(vertexArrayObject);
    GL11.glDrawElements(this.drawmode, this.numberOfIndices, GL11.GL_UNSIGNED_SHORT, 0L);
    GL30.glBindVertexArray(0);

The code works fine on Linux, but today I tried it on a Windows machine and got an EXCEPTION_ACCESS_VIOLATION JVM crash. Checking the hs_err_pid#### file the JVM generates when it crashes, I found the error was caused by a call to glDrawElements. It was the first glDrawElements call in the whole application, and commenting it out just moved the exception to the next one. I spent my whole afternoon moving code around and doing research and got nowhere. It's not shader-related, since glDrawArrays works in its place, and because the same Java code works fine on Linux, it can't be the vertex generation code either.

One major hardware difference between the two machines is that the Windows machine has an older Radeon graphics card while the Linux machine has a recent GeForce card; both have the latest drivers. I booted Linux on the Radeon machine to see if it was an inconsistency between vendors, but after waiting 30 minutes for Java to install, everything worked fine, which means this is OS-specific. To verify, I had a friend test it on his Windows 10 machine and he also got the EXCEPTION_ACCESS_VIOLATION.

TL;DR: the above code works on Linux, but on Windows it causes an EXCEPTION_ACCESS_VIOLATION JVM crash.


Answer:

Thanks @derhass and @Spektre: the problem was that AMD's Windows drivers couldn't properly handle vertex attributes that weren't aligned on 4-byte boundaries, so using bytes to store normals or RGB colors crashed the driver, because those attributes were only three bytes long. It's odd that it worked on Linux even though the same card was reading the vertex data.
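
For illustration, here is a minimal sketch of the workaround: pad byte-sized attributes so that every attribute offset and the stride land on 4-byte boundaries. The attribute indices and layout below are assumptions, not the original format:

    // Hypothetical layout: 3 floats of position + 3 unsigned-byte colors.
    // A 15-byte vertex (12 + 3) leaves the next vertex misaligned, so one
    // padding byte brings the stride up to 16, a multiple of 4.
    int stride = 3 * Float.BYTES + 3 + 1; // position + color + padding

    GL20.glVertexAttribPointer(0, 3, GL11.GL_FLOAT, false, stride, 0L);
    GL20.glVertexAttribPointer(1, 3, GL11.GL_UNSIGNED_BYTE, true, stride, 12L);

    // When filling the vertex buffer, write the padding byte explicitly:
    // data.putFloat(x).putFloat(y).putFloat(z)
    //     .put(r).put(g).put(b)
    //     .put((byte) 0); // padding to the next 4-byte boundary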

Question:

I have been working on a simple LWJGL practice project for a bit and just recently stumbled into a new issue. With every project I do, I save working backups in multiple places. I was running Windows 8.1 and recently upgraded to Windows 10, and suddenly all of my backups and my current program have the rendering error seen below. I've tried running my program on a Windows 8.1 machine and it works fine there.

Is there some new Windows driver update that could be causing this rendering issue?

Or is there some incompatibility between Windows 10 and LWJGL that I haven't been able to find out about?


Answer:

Try this: before you render any objects into your world, make sure you have enabled the built-in depth testing. This can be done with the following line of code:

GL11.glEnable(GL11.GL_DEPTH_TEST);

This should work for you; forgetting to enable depth testing is one of the most common causes of this kind of rendering error!

-Kore

EDIT: Also don't forget to clear the depth buffer along with the color buffer each frame: GL11.glClear(GL11.GL_COLOR_BUFFER_BIT | GL11.GL_DEPTH_BUFFER_BIT);
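
As a minimal sketch of where these calls belong (the loop shape and the Display calls assume LWJGL 2; with LWJGL 3/GLFW the buffer swap differs, but the ordering is the same):

    // One-time setup, after the OpenGL context exists:
    GL11.glEnable(GL11.GL_DEPTH_TEST);

    while (!Display.isCloseRequested()) {
        // Clear BOTH buffers at the start of every frame:
        GL11.glClear(GL11.GL_COLOR_BUFFER_BIT | GL11.GL_DEPTH_BUFFER_BIT);

        // ... render objects here ...

        Display.update();
    }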

Question:

My intent is to benchmark the difference between CPU and GPU. The problem is that I am only able to retrieve my GPUs; my CPU is not showing up.

This program produces correct output on OS X, where the CPU as well as the GPUs are listed:

public static void displayInfo() {

    for (int platformIndex = 0; platformIndex < CLPlatform.getPlatforms().size(); platformIndex++) {
        CLPlatform platform = CLPlatform.getPlatforms().get(platformIndex);

        System.out.println("Platform #" + platformIndex + ":" + platform.getInfoString(CL_PLATFORM_NAME));

        List<CLDevice> devices = platform.getDevices(CL_DEVICE_TYPE_ALL);
        for (int deviceIndex = 0; deviceIndex < devices.size(); deviceIndex++) {
            CLDevice device = devices.get(deviceIndex);
            System.out.printf(Locale.ENGLISH, "Device #%d(%s):%s\n",
                    deviceIndex,
                    UtilCL.getDeviceType(device.getInfoInt(CL_DEVICE_TYPE)),
                    device.getInfoString(CL_DEVICE_NAME));
        }
    }
}

My output:

Platform #0:NVIDIA CUDA

Device #0(GPU):GeForce GTX 560 Ti

Device #1(GPU):GeForce GTX 560 Ti

My PC:

  • Intel i5-2500K
  • Windows 8.1 Pro (x64)
  • 2x GeForce GTX 560 Ti

I am using LWJGL version 2.8.4.

Why am I not able to retrieve my CPU?


Answer:

(re-posting answer from comment)

OS X is somewhat unique in that it comes with OpenCL CPU drivers pre-installed, so OpenCL works 'out of the box'. On Windows and Linux, you need to install an OpenCL runtime/driver for your CPU. For Intel CPUs, you can find them here:

https://software.intel.com/en-us/articles/opencl-drivers
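
Once a runtime is installed, a quick way to confirm the CPU is visible is to query for CPU devices explicitly, using the same LWJGL OpenCL API as the question's code (the null check is defensive, since getDevices may return null when nothing matches):

    // Sketch: list only CPU devices; an empty result on every platform
    // means no CPU OpenCL runtime/driver is installed.
    for (CLPlatform platform : CLPlatform.getPlatforms()) {
        List<CLDevice> cpus = platform.getDevices(CL_DEVICE_TYPE_CPU);
        if (cpus != null && !cpus.isEmpty()) {
            System.out.println(platform.getInfoString(CL_PLATFORM_NAME)
                    + " exposes " + cpus.size() + " CPU device(s)");
        }
    }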

Question:

This is the part of the source code where the problem resides:

GL.createCapabilities();            
// Define the viewport dimensions
glViewport(0, 0, 300, 300);

int shaderProgram;
final String vertexShader = "#version 330 core\n in vec3 position; // The position variable has attribute position 0\n out vec4 vertexColor; // Specify a color output to the fragment shader\n void main()\n {\n gl_Position = vec4(position, 1.0); // See how we directly give a vec3 to vec4's constructor\n vertexColor = vec4(0.5f, 0.0f, 0.0f, 1.0f); // Set the output variable to a dark-red color\n }";

String fragmentShader = "#version 330 core\n in vec3 ourColor;\n"
                + "in vec2 TexCoord;\n"
                + "out vec4 color;\n"
                + "uniform sampler2D ourTexture1;\n"
                + "void main()\n"
                + "{\n"
                + "color = vec4(ourColor, 1.0f);\n"
                + "}";

int vertex, fragment;
// Vertex Shader
vertex = GL20.glCreateShader(GL20.GL_VERTEX_SHADER);
GL20.glShaderSource(vertex, vertexShader);
GL20.glCompileShader(vertex);
// Fragment Shader
fragment = GL20.glCreateShader(GL20.GL_FRAGMENT_SHADER);
GL20.glShaderSource(fragment, fragmentShader);
GL20.glCompileShader(fragment);

//create program and bind shaders to program
shaderProgram = GL20.glCreateProgram();
GL20.glAttachShader(shaderProgram, vertex);
GL20.glAttachShader(shaderProgram, fragment);
GL20.glLinkProgram(shaderProgram);
int vlength = GL20.GL_SHADER_SOURCE_LENGTH;
int iscompiled = GL20.glGetProgrami(fragment, GL20.GL_COMPILE_STATUS);
if(iscompiled == GL_FALSE)
{
     System.out.println(glGetString(GL_VERSION));
     System.out.println("not compiled");
     System.out.println(GL11.glGetString(GL20.GL_SHADING_LANGUAGE_VERSION));
     System.out.println(GL20.glGetShaderInfoLog(vertex));
     return;
}
int isLinked = GL20.glGetProgrami(shaderProgram, GL20.GL_LINK_STATUS);
if(isLinked == GL_FALSE)
{
    System.out.println("failed linking");
    return;
}

This is the vertex shader:

 #version 330 core
 layout (location = 0) in vec3 position; // The position variable has attribute position 0

 out vec4 vertexColor; // Specify a color output to the fragment shader

 void main()
 {
     gl_Position = vec4(position, 1.0); // See how we directly give a vec3 to vec4's constructor
     vertexColor = vec4(0.5f, 0.0f, 0.0f, 1.0f); // Set the output variable to a dark-red color
 }

This is the fragment shader:

 #version 330 core
 in vec4 vertexColor; // The input variable from the vertex shader (same name and same type)

 out vec4 color;

 void main()
 {
     color = vertexColor;
 } 

Neither of the shaders compiles, not the vertex shader nor the fragment shader. What needs to be fixed? I am using the core profile of OpenGL version 3.3. The operating system is Windows Vista Home Premium, 64-bit.


Answer:

int iscompiled = GL20.glGetProgrami(fragment, GL20.GL_COMPILE_STATUS);

You cannot query a program object for compilation status. That is a property of each shader object separately, and can be queried via glGetShaderiv(). Using GL_COMPILE_STATUS with glGetProgramiv() will just result in a GL_INVALID_ENUM error.
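
Using the names from the posted code, the corrected checks might look like this: glGetShaderi for each shader's compile status, and glGetProgrami only for the program's link status:

    // Query each shader object for its compile status:
    if (GL20.glGetShaderi(vertex, GL20.GL_COMPILE_STATUS) == GL11.GL_FALSE) {
        System.out.println(GL20.glGetShaderInfoLog(vertex));
    }
    if (GL20.glGetShaderi(fragment, GL20.GL_COMPILE_STATUS) == GL11.GL_FALSE) {
        System.out.println(GL20.glGetShaderInfoLog(fragment));
    }
    // Query the program object only for its link status:
    if (GL20.glGetProgrami(shaderProgram, GL20.GL_LINK_STATUS) == GL11.GL_FALSE) {
        System.out.println(GL20.glGetProgramInfoLog(shaderProgram));
    }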

Question:

I want to make a Java app using the CEF3 library. CEF is a library for embedding the Google Chrome browser in any app, and LWJGL is used to write GL code in Java. But before using CEF, the basic question is how to mix C++ and Java:

  1. Java main calls C++ part as DLL

  2. C++ part creates window and set up GL context

  3. In the message loop, C++ call backs Java part again to do some GL work in Java.

Following test code fails with message:

    FATAL ERROR in native method: Thread[main,5,main]: No context is current or a function that is not available in the current context was called. The JVM will abort execution.
        at org.lwjgl.opengl.GL11.glColor3f(Native Method)
        at Main.run(Main.java:18)
        at Main.cppmain(Native Method)
        at Main.main(Main.java:10)

Probably this is because the Java part does not know about the GL context created by the C++ part. My question is: how can I set up the GL context so that both C++ and Java can call GL functions?


Main.java

import org.lwjgl.opengl.GL11;

public class Main implements Runnable {
    static {
        // Load the native library once, when the class is initialized.
        System.loadLibrary("cppgl");
    }

    public static void main(String[] args) {
        Main me = new Main();
        me.cppmain(me);
    }

    private native void cppmain(Runnable callback);

    @Override
    public void run() {
        // callback from cpp
        GL11.glColor3f(1.0f, 0.0f, 1.0f);
    }
}

cppgl.cpp

#pragma comment(lib, "OpenGL32.lib")

#include <windows.h>
#include <tchar.h>
#include <functional>
#include <gl/gl.h>
#include <jni.h>
#pragma comment(lib, "jvm.lib")

LRESULT CALLBACK WndProc(HWND hWnd, UINT mes, WPARAM wParam, LPARAM lParam) {
    if (mes == WM_DESTROY || mes == WM_CLOSE) { PostQuitMessage(0); return 0; }
    return DefWindowProc(hWnd, mes, wParam, lParam);
}

extern "C" JNIEXPORT void JNICALL Java_Main_cppmain
(JNIEnv *env, jobject, jobject callback) {

    LPCTSTR title = L"gl test";

    // prepare JNI call
    jclass cls = env->FindClass("Ljava/lang/Runnable;");
    jmethodID mid = env->GetMethodID(cls, "run", "()V");

    // window class
    HINSTANCE hInstance = ::GetModuleHandle(NULL);
    WNDCLASSEX wcex;
    ZeroMemory(&wcex, sizeof(wcex));
    wcex.cbSize = sizeof(wcex);
    wcex.style = CS_OWNDC | CS_HREDRAW | CS_VREDRAW;
    wcex.lpfnWndProc = WndProc;
    wcex.hInstance = hInstance;
    wcex.hbrBackground = (HBRUSH)(COLOR_WINDOW + 1);
    wcex.lpszClassName = title;
    if (!RegisterClassEx(&wcex)) return;

    // window
    RECT R = { 0, 0, 640, 480 };
    AdjustWindowRect(&R, WS_OVERLAPPEDWINDOW, FALSE);
    HWND hWnd = CreateWindow(title, title, WS_OVERLAPPEDWINDOW,
        CW_USEDEFAULT, 0, R.right - R.left, R.bottom - R.top,
        NULL, NULL, hInstance, NULL);
    if (!hWnd) return;

    // GL context
    PIXELFORMATDESCRIPTOR pfd;
    ZeroMemory(&pfd, sizeof(pfd));
    pfd.nSize = sizeof(pfd);
    pfd.nVersion = 1;
    pfd.dwFlags = PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL | PFD_DOUBLEBUFFER;
    pfd.iPixelType = PFD_TYPE_RGBA;
    pfd.cColorBits = 32;
    pfd.cDepthBits = 24;
    pfd.iLayerType = PFD_MAIN_PLANE;
    HDC dc = GetDC(hWnd);
    int format = ChoosePixelFormat(dc, &pfd);
    SetPixelFormat(dc, format, &pfd);
    HGLRC glRC = wglCreateContext(dc);
    if (!wglMakeCurrent(dc, glRC)) return;

    // message loop
    ShowWindow(hWnd, SW_SHOW);
    MSG msg;
    do {
        if (PeekMessage(&msg, NULL, 0, 0, PM_REMOVE)) {
            TranslateMessage(&msg);
            DispatchMessage(&msg);
        }
        else {
            wglMakeCurrent(dc, glRC);
            glClearColor(0.0f, 0.5f, 1.0f, 1.0f);

            // call Java and change clear color here
            env->CallVoidMethod(callback, mid);

            glClear(GL_COLOR_BUFFER_BIT);
            glRectf(-0.5f, -0.5f, 0.5f, 0.5f);
            glFlush();
            SwapBuffers(dc);
            wglMakeCurrent(NULL, NULL);
        }
    } while (msg.message != WM_QUIT);

    // clean up
    wglMakeCurrent(NULL, NULL);
    wglDeleteContext(glRC);
    ReleaseDC(hWnd, dc);
}


Answer:

You should call this once, at the beginning, before doing any rendering from Java:

    // This line is critical for LWJGL's interoperation with GLFW's
    // OpenGL context, or any context that is managed externally.
    // LWJGL detects the context that is current in the current thread,
    // creates the GLCapabilities instance and makes the OpenGL
    // bindings available for use.
    GL.createCapabilities();
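
In the question's setup, one workable place is inside the Runnable callback, since the C++ loop makes the wgl context current on this thread before invoking it. A one-time guard avoids redoing the detection every frame (the boolean flag is a hypothetical addition, and the class also needs import org.lwjgl.opengl.GL;):

    private boolean capsCreated = false;

    @Override
    public void run() {
        if (!capsCreated) {
            // The C++ side has already made its wgl context current on
            // this thread, so LWJGL can detect it here.
            GL.createCapabilities();
            capsCreated = true;
        }
        GL11.glColor3f(1.0f, 0.0f, 1.0f);
    }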

Question:

If I were to make a videogame with LWJGL and Windows natives, could this be exported and run correctly on a Mac? If not, how can it be made to? I have gone through many websites, hoping to find an answer, but I can't tell if my problem is too specific or the question just has too many words. Thanks in advance!


Answer:

While Java code is platform independent, natives only work on the platform they were compiled for. This is why there are different natives for Windows, Mac and Linux; otherwise you would only need a single set of natives. Natives compiled for Windows are generally not runnable on a Mac.

To save storage, you can put only the native files for the platform you want the program to run on into the natives folder. However, if you want the program to run on any platform out of the box, put the natives for all platforms into the natives folder; LWJGL will automatically choose the correct ones at run time, as in the sketch below.
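
For example, with LWJGL 2 you can point the runtime at such a folder from code through the org.lwjgl.librarypath system property (the folder name "natives" is an assumption; set the property before any LWJGL class loads its natives):

    // Point LWJGL at a folder containing the natives for every platform;
    // it picks the matching .dll/.so/.dylib files for the current OS.
    System.setProperty("org.lwjgl.librarypath",
            new java.io.File("natives").getAbsolutePath());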