Hot questions for Using Lightweight Java Game Library in textures

Question:

I'm using LWJGL and trying to draw a texture; its render code is the following:

public static void main(String[] args) {
    GLFWErrorCallback.createPrint(System.err).set();
    if (!GLFW.glfwInit()) {
        throw new IllegalStateException("Unable to initialize GLFW");
    }
    GLFW.glfwWindowHint(GLFW.GLFW_VISIBLE, GLFW.GLFW_FALSE);
    GLFW.glfwWindowHint(GLFW.GLFW_RESIZABLE, GLFW.GLFW_TRUE);
    window = GLFW.glfwCreateWindow(1280, 720, "Test", 0, 0);
    GLFW.glfwMakeContextCurrent(window);
    GL.createCapabilities();
    GL11.glMatrixMode(GL11.GL_PROJECTION);
    GL11.glLoadIdentity();
    GL11.glOrtho(0, 1280, 0, 720, 1, -1);
    GL11.glMatrixMode(GL11.GL_MODELVIEW);
    GL11.glViewport(0, 0, 1920, 1200);
    GL11.glClearColor(1.0F, 1.0F, 1.0F, 1.0F);
    int x = 0, y = 0;
    ByteBuffer imageBuffer = readFile(filename);
    IntBuffer w = BufferUtils.createIntBuffer(1);
    IntBuffer h = BufferUtils.createIntBuffer(1);
    IntBuffer comp = BufferUtils.createIntBuffer(1);
    ByteBuffer image = STBImage.stbi_load_from_memory(imageBuffer, w, h, comp, 0);  
    int textureId = GL11.glGenTextures();
    int glTarget = GL11.GL_TEXTURE_2D;
    GL11.glBindTexture(glTarget, textureId);
    glTexParameteri(glTarget, GL11.GL_TEXTURE_WRAP_S, GL12.GL_CLAMP_TO_EDGE);
    glTexParameteri(glTarget, GL11.GL_TEXTURE_WRAP_T, GL12.GL_CLAMP_TO_EDGE);
    glTexParameteri(glTarget, GL11.GL_TEXTURE_MIN_FILTER, GL11.GL_NEAREST);
    glTexParameteri(glTarget, GL11.GL_TEXTURE_MAG_FILTER, GL11.GL_NEAREST);
    int width = w.get(0);
    int height = h.get(0);
    /** Send texel data to OpenGL if texture is 2d texture */
    if (glTarget == GL11.GL_TEXTURE_2D) {
        if(comp.get(0) == 3){
            GL11.glTexImage2D(glTarget, 0, GL11.GL_RGB, width, height, 0, GL11.GL_RGB, GL11.GL_UNSIGNED_BYTE, image);
        }
        else{
            GL11.glTexImage2D(glTarget, 0, GL11.GL_RGBA, width, height, 0, GL11.GL_RGBA, GL11.GL_UNSIGNED_BYTE, image);
            GL11.glEnable(GL11.GL_BLEND);
            GL11.glBlendFunc(GL11.GL_SRC_ALPHA, GL11.GL_ONE_MINUS_SRC_ALPHA);
        }
    }
    while (Sgl.window.isAlive()) {
        GLFW.glfwPollEvents();
        GL11.glClear(GL11.GL_COLOR_BUFFER_BIT | GL11.GL_DEPTH_BUFFER_BIT);
        /* Texture display part */
        bind();
        GL11.glEnable(glTarget);
        GL11.glBegin(GL11.GL_QUADS);
        GL11.glTexCoord2f(0,0);
        GL11.glVertex2f(x,y);
        GL11.glTexCoord2f(1,0);
        GL11.glVertex2f(x+width,y);
        GL11.glTexCoord2f(1,1);
        GL11.glVertex2f(x+width,y+height);
        GL11.glTexCoord2f(0,1);
        GL11.glVertex2f(x,y+height);
        GL11.glEnd();
        GL11.glDisable(glTarget);
        /*End texture display part*/
        GLFW.glfwSwapBuffers(window);
    }
}

The problem is that the window is 1280x720 and the image only 392x69, but it is displayed like this:

So, it's upside down, much bigger than expected and at the wrong position.

What am I doing wrong?

Edit: I removed some if clauses due to the size of the code.


Answer:

Going through your issues one by one

1. So, it's upside down,

OpenGL's texture coordinates (not just GL's; this is generally true for all common rendering APIs) are defined such that the origin (0,0) refers to the very first pixel you specify when uploading the data. Your image is most likely defined and loaded with the usual left-to-right, top-to-bottom convention in mind, so the vertex to which you assign the (0,0) texcoords will show the upper left corner of your image.

Now, GL's window space is defined (by default, at least) with mathematical conventions: the origin is at the bottom left. And you're setting up a projection matrix:

GL11.glOrtho(0, 1280, 0, 720, 1, -1);

This will map x=0 to x_ndc=-1 and x=1280 to x_ndc=1, and y=0 to y_ndc=-1 and y=720 to y_ndc=1. (It will map the z coordinate just flipped relative to the range [1,-1] you specified, so z=-1 to z_ndc=-1 and z=1 to z_ndc=1, but that is irrelevant here.)

NDC are the normalized device coordinates, where (-1,-1) is the bottom left corner of your viewport, and (1,1) the top-right corner, respectively.

So when you now do:

  GL11.glTexCoord2f(0,0);
  GL11.glVertex2f(x,y); // NOTE: x and y are just 0 here

the above transformations will be applied, and the vertex with the top-left texel will end up at the bottom-left corner of your viewport.
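
One quick way to compensate in this legacy-pipeline code (just a sketch based on the quad from the question; flipping the pixel rows at load time works equally well) is to flip the t texture coordinate, so the bottom vertices sample the bottom of the image:

    GL11.glBegin(GL11.GL_QUADS);
    GL11.glTexCoord2f(0, 1); GL11.glVertex2f(x, y);                   // bottom-left vertex, bottom-left of the image
    GL11.glTexCoord2f(1, 1); GL11.glVertex2f(x + width, y);
    GL11.glTexCoord2f(1, 0); GL11.glVertex2f(x + width, y + height);  // top-right vertex, top-right of the image
    GL11.glTexCoord2f(0, 0); GL11.glVertex2f(x, y + height);
    GL11.glEnd();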

2. much bigger than expected

You set up your viewport transformation as follows:

GL11.glViewport(0, 0, 1920, 1200);

Now, this just defines another transformation, this time from NDC to window space: x_ndc=-1 is mapped to x_win=0, x_ndc=1 to x_win=1920, and similarly for y (y_ndc=-1 to y_win=0, y_ndc=1 to y_win=1200).

So the input coordinate (0,0) is mapped to (-1,-1) in NDC, and further to (0,0) in window space, which is still the bottom-left corner. (392,69) is mapped to roughly (-0.39,-0.81) in NDC, and to (588,115) in window space, so the quad covers 588x115 window pixels, which is way bigger than your image actually is.

It should still fit inside your window, but from the screenshot, it looks like it doesn't. There may be several explanations:

  • your code isn't exactly reproducing the issue, and the

    I removed some if clauses due to the size of the code.

    might or might not have anything to do with that.

  • you are using some "high DPI scaling" (as Microsoft calls it) or a similar feature of your operating system. The size of the window you specify in GLFW is not in pixels, but in some system- and platform-specific unit. GLFW provides means to query the actual pixel sizes, which you should use to set an appropriate viewport for the window (see the sketch after this list).
  • ...
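
For the high-DPI point, a minimal sketch of sizing the viewport from the actual framebuffer size (window here is the handle created by glfwCreateWindow in the question):

    // The framebuffer size is in pixels and may differ from the 1280x720 window size under DPI scaling
    IntBuffer fbWidth = BufferUtils.createIntBuffer(1);
    IntBuffer fbHeight = BufferUtils.createIntBuffer(1);
    GLFW.glfwGetFramebufferSize(window, fbWidth, fbHeight);
    GL11.glViewport(0, 0, fbWidth.get(0), fbHeight.get(0));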

3. and at the wrong position

That's also a result of your mismatch of OpenGL's coordinate conventions.

4. What am I doing wrong?

The code you have written uses deprecated OpenGL. You are relying on the fixed-function pipeline, which was deprecated a decade ago by now; those functions have been completely removed from modern core profiles of OpenGL. Even before that, immediate-mode rendering via glBegin()/glEnd() had essentially been superseded by vertex arrays some 20 years ago. If you are learning OpenGL right now, you should really try to avoid learning that old stuff and start with a clean core-profile OpenGL context.
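
For reference, a minimal sketch of requesting such a core-profile context with GLFW before creating the window (version 3.3 is just an example):

    GLFW.glfwWindowHint(GLFW.GLFW_CONTEXT_VERSION_MAJOR, 3);
    GLFW.glfwWindowHint(GLFW.GLFW_CONTEXT_VERSION_MINOR, 3);
    GLFW.glfwWindowHint(GLFW.GLFW_OPENGL_PROFILE, GLFW.GLFW_OPENGL_CORE_PROFILE);
    GLFW.glfwWindowHint(GLFW.GLFW_OPENGL_FORWARD_COMPAT, GLFW.GLFW_TRUE); // required on macOS
    long window = GLFW.glfwCreateWindow(1280, 720, "Test", 0, 0);
    // In a core profile, glBegin/glEnd, glMatrixMode, glOrtho, etc. are gone;
    // geometry goes into VBOs/VAOs and is drawn with your own shaders.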

Question:

I'm currently working on a game with Java & LWJGL, and in my main menu I try to draw a background image. For some reason, whatever texture loading and drawing technique I use, the image gets screwed up completely.

This is what happens:

And this is what it's supposed to look like:

This is my code for loading the texture:

  private int loadTexture(String imgName) {
    try {
        BufferedImage img = ImageIO.read(JarStreamLoader.load(imgName));
        ByteBuffer buffer = BufferUtils.createByteBuffer(img.getWidth() * img.getHeight() * 3);
        for (int x = 0; x < img.getWidth(); x++) {
            for (int y = 0; y < img.getHeight(); y++) {
                Color color = new Color(img.getRGB(x, y));
                buffer.put((byte) color.getRed());
                buffer.put((byte) color.getGreen());
                buffer.put((byte) color.getBlue());
            }
        }
        buffer.flip();
        int textureId = glGenTextures();
        glBindTexture(GL_TEXTURE_2D, textureId);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, img.getWidth(), img.getHeight(), 0, GL_RGB, GL_UNSIGNED_BYTE, buffer);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        return textureId;
    } catch (Exception e) {
        throw new RuntimeException(e);
    }
}

And that's my rendering code:

public static void drawRect(int x, int y, int width, int height, Color color) {
    glColor4f(color.getRed() / 255, color.getGreen() / 255, color.getBlue() / 255, 1.0F);
    glBegin(GL_QUADS);

    glTexCoord2f(0.0f, 0.0f);
    glVertex2d(x, y);

    glTexCoord2f(0.0f, 1.0F);
    glVertex2d(x, y + height);

    glTexCoord2f(1.0F, 1.0F);
    glVertex2d(x + width, y + height);

    glTexCoord2f(1.0F, 0.0f);
    glVertex2d(x + width, y);

    glEnd();
}

Any Ideas?


Answer:

You're adding the pixels in the wrong order. You need to do it in this order:

for (int y = 0; y < img.getHeight(); y++)
    for (int x = 0; x < img.getWidth(); x++)

Note that OpenGL's origin is at the bottom left corner, so you might have to flip the image on the y-axis as well:

Color color = new Color(img.getRGB(x, img.getHeight() - y - 1));
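
Putting both fixes together, the fill loop from loadTexture() might look roughly like this (a sketch; it keeps the 3-byte RGB layout from the question):

    for (int y = 0; y < img.getHeight(); y++) {
        for (int x = 0; x < img.getWidth(); x++) {
            // walk rows bottom-up so the first row written matches OpenGL's bottom-left origin
            Color color = new Color(img.getRGB(x, img.getHeight() - y - 1));
            buffer.put((byte) color.getRed());
            buffer.put((byte) color.getGreen());
            buffer.put((byte) color.getBlue());
        }
    }
    buffer.flip();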

Question:

I am making a game engine, and in it I have a class which loads OBJ models. The class itself works perfectly; however, the issue I am getting is that when I render any model with textures I always get the error (1282) Invalid Operation. I have tried different things in the code, and I have found out that it is specifically the texture() call in the fragment shader that causes this issue. I have a custom class to move textures into texture units based on which units are open; here is that class:

public class GLTextureHandler{
    private static ConcurrentHashMap<Integer,Integer> texRef=new ConcurrentHashMap<Integer,Integer>();
    public static final int texUnits=GL11.glGetInteger(GL20.GL_MAX_COMBINED_TEXTURE_IMAGE_UNITS);
    private static Integer[] inUse=new Integer[texUnits];
    static{
        for(int i=0;i<inUse.length;i++){
            inUse[i]=0;
        }
        inUse[0]=1;
    }
    public static void registerTex(int tex){
        texRef.put(tex,-1);
    }
    public static int useTex(int tex){
        if(!texRef.containsKey(tex))
            registerTex(tex);
        int slot=texRef.get(tex);
        if(slot!=-1)
            return slot;
        int cnt=0;
        for(int u:inUse){
            System.out.println("Checking CNT ("+cnt+"), u is "+u);
            if(u==0){
                glActiveTexture(GL_TEXTURE0+cnt);
                glBindTexture(GL_TEXTURE_2D,tex);
                inUse[u]=1;
                texRef.put(tex,cnt);
                System.out.println("putting in slot "+cnt);
                return cnt;
            }
            cnt++;
        }
        glActiveTexture(GL_TEXTURE0+texUnits-1);
        glBindTexture(GL_TEXTURE_2D,tex);
        inUse[texUnits-1]=1;
        texRef.put(tex,texUnits-1);
        return texUnits-1;
    }
    public static void openSlot(int tex){
        if(!texRef.containsKey(tex))
            return;
        int slot=texRef.get(tex);
        if(slot!=-1)
            inUse[slot]=0;
    }
    public static boolean hasTex(int tex){
        return texRef.containsKey(tex);
    }
}

The class puts a texture into a slot when useTex() is called and returns which slot it was put in. I call this inside my DetailedVAO class, which simply renders the VAO after updating the uniforms for the materials (the modelview matrix is handled inside the model class). It also tells the shader which texture unit the texture is in, and as far as I know it correctly binds the texture. The DetailedVAO class is this:

class DetailedVAO{
    private Material mtl;
    private int vao,ksloc,kaloc,kdloc,texLoc,shinyLoc;
    private int texRef;

    public DetailedVAO(int vao,Material mtl,int ksloc,int kaloc,int kdloc,int texLoc,int shinyLoc){
        this.vao=vao;
        this.mtl=mtl;
        this.kaloc=kaloc;
        this.kdloc=kdloc;
        this.ksloc=ksloc;
        this.texLoc=texLoc;this.shinyLoc=shinyLoc;
        texRef=(mtl.tex()==null?-1:mtl.tex().getTextureID());
        GLTextureHandler.registerTex(texRef);
    }
    public void render(){
        Vec3 Ks=(mtl.getKs()==null?new Vec3(1):mtl.getKs());
        Vec3 Ka=(mtl.getKa()==null?new Vec3(.5f):mtl.getKa());
        Vec3 Kd=(mtl.getKd()==null?new Vec3(1):mtl.getKd());

        GL20.glUniform3f(ksloc,Ks.x,Ks.y,Ks.z);
        GL20.glUniform3f(kaloc,Ka.x,Ka.y,Ka.z);
        GL20.glUniform3f(kdloc,Kd.x,Kd.y,Kd.z);
        GL20.glUniform1f(shinyLoc,mtl.getShiny());

        int aSlot=GLTextureHandler.useTex(texRef);
        GL20.glUniform1f(texLoc,aSlot);

        glBindVertexArray(vao);
        glDrawArrays(GL_TRIANGLES, 0, Model.fvaoSize/4);
    }
}

The Vertex Shader:

#version 330

in vec4 position;
in vec3 normals;
in vec2 texCoords;

uniform mat4 view;
uniform mat4 projection;
uniform mat4 model;
uniform mat4 normal;

uniform vec3 u_lightPosition;
uniform vec3 u_cameraPosition;

uniform vec3 Ks;
uniform vec3 Ka;
uniform vec3 Kd;

out vec3 o_normal;
out vec3 o_toLight;
out vec3 o_toCamera;
out vec2 o_texcoords;

void main()
{
    vec4 worldPosition=model*position;

    o_normal = normalize(mat3(normal) * normals);

   // direction to light
   o_toLight = normalize(u_lightPosition - worldPosition.xyz);

   // direction to camera
   o_toCamera = normalize(u_cameraPosition - worldPosition.xyz);

   // texture coordinates to fragment shader
   o_texcoords = texCoords;

    gl_Position=projection*view*worldPosition;
}

The fragment shader works if I just use the Blinn-Phong values:

#version 330
out vec4 outputColor;

uniform vec4 color;

uniform mat4 view;
uniform mat4 projection;
uniform mat4 model;

uniform vec3 u_lightAmbientIntensitys; // = vec3(0.6, 0.3, 0);
uniform vec3 u_lightDiffuseIntensitys; // = vec3(1, 0.5, 0);
uniform vec3 u_lightSpecularIntensitys; // = vec3(0, 1, 0);

// parameters of the material and possible values
uniform vec3 u_matAmbientReflectances; // = vec3(1, 1, 1);
uniform vec3 u_matDiffuseReflectances; // = vec3(1, 1, 1);
uniform vec3 u_matSpecularReflectances; // = vec3(1, 1, 1);
uniform float u_matShininess;

uniform sampler2D u_diffuseTexture;

uniform vec3 Ks;
uniform vec3 Ka;
uniform vec3 Kd;

in vec3 o_normal;
in vec3 o_toLight;
in vec3 o_toCamera;
in vec2 o_texcoords;

vec3 ambientLighting()
{
   return Ka * u_lightAmbientIntensitys;
}

// returns intensity of diffuse reflection
vec3 diffuseLighting(in vec3 N, in vec3 L)
{
   // calculation as for Lambertian reflection
   float diffuseTerm = clamp(dot(N, L), 0, 1) ;
   return Kd * u_lightDiffuseIntensitys * diffuseTerm;
}

// returns intensity of specular reflection
vec3 specularLighting(in vec3 N, in vec3 L, in vec3 V)
{
   float specularTerm = 0;

   // calculate specular reflection only if
   // the surface is oriented to the light source
   if(dot(N, L) > 0)
   {
      // half vector
      vec3 H = normalize(L + V);
      specularTerm = pow(dot(N, H), u_matShininess);
   }
   return Ks * u_lightSpecularIntensitys * specularTerm;
}

void main(void)
{
   // normalize vectors after interpolation
   vec3 L = normalize(o_toLight);
  vec3 V = normalize(o_toCamera);
  vec3 N = normalize(o_normal);

   // get Blinn-Phong reflectance components
   vec3 Iamb = ambientLighting();
   vec3 Idif = diffuseLighting(N, L);
   vec3 Ispe = specularLighting(N, L, V);

   // diffuse color of the object from texture
   vec3 diffuseColor = vec3(texture(u_diffuseTexture, o_texcoords.xy));
   // combination of all components and diffuse color of the object
   outputColor.xyz = diffuseColor*(Iamb+Ispe+Idif);
   outputColor.a = 1;
}

Answer:

  • This may be the source of your problems:

    GL20.glUniform1f(texLoc,aSlot);
    

    It should be

    GL20.glUniform1i(texLoc,aSlot); // i not f
    
  • EDIT: I did not read the code closely enough regarding setting glActiveTexture. The comment I made was false.

  • You're not unbinding your VAO:

    glBindVertexArray(vao);
    glDrawArrays(GL_TRIANGLES, 0, Model.fvaoSize/4);
    glBindVertexArray( 0 ); // <- Probably want this
    

Question:

The glTexImage2D function takes 'level' as a parameter, which represents the Level of Detail bias. However, a texture's LOD bias can also be set using glTexParameteri and the GL_TEXTURE_LOD_BIAS parameter. How do these two settings for LOD bias interact? Are they the same, with whichever is set most recently taking effect, or do they have different meanings?


Answer:

The glTexImage2D function takes 'level' as a parameter, which represents the Level of Detail bias.

No, it does not. The level parameter specifies which mipmap level you are allocating an image for. GL_TEXTURE_LOD_BIAS, by contrast, adds an offset to the level-of-detail value computed at sampling time, shifting which of those mipmap levels gets sampled; the two settings are unrelated.
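
To illustrate the difference, a small sketch (baseImage and mipImage are placeholder ByteBuffers holding the pixel data; constants as exposed by LWJGL's GL11/GL14 classes):

    // 'level' selects which mipmap image the data is uploaded into:
    GL11.glTexImage2D(GL11.GL_TEXTURE_2D, 0, GL11.GL_RGBA, 256, 256, 0, GL11.GL_RGBA, GL11.GL_UNSIGNED_BYTE, baseImage); // base level
    GL11.glTexImage2D(GL11.GL_TEXTURE_2D, 1, GL11.GL_RGBA, 128, 128, 0, GL11.GL_RGBA, GL11.GL_UNSIGNED_BYTE, mipImage);  // first mipmap

    // GL_TEXTURE_LOD_BIAS instead shifts which of those levels is picked when sampling:
    GL11.glTexParameterf(GL11.GL_TEXTURE_2D, GL14.GL_TEXTURE_LOD_BIAS, -0.5f); // bias toward sharper levels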

Question:

I'm using LWJGL's port of stb_image to load a JPG image. The problem is, I always get null back for the ByteBuffer because nothing gets loaded. Here's the code:

int[] width = new int[1], height = new int[1], nrChannels = new int[1];

ByteBuffer data = stbi_load("/textures/container.jpg",width, height,nrChannels,0);

if(data == null)
    throw new RuntimeException("Failed to load texture."); //I get this exception.

The location of my texture:

I of course tried it like so:

ByteBuffer data = stbi_load("container.jpg",width, height,nrChannels,0);

Same result, didn't load. What am I doing wrong?


Answer:

The path you give to stbi_load() is not meant to be a classpath resource but a file system path.
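
If the image is actually bundled on the classpath (as /textures/container.jpg suggests), one option is to read the resource into a direct buffer yourself and decode it from memory; a rough sketch, where Main is a placeholder for the calling class and the enclosing method handles IOException:

    import java.io.InputStream;
    import java.nio.ByteBuffer;
    import java.nio.IntBuffer;
    import org.lwjgl.BufferUtils;
    import org.lwjgl.stb.STBImage;

    // inside your texture loading method:
    try (InputStream in = Main.class.getResourceAsStream("/textures/container.jpg")) {
        byte[] bytes = in.readAllBytes();
        ByteBuffer encoded = BufferUtils.createByteBuffer(bytes.length);
        encoded.put(bytes).flip();

        IntBuffer w = BufferUtils.createIntBuffer(1);
        IntBuffer h = BufferUtils.createIntBuffer(1);
        IntBuffer comp = BufferUtils.createIntBuffer(1);
        ByteBuffer data = STBImage.stbi_load_from_memory(encoded, w, h, comp, 0);
        if (data == null) {
            throw new RuntimeException("Failed to load texture: " + STBImage.stbi_failure_reason());
        }
        // ... upload with glTexImage2D, then release with STBImage.stbi_image_free(data)
    }

Alternatively, keep stbi_load() but pass a real file system path (absolute, or relative to the working directory the JVM was started from).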

Question:

I'm currently learning some OpenGL and I want to have 2 different textures (diffuse and specular) applied to a cube. The problem is that only the first texture (of texture unit 0) is accessible; both samplers seem to use it, instead of using two different ones.

Even if I assign both sampler uniforms to 1, the first (diffuse) texture is used. I presume I must have overlooked something.

The relevant part of my init() looks like this:

texture = Texture.loadTextureFromFile("res/texture/stone_07_diffuse.jpg");
textureSpecular = Texture.loadTextureFromFile("res/texture/stone_07_specular.jpg");

shaderProgram = new ShaderProgram();
shaderProgram.attachVertexShader("res/shader/LightTest.vsh");
shaderProgram.attachFragmentShader("res/shader/LightTest.fsh");
shaderProgram.link();

ShaderProgram.bind(shaderProgram);
{
    shaderProgram.setUniform("materialDiffuseTexture", 0);
    shaderProgram.setUniform("materialSpecularTexture", 1);
    ...
}
ShaderProgram.unbind();

Texture.setActiveTextureUnit(0);
Texture.bind(texture);

Texture.setActiveTextureUnit(1);
Texture.bind(textureSpecular);

The relevant part of the render() method looks like the following:

ShaderProgram.bind(shaderProgram);
{
    //set other uniforms as projectionMatrix, viewMatrix, etc.
    ...

    glBindVertexArray(vaoID);
    glDrawElements(GL_TRIANGLES, 36, GL_UNSIGNED_BYTE, 0);
    glBindVertexArray(0);
}
ShaderProgram.unbind();

texture, textureSpecular, shaderProgram and vaoID are global variables. The uniforms declared in the fragment shader:

uniform sampler2D materialDiffuseTexture;
uniform sampler2D materialSpecularTexture;

What am I doing wrong?


Answer:

Oh my god, I feel so stupid right now. The problem was that I was using the setUniform(String, float...) method of my ShaderProgram class, which called glUniform1f(int, float).

But since a sampler2D uniform holds an integer, I have to use glUniform1i(int, int) to assign the texture unit. Now it works!
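
In other words (a sketch, with location standing in for the sampler's uniform location):

    // Wrong: sampler2D uniforms are integers, so this call raises GL_INVALID_OPERATION and the uniform keeps its old value
    GL20.glUniform1f(location, 1.0f);
    // Right: pass the texture unit index as an int
    GL20.glUniform1i(location, 1);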

Question:

Alright, I couldn't find a good name for this, so I will explain in a bit more detail.

I am making a game using LWJGL and I have gotten some basic rendering done, but now I want to do something a bit more advanced.

Here is the situation:

I have a mesh (positions, normals, texture coords, indices) that I generate which can currently support one texture. This would be great if I had a single image containing all of the textures, but sadly that isn't the case; I have an individual image for each texture, and each needs to be loaded individually.

Now, I see one way I could do this, but it doesn't seem practical or like a good use of memory: load all the textures into one image and record where each one sits in that image for use with the texture coords.

The textures should NOT blend together, and hard-coding anything is not an option, as I want modding to be easy to implement; anywhere from 1 texture (best case) to 65,536+ textures (worst case) may be used in the same "mesh".


Answer:

I am simply going to use a texture atlas, as doing anything else seems impractical. Thanks @httpdigest for the suggestion.

Question:

I was searching for an anti-aliasing algorithm for my OpenGL program (so I searched for a good shader). The thing is, all the shaders I found want to do something with textures, but I don't use textures, only colors. I looked at FXAA most of the time, so is there an anti-aliasing algorithm that works just with colors? The game this is for looks blocky like Minecraft, but it uses only colors and cubes of different sizes.

I hope someone can help me.

Greetings


Answer:

Anti-aliasing has nothing specifically to do with either textures or colors.

Proper anti-aliasing is about sample rate, which while highly technical can be thought of as doing extra work to make a better educated guess at some value that cannot be directly looked up (e.g. a pixel that is only partially covered by a triangle).

Multisample Anti-Aliasing (MSAA) will work nicely for you: it anti-aliases only polygon edges and does nothing for texture aliasing on the interior of a polygon. Since you are not using textures, you do not need to worry about aliasing inside a polygon.

Incidentally, FXAA is not proper anti-aliasing. FXAA is basically a shader-based edge detection and blur image processing filter. FXAA will blur any part of the scene with sharp edges, whether it is a polygon edge or an edge due to a mapped texture. It indiscriminately blurs anything it thinks is an aliased edge and gets this wrong often, resulting in blurry textures.


To use MSAA, you need:
  1. A framebuffer with at least 2 samples
  2. Enable multisample rasterization

Satisfying (1) is going to depend on what you used to create your window (in this case LWJGL). Most frameworks let you select the sample count as one of the parameters at the time of creation.

Framebuffer Objects can also be used to do this without messing with your window's parameters, but they are more complicated than need be for this discussion.

(2) is as simple as calling glEnable(GL_MULTISAMPLE).
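
With GLFW, which LWJGL 3 typically uses for window creation, both steps might look roughly like this (the sample count of 4 is just an example):

    GLFW.glfwWindowHint(GLFW.GLFW_SAMPLES, 4);  // (1) request a multisampled default framebuffer
    long window = GLFW.glfwCreateWindow(1280, 720, "MSAA", 0, 0);
    GLFW.glfwMakeContextCurrent(window);
    GL.createCapabilities();
    GL11.glEnable(GL13.GL_MULTISAMPLE);         // (2) enable multisample rasterization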

Question:

I have a black-and-white texture (800*600 pixels), and I want to change the black pixels to black but fairly transparent, and the white pixels to completely transparent.

I've tried the obvious: take the FloatBuffer with the texture data and run a for loop over it, like this for the black pixels:

FloatBuffer data; //The texture data (rgba)
float[] change = new float[]{0, 0, 0, 1}; //Current black color
float[] insert = new float[]{0, 0, 0, 0.5f}; //The new transparent black color

for(int i = 0; i < data.limit(); i+=4){
    if(data.get(i) == change[0] && data.get(i+1) == change[1] && data.get(i+2) == change[2] && data.get(i+3) == change[3]){
        data.put(i, insert[0]);
        data.put(i+1, insert[1]);
        data.put(i+2, insert[2]);
        data.put(i+3, insert[3]);
    }
}

This turned out to be very, very slow. I looked around on the Internet and found this shaders thing. So my question is:

Should I use some sort of shader code, is there some built-in method in OpenGL/LWJGL, or is this something I need to do on the CPU, and in that case, what is the best way?

Sorry for the horrible title and for some spelling problems, but I hope you understand.


Answer:

There are a few ways you can optimize your existing code to increase its speed:

You could greatly speed this up by not calling data.get() four times in your if statement, but instead getting the whole pixel at once and checking it against your black color. This is the biggest bottleneck I see in your code.

Another way to speed it up would be to simply ignore the alpha data, if you can assume that the alpha of all black pixels is set to 1 or something close to it.

A third way to optimize would be to check the RGB data as integers instead of floats.

Finally, I'm not sure how FloatBuffer and data.get()/put() work, but if you're opening and closing the file each time, that's going to be very slow. Read it into memory once, make your changes, and then write the file out only once.

Hope this helps. Good luck!
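
A sketch of the loop with the second suggestion applied, and only writing the channel that actually changes (data is the same FloatBuffer as in the question):

    for (int i = 0; i < data.limit(); i += 4) {
        float r = data.get(i), g = data.get(i + 1), b = data.get(i + 2);
        if (r == 0f && g == 0f && b == 0f) {
            data.put(i + 3, 0.5f); // for black pixels, only the alpha channel needs to change
        }
    }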