I've been wondering if it's possible to create a pixel depth offset effect in Blender, like the one that works in UE5. Blender can do parallax occlusion; I just want to learn how pixel depth offset would work in Blender.
Went back to this one to try to resolve a texture issue, but it ended up being simpler than that: every attribute offset needs to increase by 3.

    VAO.LinkAttrib((void*)0)
    VAO.LinkAttrib((void*)3)
    VAO.LinkAttrib((void*)6)
    VAO.LinkAttrib((void*)9)

I thought it would go up by 2, since the texcoords are only 2 values, not 3, but when I changed it to 9 instead of 8 it worked perfectly again.
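Likely why this works, assuming an interleaved layout of three 3-float attributes followed by 2 texcoord floats: an attribute's offset is the total size of everything stored before it, so the texcoords' 2 components only affect the stride (and any attribute after them), not their own offset. A sketch with raw glVertexAttribPointer calls:

    // hypothetical layout: position(3) + color(3) + normal(3) + texcoord(2) = 11 floats per vertex
    GLsizei stride = 11 * sizeof(float);
    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, stride, (void*)(0 * sizeof(float)));  // position at float 0
    glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, stride, (void*)(3 * sizeof(float)));  // color at float 3
    glVertexAttribPointer(2, 3, GL_FLOAT, GL_FALSE, stride, (void*)(6 * sizeof(float)));  // normal at float 6
    glVertexAttribPointer(3, 2, GL_FLOAT, GL_FALSE, stride, (void*)(9 * sizeof(float)));  // texcoord at float 9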
Shouldn't the normal also be multiplied by the model matrix in the (default) vertex shader? Otherwise, if we rotate the object, the normals won't be rotated, so I guess wrong diffuse and specular values will be calculated?
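A GLSL sketch of the usual fix, assuming model/aNormal/Normal naming like the tutorial's; the inverse-transpose (the "normal matrix") also handles non-uniform scaling, where multiplying by the plain model matrix would skew the normals:

    // vertex shader: rotate normals along with the object
    Normal = mat3(transpose(inverse(model))) * aNormal;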
So I suppose that when you draw to the depth buffer and there's no fragment to draw at a location, the Z value for that location in the depth buffer still changes to some sort of max depth value? Because otherwise it would be the clear color? Is this correct?
The depth buffer doesn't hold color values; the clear color only affects the color buffer. If a fragment is discarded, no buffers are written to; there is no default output. glClear(GL_DEPTH_BUFFER_BIT) basically resets each location in the depth buffer to the maximum depth.
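For reference, that reset value is configurable and defaults to 1.0, the far plane:

    glClearDepth(1.0);                                   // the default; any value in [0, 1] works
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);  // color -> clear color, depth -> 1.0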
@@Kobold666 Thanks! That makes sense. Maybe you could help me understand one other thing too. At 4:40 he divides by the W element in the frag shader. Wouldn't it be possible to do that in the vertex shader instead? When I tried pre-dividing gl_Position by its W and then setting W to 1.0, I got the same results on screen. I'm not sure, though, whether that would work when it's not gl_Position, like doing this to the fragPosLight variable in this vid. OpenGL throws me off all the time with its obscure automatic operations that are hard to understand because the data can't be printed.
@@undeadpresident You're right. Because it's a directional light, its vector is the same for each fragment, so you can do the calculation in the vertex shader. But... if you're planning to extend your lighting model with additional (non-directional) lights, normal maps, bump maps or whatever, you already have the per-pixel code in place. Don't optimize too early. Anyway, your assumption is correct. The main problem with OpenGL is its states. You bind some object and then call some functions to manipulate that object. The API would be much clearer if every function took the object as a parameter, instead of having to make some object the "active" one that all calls relate to.
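A GLSL sketch of the per-fragment divide under discussion, assuming the video's fragPosLight plus a shadowMap sampler. Moving the divide to the vertex shader is safe when W is 1 for every vertex (an orthographic light projection), since interpolating before or after the divide then gives the same result:

    // fragment shader
    vec3 projCoords = fragPosLight.xyz / fragPosLight.w;  // clip space -> NDC, range [-1, 1]
    projCoords = projCoords * 0.5 + 0.5;                  // -> [0, 1] for texture sampling
    float closestDepth = texture(shadowMap, projCoords.xy).r;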
@@Kobold666 Yeah, I was having a lot of trouble with the state machine business too. I changed all my code to the DSA functions as soon as I learned about them, and it helped a lot. I still have to bind some things, but it made everything clearer. I am bewildered that people making more recent tutorials don't use the DSA functions all the time; it should be the standard way. I've had all sorts of other issues with OpenGL too, though. Currently I'm trying to learn to render shadows, and the whole scene appears to be in shadow, and I don't know what the problem is, since GLSL doesn't let you print data and it's not clear exactly what calculations it's doing in the shader automatically. There are so many things that might be wrong, and so few ways to check, that it makes me want to break things. I don't have too much trouble making complex programs and functions in ordinary programming, but when it comes to OpenGL there's always some curve ball that fucks me up badly, costing me a lot of time and frustration.
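For anyone who hasn't seen DSA (direct state access, core since GL 4.5): objects are created already initialized and edited by name, so no bind-to-edit calls are needed. A minimal sketch:

    float vertices[] = { 0.0f, 0.5f, 0.0f,  -0.5f, -0.5f, 0.0f,  0.5f, -0.5f, 0.0f };
    GLuint vbo, vao;
    glCreateBuffers(1, &vbo);                                     // created ready to use, no bind
    glNamedBufferData(vbo, sizeof(vertices), vertices, GL_STATIC_DRAW);
    glCreateVertexArrays(1, &vao);
    glVertexArrayVertexBuffer(vao, 0, vbo, 0, 3 * sizeof(float)); // attach vbo to binding slot 0
    glEnableVertexArrayAttrib(vao, 0);
    glVertexArrayAttribFormat(vao, 0, 3, GL_FLOAT, GL_FALSE, 0);  // layout of attribute 0
    glVertexArrayAttribBinding(vao, 0, 0);                        // attribute 0 reads binding slot 0
    // drawing still needs one bind: glBindVertexArray(vao);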
Yesterday I had a problem with the triangles being all black, so I went to sleep and decided I'd implement the error checking. After implementing it, I was looking for ways to fix the fragment shader, since its compilation was acting up. Instead, I decided to inspect default.frag and found out that the problem was, *drumroll*, that I had out vec4 FragColor; twice in the file (for some reason). Anyway, very good video.
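A sketch of the kind of compile check that catches this, assuming a fragmentSource string holding the shader code; the info log would have named the duplicate FragColor declaration directly:

    GLuint shader = glCreateShader(GL_FRAGMENT_SHADER);
    glShaderSource(shader, 1, &fragmentSource, NULL);
    glCompileShader(shader);
    GLint ok;
    glGetShaderiv(shader, GL_COMPILE_STATUS, &ok);
    if (!ok)
    {
        char log[1024];
        glGetShaderInfoLog(shader, sizeof(log), NULL, log);
        printf("fragment shader error: %s\n", log);  // GLSL can't print, but the driver can
    }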
I got everything working, but I noticed that if I put the light close to the center of one of the faces, the center of that face will be brighter than the rest of the points. This makes sense and is very realistic, but how is it happening? The calculation for the light intensity depends on crntPos, but crntPos doesn't seem to represent the would-be world position of the fragment being processed; we only ever set it in the vertex shader, for the vertices of the face. Shouldn't the light be interpolated between the different vertices as a result?
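What resolves this: every out variable of the vertex shader is interpolated across the triangle by the rasterizer before the fragment shader runs, so in the fragment shader crntPos really is the per-fragment world position, not one of the vertex values. A GLSL sketch, assuming the video's names:

    // vertex shader: crntPos is written once per vertex
    out vec3 crntPos;
    crntPos = vec3(model * vec4(aPos, 1.0));

    // fragment shader: crntPos arrives already interpolated, once per fragment
    in vec3 crntPos;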
*IMPORTANT! Be sure to call glfwSwapInterval(1);, because if you don't, your application will consume a lot of GPU resources, up to 90% in Task Manager.*
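A note on placement: glfwSwapInterval needs a current OpenGL context, so call it right after making one current:

    glfwMakeContextCurrent(window);
    glfwSwapInterval(1);  // vsync: wait for one screen refresh per buffer swap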
Dude, you don't know how to make a "tutorial". Your pace should at least let people follow the video. BRO, I can't even see what's happening on the screen. SLOW DOWN!!!
Thank you so much. I've been trying to add GLFW to my game engine for ages now. I've made OpenGL projects before and I've never struggled this much. I don't know why that happened, but I went from Discord servers to Stack Overflow, and I was so desperate I even went to ChatGPT. Thank you, I wish I'd found this tutorial sooner.
Btw, for people like me who are looking for a Linux/WSL solution for this: you just need to put the imgui folder from this guy's repo (he has already pasted the required files from the original imgui and imgui-sfml) into your root and add an include path for it inside your makefile.
Hi, first of all, this is awesome. Thank you so much for this series. A C++-related question: can each of the Delete functions for the VBO, VAO, and EBO classes be replaced with a destructor? Is there a good reason to have those Delete functions, or is it just preference?
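A sketch of the destructor version, assuming the tutorial's EBO class. It works, but it hints at why explicit Delete functions are common: with a destructor you must also forbid copying, or a temporary copy going out of scope frees the GL buffer the original still uses, and the destructor may also run after the GL context is already destroyed:

    class EBO
    {
    public:
        GLuint ID;
        EBO(GLuint* indices, GLsizeiptr size);
        ~EBO() { glDeleteBuffers(1, &ID); }  // RAII replacement for Delete()
        EBO(const EBO&) = delete;            // prevent double deletion via copies
        EBO& operator=(const EBO&) = delete;
    };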
I think he did it in the EBO constructor, in EBO.cpp:

    EBO::EBO(GLuint* indices, GLsizeiptr size)
    {
        glGenBuffers(1, &ID);
        glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ID);
        glBufferData(GL_ELEMENT_ARRAY_BUFFER, size, indices, GL_DYNAMIC_DRAW);
    }
Translation, rotation, and scale are all abstracted into the Mesh class, while the Model class takes only 2 parameters in its Draw() function, so it's not possible to transform it from the main.cpp file 😪
If I call glfwSwapBuffers(window); 0 times, the screen is white. If I call it 1 time, it's black. Without calling glClear() or glClearColor(), I expected it to turn back white if I called it once more, because it's supposed to alternate between the front and back buffers, and by default the front buffer was white. But for some reason, if I call glfwSwapBuffers(window); 2 times, it's still black.
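Part of the answer: the contents of the back buffer after a swap are undefined (the driver may swap, copy, or hand back stale memory), so you can't rely on the old front buffer coming back. That's why the canonical loop clears every frame. A sketch:

    while (!glfwWindowShouldClose(window))
    {
        glClearColor(0.07f, 0.13f, 0.17f, 1.0f);  // any clear color
        glClear(GL_COLOR_BUFFER_BIT);
        // ... draw calls ...
        glfwSwapBuffers(window);
        glfwPollEvents();
    }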
If someone has problems with loading the texture like me: in the line unsigned char* bytes = stbi_load(image, &widthImg, &heightImg, &numColCh, 0); change the last value to STBI_rgb_alpha. It worked for me.
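Likely why that works, assuming the upload call uses GL_RGBA: passing STBI_rgb_alpha forces stb_image to return 4 channels regardless of what the file contains, so the data always matches the format glTexImage2D expects:

    unsigned char* bytes = stbi_load(image, &widthImg, &heightImg, &numColCh, STBI_rgb_alpha);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, widthImg, heightImg, 0, GL_RGBA, GL_UNSIGNED_BYTE, bytes);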
Nice 👍 I was looking for something like this. May I ask if you have any idea how to remove the main window of imgui? I only found a way to do it in DX9. Thanks.
Again, "(float)(width / height)" should be replaced with "width / float(height)" and don't forget to add "#define GLM_ENABLE_EXPERIMENTAL" at the top line of the header file "glm/gtx/transform.hpp"
I am getting an unhandled-exception error when compiling the vertex shader. I tried copying the exact same code, but I'm still getting the same error. Any solutions?
A while back in math class I was doing simultaneous equations and got that 27 equals 9 (I don't know what was in my mind). This is how I feel after watching this video. I have tried reading the LearnOpenGL website, and I think I'm understanding a small portion of what is happening.