Flying Around in 3D
By Kshitij Aucharmal • 4 minute read
Repo Link
Summary
Let me just show you the progress, then we can talk about what I did:
Cool right? Let's walk through what improvements have been made since the dawn of the age of triangles. Last time I managed to get multiple triangles rendering, along with colors for the vertices.
I had to remove the vertex colors, as they lead to very bad texture mapping later on, which means removing the EBO
(Element Buffer Object) entirely as well. It's only really useful in cases where you don't want the object to have textures, or at least where the object can have procedural textures, which I haven't gotten to yet.
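For reference, the change at draw time basically boils down to swapping the indexed draw call for a plain one. This is a rough sketch, not the actual project code; the VAO name and vertex count are placeholders:

```cpp
// Rough sketch of the draw-call change after dropping the EBO (names are placeholders).
// With an EBO, triangles share vertices and are drawn by index:
//     glDrawElements(GL_TRIANGLES, indexCount, GL_UNSIGNED_INT, 0);
// Without one, every triangle's vertices are duplicated in the VBO and drawn directly:
glBindVertexArray(VAO);             // VAO already has the vertex buffer + attributes bound
glDrawArrays(GL_TRIANGLES, 0, 36);  // e.g. 36 vertices = 12 triangles for a cube, no indices
```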
Textures
I can send the texture coordinates along with the vertex data, so that the texture maps the way I want it to. There are several wrapping modes that control how the texture repeats, given as follows:
| OpenGL Option | Description |
|---|---|
| GL_REPEAT | The default behavior for textures. Repeats the texture image. |
| GL_MIRRORED_REPEAT | Same as GL_REPEAT, but mirrors the image with each repeat. |
| GL_CLAMP_TO_EDGE | Clamps the coordinates between 0 and 1. Higher coordinates become clamped to the edge, resulting in a stretched edge pattern. |
| GL_CLAMP_TO_BORDER | Coordinates outside the range are given a user-specified border color. |
I just used GL_MIRRORED_REPEAT mode ’cause the tutorial uses that. OpenGL also generated the mipmaps on its own using the glGenerateMipmap(GL_TEXTURE_2D) function.
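Put together, the texture setup looks roughly like this. It's a minimal sketch assuming stb_image for loading (as the LearnOpenGL tutorial does); the function name, filter choices, and variable names are my placeholders, not the project's actual code:

```cpp
// Minimal texture setup sketch, assuming stb_image for loading (as LearnOpenGL does).
// The function name and parameters here are placeholders, not the project's actual code.
#include <glad/glad.h>
#include <stb_image.h>

unsigned int loadTexture(const char* path) {
    unsigned int texture;
    glGenTextures(1, &texture);
    glBindTexture(GL_TEXTURE_2D, texture);

    // Wrapping: mirror the image on each repeat outside the [0, 1] range
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_MIRRORED_REPEAT);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_MIRRORED_REPEAT);

    // Filtering: sample from the mipmap chain when the texture is minified
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

    int width, height, channels;
    unsigned char* data = stbi_load(path, &width, &height, &channels, 0);
    if (data) {
        GLenum format = (channels == 4) ? GL_RGBA : GL_RGB;
        glTexImage2D(GL_TEXTURE_2D, 0, format, width, height, 0, format, GL_UNSIGNED_BYTE, data);
        glGenerateMipmap(GL_TEXTURE_2D);  // let OpenGL build the whole mipmap chain
    }
    stbi_image_free(data);
    return texture;
}
```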
Coordinate Systems
Anyone who knows the LearnOpenGL website will know I skipped over the Transformations section. Nothing to say about it except that I installed the glm library; I already knew how vectors and matrices work.
This image proves very helpful in understanding the stages of transformation needed to show a 3D scene on a 2D screen:
Each has been explained in ample detail on the website, so you can check out what each step does. For an overview:
| Space | Description |
|---|---|
| Local space | The coordinate space that is local to your object |
| World space | The space in which objects are given a location in the 3D world |
| View space | Camera space; the scene as seen from the camera's POV |
| Clip space | The range of coordinates that will be displayed; anything outside it gets clipped away |
| Screen space | The final 2D coordinates, reached via a perspective or orthographic projection, take your pick |
These can be represented/implemented using transformation matrices, and each transformation is applied separately, in order, to form the 3D scene.
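As a rough illustration of how that looks with glm; the angles, aspect ratio, and near/far values below are illustrative placeholders, not the project's actual settings:

```cpp
// Sketch of the three matrices with glm; all values here are illustrative.
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

glm::mat4 model = glm::rotate(glm::mat4(1.0f), glm::radians(45.0f),
                              glm::vec3(0.0f, 1.0f, 0.0f));        // local -> world
glm::mat4 view = glm::translate(glm::mat4(1.0f),
                                glm::vec3(0.0f, 0.0f, -3.0f));     // world -> view
glm::mat4 projection = glm::perspective(glm::radians(45.0f),       // field of view
                                        800.0f / 600.0f,           // aspect ratio
                                        0.1f, 100.0f);             // near / far planes

// In the vertex shader, the chain is applied right-to-left:
// gl_Position = projection * view * model * vec4(aPos, 1.0);
```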
Footnotes and Next Steps
At the end, I implemented the camera and hooked up some input handling to make it a fly-through style camera. I also tidied up the code; it's not exactly how I want it yet, but I'll refactor it later.
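For the curious, a fly-through camera boils down to something like this minimal sketch. It assumes GLFW for input and glm for the math; the variable names, keybinds, and speed value are placeholders rather than the actual project code:

```cpp
// Minimal fly-through camera sketch; assumes GLFW for input and glm for the math.
// Variable names, keybinds, and the speed value are placeholders.
#include <GLFW/glfw3.h>
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

glm::vec3 cameraPos   = glm::vec3(0.0f, 0.0f,  3.0f);
glm::vec3 cameraFront = glm::vec3(0.0f, 0.0f, -1.0f);
glm::vec3 cameraUp    = glm::vec3(0.0f, 1.0f,  0.0f);

void processInput(GLFWwindow* window, float deltaTime) {
    float speed = 2.5f * deltaTime;  // scale by frame time so movement is framerate-independent
    if (glfwGetKey(window, GLFW_KEY_W) == GLFW_PRESS) cameraPos += speed * cameraFront;
    if (glfwGetKey(window, GLFW_KEY_S) == GLFW_PRESS) cameraPos -= speed * cameraFront;
    if (glfwGetKey(window, GLFW_KEY_A) == GLFW_PRESS)
        cameraPos -= glm::normalize(glm::cross(cameraFront, cameraUp)) * speed;
    if (glfwGetKey(window, GLFW_KEY_D) == GLFW_PRESS)
        cameraPos += glm::normalize(glm::cross(cameraFront, cameraUp)) * speed;
}

// Each frame, the view matrix is rebuilt from the camera state:
// glm::mat4 view = glm::lookAt(cameraPos, cameraPos + cameraFront, cameraUp);
```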
Now, the next steps aren't gonna follow the tutorial, as it goes into detail about lighting and stuff, which isn't totally essential right now. The things I need to implement are as follows:
- Building a modular system that allows dynamic creation of objects through the UI
- Making the editor fullscreen/a bigger window with ImGui windows
- Implementing ImGuizmo and adding gizmos to move/rotate/scale objects
- Implementing pybind11 to allow creation of objects through Python
I wanna try the lighting stuff too, but this comes first. After this is done, I'll try to implement a physics engine, maybe from scratch, maybe using a library. Then, once the engine is pretty modular and scalable, I'll try to implement a ray tracer in it as well.
So stay tuned, guys!!!