Sunday, July 10, 2022

Primer: Graphics Pipeline

Overview
Modern OpenGL supports many different primitive types, such as points (GL_POINTS), triangles (GL_TRIANGLES), and line strips (GL_LINE_STRIP).


A scene consists of one or more 3D objects. The shapes of these 3D objects are typically described using primitives. For example, a 3D wireframe of a rabbit can be represented as hundreds of triangles.
These triangles are in turn defined by their vertices. A vertex is the corner of a triangle where two edges meet, so every triangle is composed of three vertices.

Note that this tutorial focuses only on 3D shapes rendered as triangles.

Details
In OpenGL, the graphics pipeline is responsible for rendering 3D objects. The following gives a brief overview without overburdening you with the complex details. As you become more familiar with the material, you can revisit this section to gain a deeper understanding.

Framebuffer
The output of the graphics pipeline ends up in the framebuffer, a piece of memory on the graphics card that maps to the display. For simplicity, you can think of it as a bitmap covering the entire viewport. Double buffering is used to avoid screen tearing while rendering the scene: after the pipeline writes the first frame of a scene into one framebuffer, that frame is drawn on the screen while the second frame is written into the other framebuffer. The two are then swapped, so the screen displays the second frame while the third frame is being written, and so on.

OpenGL Graphics Pipeline
OpenGL provides a multi-stage graphics pipeline that is partially programmable using a language called GLSL (OpenGL Shading Language). Each of these programmable units is called a shader.


To kick off this chain, the C++ application supplies vertex data.
The vertex data for each vertex maps to the following attributes:

Position
This is a mandatory input. It represents the X, Y, and Z coordinates of the vertex.
Each position is represented as a vector of three floating-point values.

Color
This is an optional input. It represents the RGBA color of the vertex. RGBA stands for Red, Green, Blue, Alpha. Each component is a floating-point value in the range 0 to 1.
Different colors such as fuchsia, violet, and so on are generated by supplying different values to the RGB components.
The alpha component represents transparency. A value of 1 makes the vertex completely opaque, replacing the background pixel, while a value of 0 makes it completely transparent. Intermediate values blend the vertex with the background.
Each color is represented as a vector of four values. In practice, however, only the RGB values are sent; the alpha component is hardcoded in the fragment shader.

Normal
This is an optional input. It represents the normal vector of the vertex. Normals are used in lighting calculations.
Just like a position, a normal has X, Y, and Z components and is represented as a vector of three floating-point values.

Texture
This is an optional input. It represents the 2D texture coordinates of the vertex.
Texture coordinates are expressed as a UV pair and are therefore represented as a vector of two floating-point values.

The vertex data is first fed to the vertex shader.

Vertex Shader
This stage is mandatory. A vertex shader is a program that adds effects to objects in a 3D environment by performing mathematical operations on each vertex's data. Vertex shaders don't change the type of the data; they change its values, so that a vertex emerges with a different color, different texture coordinates, or a different position in 3D space.
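As a concrete sketch, a minimal GLSL vertex shader might look like the following (the attribute locations and the names uMVP and vColor are illustrative, not part of any fixed API):

```glsl
#version 330 core
layout(location = 0) in vec3 aPosition; // per-vertex position
layout(location = 1) in vec3 aColor;    // per-vertex RGB color

uniform mat4 uMVP; // combined model-view-projection matrix (assumed)

out vec3 vColor;   // passed on toward the fragment shader

void main() {
    vColor = aColor;
    // Transform the vertex into clip space.
    gl_Position = uMVP * vec4(aPosition, 1.0);
}
```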

Tessellation Shader
This stage is optional and is not available in OpenGL 3.3; it was introduced in OpenGL 4.0. After the vertex shader has processed each vertex's associated data, the tessellation stage, if activated, continues processing that data. Tessellation uses patches to describe an object's shape, allowing relatively simple collections of patch geometry to be subdivided into a larger number of geometric primitives, producing better-looking models. The tessellation stage can use two shaders (tessellation control and tessellation evaluation) to manipulate the patch data and generate the final shape.

Geometry Shader
This stage is optional but very powerful. It allows additional processing of individual geometric primitives, including creating new ones, before rasterization.

Rasterization
The primitive assembly stage organizes the vertices into their associated geometric primitives in preparation for clipping and rasterization. Clipping removes any parts of primitives that fall outside the viewport.
After clipping, the updated primitives are sent to the rasterizer for fragment generation. Think of a fragment as a candidate pixel: a pixel has a fixed home in the framebuffer, while a fragment can still be rejected and never update its associated pixel location. Fragments are processed in the next two stages: fragment shading and per-fragment operations.

Fragment Shader
While technically optional, this stage is necessary for practical purposes. The fragment shader determines each fragment's final color and, potentially, its depth value. Fragment shaders are very powerful, as they often employ texture mapping to augment the colors produced by the vertex processing stages. A fragment shader may also terminate processing of a fragment if it determines that the fragment shouldn't be drawn; this process is called fragment discard.
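A minimal GLSL fragment shader might look like this, with the alpha component hardcoded rather than supplied per vertex (the names vColor and fragColor are illustrative):

```glsl
#version 330 core
in vec3 vColor;     // interpolated color from the vertex shader

out vec4 fragColor; // final RGBA color written toward the framebuffer

void main() {
    // Alpha is hardcoded to fully opaque here.
    fragColor = vec4(vColor, 1.0);
}
```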

Pixel Operations
A fragment's visibility is determined using depth testing (also commonly known as Z-buffering) and stencil testing. If a fragment successfully makes it through all of the enabled tests, it may be written directly to the framebuffer, updating the color (and possibly the depth value) of its pixel. If blending is enabled, the fragment's color is instead combined with the pixel's current color to generate a new color, which is written into the framebuffer.

In the next post, we will discuss the nitty-gritty of rendering a cube.
