If you have not done so, read this full tutorial on how to use SGEXTN to build an application.

SGEXTN Renderer Interface Tutorial

SG - RI allows you to use custom shaders without worrying about all the nonsense that OpenGL, Vulkan, or even QRhi would force you to deal with. If you do not know what these are, that is excellent, since this page is designed for beginners in GPU programming.

This tutorial will guide you through programming the custom renderers that you see when you open a SGEXTN colour picker.

So what is the GPU again?

The GPU is a different chip from your CPU (which runs C++ code) that draws stuff on the screen. It can do other things too, but for the purposes of this tutorial, we will think of it just as a chip that draws stuff on the screen.

The GPU is completely different from the CPU, so C++ does not work there. Instead, we use GLSL to programme it.

GLSL looks really similar to C++, so if you do not know how to use it, you can just write C++ and get DeepSeek or ChatGPT to translate it for you, and you will still be able to understand the result. We will assume that you either know GLSL or are able to translate C++ to GLSL using AI tools.
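To see just how close the two languages are, here is the HSL helper function that appears later in this tutorial's fragment shader. The body below compiles unchanged as plain C++ and as GLSL; only the surrounding boilerplate differs.

```cpp
// This function is valid in both C++ and GLSL as written. It is the
// computePQT helper from the colour picker fragment shader shown later.
float computePQT(float p, float q, float t){
    if(t < 0.0){t += 1.0;}
    else if(t > 1.0){t -= 1.0;}
    if(t < 1.0 / 6.0){return (p + 6.0 * t * (q - p));}
    if(t < 0.5){return q;}
    if(t < 2.0 / 3.0){return (p + 6.0 * (2.0 / 3.0 - t) * (q - p));}
    return p;
}
```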

The GPU draws triangles, a lot of triangles, really quickly. When drawing a triangle, it first looks for the data of the triangle inside an array called a vertex buffer object (VBO). After that, it applies transformations to the vertices of the triangle to move that triangle to where it should be on the screen. The vertex shader is a GLSL programme that tells the GPU how to apply transformations to the triangles.

When it knows where each triangle should be, the GPU calculates which pixels each triangle contains. Then it runs another GLSL programme, called the fragment shader, on each pixel to determine what colour it should have.

We will start by writing the vertex shader. The vertex shader should look like this

#version 310 es
precision highp float;
precision highp int;

layout(std140, binding = 0) uniform SG_RI_builtin_{
    float x;
    float y;
    float width;
    float height;
    float windowWidth;
    float windowHeight;
    int offscreen;
} SG_RI_builtin;

vec4 SG_RI_transform(vec4 prelimPosition){
    prelimPosition = vec4(2.0f * (prelimPosition.x * SG_RI_builtin.width / SG_RI_builtin.windowWidth + SG_RI_builtin.x / SG_RI_builtin.windowWidth) - 1.0f, -2.0f * (prelimPosition.y * SG_RI_builtin.height / SG_RI_builtin.windowHeight + SG_RI_builtin.y / SG_RI_builtin.windowHeight) + 1.0f, prelimPosition.z, prelimPosition.w);
    if(SG_RI_builtin.offscreen != 0){prelimPosition = vec4(prelimPosition.x, -1.0f * prelimPosition.y, prelimPosition.z, prelimPosition.w);}
    return prelimPosition;
}

// declare your input and output variables here

void main(){
    // write the actual code here
    gl_Position = SG_RI_transform(gl_Position);
}

That looks really complicated...

All the code you see above is stuff that allows the rendered output to be moved inside the correct SGWidget, instead of filling the whole screen. It looks annoying, but you just have to copy and paste the same thing for literally every single vertex shader.
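To get a feel for what SG_RI_transform is doing, here is the x part of its arithmetic as plain C++. The widget position and sizes below are hypothetical numbers chosen for illustration.

```cpp
// The x half of SG_RI_transform in C++: map a position in [0, 1] across the
// widget into normalised device coordinates in [-1, 1] across the window.
float transformX(float vx, float widgetX, float widgetWidth, float windowWidth){
    return 2.0f * (vx * widgetWidth / windowWidth + widgetX / windowWidth) - 1.0f;
}
// For a hypothetical widget at x = 100 with width 200 inside an 800 pixel
// wide window, vx = 0 lands at -0.75 and vx = 1 lands at -0.25, so the
// rendered output stays within the widget instead of filling the window.
```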

All the code logic that you will write goes only where the 2 comments are in the code block above. The first comment is where you should declare input and output variables, and the second comment is where you write the actual code.

Input variables are sourced from the vertex buffer object. This must include the coordinates of the vertices of the triangles.

The output variables will be passed to the fragment shader. This can be used so that a pixel knows where it is inside the SGWidget.

Since the renderer that we are building just draws a rectangle with some colours on it, the vertex shader will do nothing except tell the GPU where the triangles should be and send the coordinates to the fragment shader. The triangles will not be transformed.

We can replace the first comment with

layout(location = 0) in vec2 vertex;
layout(location = 0) out vec2 vertexUnits;

This declares vertex to be an input variable corresponding to the coordinates of a vertex of the triangle, and vertexUnits to be an output variable that will get sent to the fragment shader.

The layout(location = 0) before the declaration of vertex means that vertex will be found at location 0 of each vertex in the vertex buffer object.

The layout(location = 0) before the declaration of vertexUnits means that vertexUnits will be found at location 0 of the output of the vertex shader, so the fragment shader should take it from location 0 later.

We replace the second comment with

gl_Position = vec4(vertex.x, vertex.y, 0.0f, 1.0f);
vertexUnits = vertex;

Assigning to gl_Position tells the GPU that the triangle is found at whatever the input position is (so it should not be moved). When using SG - RI, the third and fourth coordinates must always be 0.0f and 1.0f, anything else is undefined behaviour.

Setting vertexUnits to vertex passes the same variable into the fragment shader where it will actually be used.

Next we write the fragment shader to tell the GPU how exactly to colour the pixels.

#version 310 es
precision highp float;
precision highp int;

layout(std140, binding = 0) uniform SG_RI_builtin_{
    float x;
    float y;
    float width;
    float height;
    float windowWidth;
    float windowHeight;
    int offscreen;
} SG_RI_builtin;

layout(std140, binding = 1) uniform selection_{
    float hue;
    float saturation;
    float lightness;
    float transparency;
    int type;
} selection;

layout(location = 0) in vec2 vertexUnits;
layout(location = 0) out vec4 outColour;

float computePQT(float p, float q, float t){
    if(t < 0.0){t += 1.0;}
    else if(t > 1.0){t -= 1.0;}
    if(t < 1.0 / 6.0){return (p + 6.0 * t * (q - p));}
    if(t < 0.5){return q;}
    if(t < 2.0 / 3.0){return (p + 6.0 * (2.0 / 3.0 - t) * (q - p));}
    return p;
}

vec4 getRGB(float xh, float xs, float xl, float xa){
    float r = 0.0;
    float g = 0.0;
    float b = 0.0;
    if(xs == 0.0){
        r = xl;
        g = xl;
        b = xl;
    } else{
        float q = 0.0;
        if(xl < 0.5){q = xl * (1.0 + xs);}
        else{q = xl + xs - xl * xs;}
        float p = 2.0 * xl - q;
        r = computePQT(p, q, xh + 1.0 / 3.0);
        g = computePQT(p, q, xh);
        b = computePQT(p, q, xh - 1.0 / 3.0);
    }
    return vec4(r, g, b, xa);
}

void main(){
    float zeroPointTwo = 0.2f * SG_RI_builtin.height / SG_RI_builtin.width;
    float selectPoint = 0.0f;
    bool addTransparencyGrid = false;
    if(selection.type == 1){selectPoint = selection.hue;}
    else if(selection.type == 2){selectPoint = selection.saturation;}
    else if(selection.type == 3){selectPoint = selection.lightness;}
    else if(selection.type == 4){selectPoint = selection.transparency;}
    if(0.5f * (1.0f - vertexUnits.y) + abs(vertexUnits.x - (zeroPointTwo + selectPoint * (1.0f - 2.0f * zeroPointTwo))) * SG_RI_builtin.width / SG_RI_builtin.height < 0.2f){
        if(selection.type == 1){outColour = getRGB(selection.hue, 1.0f, 0.5f, 1.0f);}
        else if(selection.type == 2){outColour = getRGB(selection.hue, selection.saturation, 0.5f, 1.0f);}
        else if(selection.type == 3){outColour = getRGB(selection.hue, selection.saturation, selection.lightness, 1.0f);}
        else if(selection.type == 4){
            outColour = getRGB(selection.hue, selection.saturation, selection.lightness, selection.transparency);
            addTransparencyGrid = true;
        }
    } else if(vertexUnits.y <= 0.8f && vertexUnits.x > zeroPointTwo && vertexUnits.x < 1.0f - zeroPointTwo){
        float x = (vertexUnits.x - zeroPointTwo) / (1.0f - 2.0f * zeroPointTwo);
        if(selection.type == 1){outColour = getRGB(x, 1.0f, 0.5f, 1.0f);}
        else if(selection.type == 2){outColour = getRGB(selection.hue, x, 0.5f, 1.0f);}
        else if(selection.type == 3){outColour = getRGB(selection.hue, selection.saturation, x, 1.0f);}
        else if(selection.type == 4){
            outColour = getRGB(selection.hue, selection.saturation, selection.lightness, x);
            addTransparencyGrid = true;
        }
    } else{outColour = vec4(1.0f, 1.0f, 1.0f, 1.0f);}
    if(addTransparencyGrid == true){
        float intensity = 0.8f + 0.15f * float((int(5.0f * vertexUnits.x * SG_RI_builtin.width / SG_RI_builtin.height) + int(5.0f * vertexUnits.y)) % 2);
        outColour = vec4(outColour.a * outColour.r + (1.0f - outColour.a) * intensity, outColour.a * outColour.g + (1.0f - outColour.a) * intensity, outColour.a * outColour.b + (1.0f - outColour.a) * intensity, 1.0f);
    }
}

All this code determines how the pixels are actually coloured; that is why it is so long.
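One part worth unpacking is the checkerboard drawn behind transparent colours. Here is the same intensity computation as plain C++; cells alternate between a lighter and darker grey depending on which grid cell the pixel falls in.

```cpp
// The transparency checkerboard intensity from the end of the fragment
// shader: intensity is 0.8 in "even" grid cells and 0.95 in "odd" ones.
// x and y are the pixel's position inside the widget in [0, 1], and
// aspect is width / height, so the cells come out square on screen.
float gridIntensity(float x, float y, float aspect){
    return 0.8f + 0.15f * float((int(5.0f * x * aspect) + int(5.0f * y)) % 2);
}
```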

Pay special attention to this part

layout(location = 0) in vec2 vertexUnits;
layout(location = 0) out vec4 outColour;

We declare an output variable that is a vec4. All fragment shaders should have exactly 1 output variable of type vec4 at location 0; this is the colour that will be drawn to the corresponding pixel on the screen.

Also note how the name, type, and location of vertexUnits matches the output variable vertexUnits from the vertex shader. This allows the vertex shader output to be passed into the fragment shader correctly.

These are uniform buffer objects.

layout(std140, binding = 0) uniform SG_RI_builtin_{
    float x;
    float y;
    float width;
    float height;
    float windowWidth;
    float windowHeight;
    int offscreen;
} SG_RI_builtin;

layout(std140, binding = 1) uniform selection_{
    float hue;
    float saturation;
    float lightness;
    float transparency;
    int type;
} selection;

We have a uniform buffer object at binding point 0 called SG_RI_builtin and another one at binding point 1 called selection.

The binding point indicates the location of the uniform buffer object.

The SG_RI_builtin uniform buffer object must always be present and declared in exactly that form. selection is our custom uniform buffer object which we can use to pass data to the shader.

Now that we have our shaders ready, we can add them to our project. For more information about this step, you can see this link, but essentially, they just need to be in the shaders folder.

We can place the vertex shader at path shaders/colourpicker.vert and the fragment shader at path shaders/colourpicker.frag.

After doing this, the shaders will be automatically compiled by QSB, and you can access the compiled shaders at :/SGEXTN/colourpicker.vert.qsb and :/SGEXTN/colourpicker.frag.qsb respectively. This assumes that you are using "SGEXTN" as the resource system prefix. Again, see this link for more information.

Finally we can work on the C++ side.

We will need a renderer that uses GPU commands, and a syncer that sends information to the renderer. The renderer runs on a separate thread and it is not allowed to directly use information outside its thread, since that will cause a data race.

Our header file should look like this

class SGRColourPickerRenderer : public SGRBaseRenderer {

public:
    SGRColourPickerRenderer(int type);
    int type;
    float hue;
    float saturation;
    float lightness;
    float transparency;
    SGRRenderingProgramme* createRenderingProgramme() override;
    void initialise() override;
    void cleanResourcesOnDestruction() override;
    void uploadShaderData() override;
    void requestRenderCommands(SGRCommandRequest* commandRequest) override;
    SGRVertexBufferObject* vbo;
    SGRElementBufferObject* ebo;
};

A few things to take note of here

The renderer must inherit from SGRBaseRenderer, because SGRBaseRenderer provides everything necessary to render. It also must implement all pure virtual functions in SGRBaseRenderer.

The renderer manages its own SGRVertexBufferObject and SGRElementBufferObject. These are the vertex buffer object and the element buffer object, which tells the GPU which vertices to use for each triangle.

Also, the renderer contains its own data. This data is stored on the renderer's thread and can be safely accessed by it whenever necessary. The syncer syncs this data with what it should be on every frame.

And the code for the syncer

class SGRColourPickerSyncer : public SGRBaseSyncer {

public:
    SGRColourPickerSyncer();
    float hue;
    float saturation;
    float lightness;
    float transparency;
    void sync(SGRBaseRenderer* renderControl) override;
};

Just like how the renderer must inherit from SGRBaseRenderer, the syncer must inherit from SGRBaseSyncer and implement its pure virtual functions.

The syncer also keeps its own copy of all the variables. This copy can be freely read from and written to by other parts of the programme at any time, since it is on the main thread.

Next we can work on implementing everything.

Here you will see how much easier it is to work with SG - RI as compared to other graphics frameworks.

First, the constructor of the renderer.

SGRColourPickerRenderer::SGRColourPickerRenderer(int type){
    (*this).type = type;
    (*this).hue = 0.0f;
    (*this).saturation = 0.0f;
    (*this).lightness = 0.0f;
    (*this).transparency = 0.0f;
    (*this).vbo = nullptr;
    (*this).ebo = nullptr;
}

This does nothing besides initialising the member variables, because the renderer is created before anything on the GPU side even starts to be set up. The actual setting up is done in the implementation of SGRBaseRenderer::initialise.

Then we implement SGRBaseRenderer::createRenderingProgramme. This creates a SGRRenderingProgramme which can tell the GPU what to do.

SGRRenderingProgramme* SGRColourPickerRenderer::createRenderingProgramme(){
    SGRRenderingProgramme* rp = new SGRRenderingProgramme(this);
    (*rp).setShaderQSBFiles(":/SGEXTN/colourpicker.vert.qsb", ":/SGEXTN/colourpicker.frag.qsb");
    (*rp).addUniformBufferObject(20, 1);
    (*rp).finaliseShaderResource();
    (*rp).addVertexBufferObject(2 * 4);
    (*rp).addVertexProperty(0, 0, 0, SGRGraphicsLanguageType::Float, 2);
    (*rp).finaliseVertices();
    (*rp).finaliseRenderingProgramme();
    return rp;
}

Normally, this step is extremely complicated, but in SG - RI, it is much simpler.

We first create a SGRRenderingProgramme called rp. A pointer to the renderer is passed to indicate which renderer the SGRRenderingProgramme is associated with.

Then we set the shader files. This step tells the SGRRenderingProgramme what vertex shader and fragment shader it should use.

After that, we add a uniform buffer object using SGRRenderingProgramme::addUniformBufferObject with a size of 20 bytes and binding point 1. This is selection which we declared earlier in the fragment shader. Note how the binding point matches.

The length of a uniform buffer object depends on what it contains and can be calculated using std140 alignment rules. If you are not passing any vectors, the rules just say that the length is the sum of the sizes of everything you use, packed tightly in order of declaration. In our case, 4 floating point numbers and 1 int sum to 20 bytes.
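The 20 byte figure can be checked with a bit of arithmetic. The sketch below mirrors the layout of selection; the constant names are illustrative, not part of SG - RI.

```cpp
#include <cstddef>

// Byte layout of the selection uniform buffer object under std140, with
// only scalar members packed tightly in declaration order:
//   hue @ 0, saturation @ 4, lightness @ 8, transparency @ 12, type @ 16
constexpr std::size_t floatSize = 4;
constexpr std::size_t intSize = 4;
constexpr std::size_t selectionSize = 4 * floatSize + intSize; // 20 bytes

// Note: vec3 or vec4 members would introduce 16 byte alignment and padding,
// so this simple sum only holds for scalar-only uniform buffer objects.
```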

Since we only need 1 uniform buffer object and no textures, we can call SGRRenderingProgramme::finaliseShaderResource after adding the uniform buffer object.

Then we add a vertex buffer object with a size of 8 bytes per vertex. SG - RI allows you to add many vertex buffer objects that can each contain a different part of the vertex, but here we only need 1.

The size per vertex must match the size of all input variables that the vertex shader receives. In our case, it is a vec2, or 2 floats, giving a length of 8 bytes.

We then use SGRRenderingProgramme::addVertexProperty to register the vertex shader input variable. This tells the GPU where the variable can be found (both the vertex buffer object number and the offset), at which location of the vertex shader input it should be placed, and the type of the variable. In our case, the data is sourced from offset 0 of each vertex in vertex buffer object number 0, placed at location 0 of the vertex shader, and consists of 2 floating point numbers.

Since those are all the inputs the vertex shader needs, we can call SGRRenderingProgramme::finaliseVertices. After that, we call SGRRenderingProgramme::finaliseRenderingProgramme because we are done setting up the SGRRenderingProgramme.

If you are familiar with graphics terminology, this process is setting up the rendering pipeline. Note how it is so much simpler in SG - RI compared to any other graphics framework.

Our next step is to set up the vertex buffer object and element buffer object.

void SGRColourPickerRenderer::initialise(){
    vbo = new SGRVertexBufferObject(this, 4 * 2 * 4);
    SGLArray<float> vt(0.0f, 0.0f, 1.0f, 0.0f, 0.0f, 1.0f, 1.0f, 1.0f);
    (*renderingProgramme()).updateDataBuffer(vbo, 0, 4 * 2 * 4, vt.pointerToData(0));
    ebo = new SGRElementBufferObject(this, 2 * 3 * 4);
    SGLArray<int> et(0, 1, 2, 1, 2, 3);
    (*renderingProgramme()).updateDataBuffer(ebo, 0, 2 * 3 * 4, et.pointerToData(0));
}

We create a SGRVertexBufferObject linked to the renderer and with a size of 4 (number of vertices) x 2 (floats per vertex) x 4 (bytes per float) bytes.

We then create the data to be put into the vertex buffer object: (0, 0), (1, 0), (0, 1), (1, 1). The x and y coordinates are relative to the SGWidget (or more specifically, the SGRRendererWidget) using this renderer. (0, 0) is the top left corner and (1, 1) is the bottom right corner.

We use SGRRenderingProgramme::updateDataBuffer to upload the data; both SGRVertexBufferObject and SGRElementBufferObject inherit from SGRDataBuffer, so either fits the argument. A pointer to the SGRRenderingProgramme being used can be obtained from SGRBaseRenderer::renderingProgramme.

Similarly, we create an element buffer object with 2 (number of triangles) x 3 (vertices per triangle) x 4 (bytes per integer) bytes to store information about how the vertices should be used. We write in 0, 1, 2, 1, 2, 3, which means that one triangle should use vertices 0, 1, 2 and the other should use vertices 1, 2, 3.
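The way the GPU interprets those indices can be sketched in C++: every group of three consecutive indices picks three entries out of the vertex data to form one triangle. The struct and function names here are illustrative, not part of SG - RI.

```cpp
#include <array>

// The same vertex and index data as in initialise() above.
struct Vec2 { float x; float y; };

const std::array<Vec2, 4> vertices{{{0.0f, 0.0f}, {1.0f, 0.0f}, {0.0f, 1.0f}, {1.0f, 1.0f}}};
const std::array<int, 6> indices{0, 1, 2, 1, 2, 3};

// Returns the three corners of triangle t (t = 0 or 1 here), mimicking how
// the GPU resolves the element buffer against the vertex buffer.
std::array<Vec2, 3> triangleCorners(int t){
    return {vertices[indices[3 * t]], vertices[indices[3 * t + 1]], vertices[indices[3 * t + 2]]};
}
```

Triangle 0 covers the top left half of the widget and triangle 1 the bottom right half; together they tile the whole rectangle.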

When the triangles are combined, they form a rectangle that covers the area of the associated SGWidget ⁽㈳㈴㈳㈮㈱㈨㈠㈫ ㈧㈤㈱㈤⁾ perfectly. This is called a fullscreen quad.

Next we define how to clean up when the renderer is deleted. Since the only parts that we own are the vertex buffer object and the element buffer object, we just delete these.

void SGRColourPickerRenderer::cleanResourcesOnDestruction(){
    delete vbo;
    delete ebo;
}

The SGRRenderingProgramme is managed internally by SG - RI and will be deleted automatically. Freeing of GPU side memory is also done automatically.

Then we can tell the GPU how to update the uniform buffer objects. SG_RI_builtin is updated automatically so you do not need to worry about that.

To update selection, we do this

void SGRColourPickerRenderer::uploadShaderData(){
    SGLArray<float> ut(hue, saturation, lightness, transparency);
    (*renderingProgramme()).updateShaderUniforms(1, 0, 16, ut.pointerToData(0));
    (*renderingProgramme()).updateShaderUniforms(1, 16, 4, &type);
}

Note how the binding point of 1 passed to SGRRenderingProgramme::updateShaderUniforms matches the binding point 1 of selection as declared in the fragment shader.

Also note how in the first call, we write bytes 0 - 15, and in the second call, we write bytes 16 - 19, filling up all 20 allocated bytes that we declared to be the length of the uniform buffer object.

Finally, time to actually draw stuff on the screen.

void SGRColourPickerRenderer::requestRenderCommands(SGRCommandRequest* commandRequest){
    (*commandRequest).addVertexBufferObject(vbo, 0);
    (*commandRequest).chooseElementBufferObject(ebo);
    (*commandRequest).finaliseForDraw();
    (*commandRequest).drawTriangles(2, 0);
}

This makes a draw call. On every frame, we must rebind the vertex buffer objects and element buffer object. After binding everything, we need to call SGRCommandRequest::finaliseForDraw before using SGRCommandRequest::drawTriangles to actually draw the triangles.

Note that the 2 passed to SGRCommandRequest::drawTriangles is the number of triangles, not number of vertices, to draw. SG - RI only supports drawing triangles.

We only have the syncer remaining to implement. Similar to the renderer, it is created before anything on the GPU happens, so its constructor does nothing.

SGRColourPickerSyncer::SGRColourPickerSyncer(){
    (*this).hue = 0.0f;
    (*this).saturation = 0.0f;
    (*this).lightness = 0.0f;
    (*this).transparency = 0.0f;
}

In SGRBaseSyncer::sync, we simply copy over the data. This is the only safe point where data can be exchanged between the renderer and the syncer.

void SGRColourPickerSyncer::sync(SGRBaseRenderer* renderControl){
    SGRColourPickerRenderer* rc = static_cast<SGRColourPickerRenderer*>(renderControl);
    (*rc).hue = hue;
    (*rc).saturation = saturation;
    (*rc).lightness = lightness;
    (*rc).transparency = transparency;
}

And we are done.

To use the custom renderer, we simply create a SGRRendererWidget containing it.

SGRColourPickerSyncer* syncer = new SGRColourPickerSyncer();
new SGRRendererWidget(realBg, 0.0f, 0.5f, 0.0f, 0.5f, 1.0f, -1.0f, 0.0f, 1.25f, new SGRColourPickerRenderer(1), SGWColourPicker::hueSync);

Remember to keep a pointer to the syncer so that you can update the information to be synced each frame. After updating the information, call SGRRendererWidget::updateCustomRenderer to redraw.
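That update pattern can be sketched as follows. The two struct definitions below are minimal stand-ins so the sketch is self-contained; in a real project you would use the SGRColourPickerSyncer and SGRRendererWidget from this tutorial, and setHue is an illustrative name, not an SG - RI function.

```cpp
#include <cstdio>

// Minimal stand-ins for the real SGEXTN classes (illustration only).
struct SGRColourPickerSyncer { float hue = 0.0f; float saturation = 0.0f; float lightness = 0.0f; float transparency = 0.0f; };
struct SGRRendererWidget { void updateCustomRenderer(){ std::puts("redraw requested"); } };

// The pattern: write to the syncer's copy (main thread, always safe), then
// request a redraw; sync() copies the data to the renderer on the next frame.
void setHue(SGRColourPickerSyncer* syncer, SGRRendererWidget* widget, float newHue){
    (*syncer).hue = newHue;
    (*widget).updateCustomRenderer();
}
```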

©2025 05524F.sg (Singapore)
