
If you have not done so, read this full tutorial on how to use SGEXTN to build an application.

Full Tutorial Part 17

See here for the previous part of the tutorial.

In the previous part, we built the circle display.

shaders (part 2)

In this part of the tutorial, we will build the polygon display.

Unlike the circle display which used the fragment shader to determine the circle's shape, this time we will be using actual vertices.

We will use the same coordinate system for the vertices as the one used in the fragment shader of the circle display. The polygon will have all of its vertices on the circle centred at the origin with a radius of 0.75.

Since the geometry will not cover the entire widget, the background colour will be drawn using an SGWBlankWidget with a custom background colour.

First, we write the vertex shader in a file at shaders/polygon.vert.

#version 310 es
precision highp float;
precision highp int;

layout(std140, binding = 0) uniform SG_RI_builtin_{
    float x;
    float y;
    float width;
    float height;
    float windowWidth;
    float windowHeight;
    int offscreen;
} SG_RI_builtin;

vec4 SG_RI_transform(vec4 prelimPosition){
    prelimPosition = vec4(2.0f * (prelimPosition.x * SG_RI_builtin.width / SG_RI_builtin.windowWidth + SG_RI_builtin.x / SG_RI_builtin.windowWidth) - 1.0f, -2.0f * (prelimPosition.y * SG_RI_builtin.height / SG_RI_builtin.windowHeight + SG_RI_builtin.y / SG_RI_builtin.windowHeight) + 1.0f, prelimPosition.z, prelimPosition.w);
    if(SG_RI_builtin.offscreen != 0){
        prelimPosition = vec4(prelimPosition.x, -1.0f * prelimPosition.y, prelimPosition.z, prelimPosition.w);
    }
    return prelimPosition;
}

layout(location = 0) in vec2 vertex;

void main(){
    vec2 vertexCoords = vec2(0.5f * vertex.x, 0.5f * vertex.y);
    if(SG_RI_builtin.width > SG_RI_builtin.height){
        vertexCoords = vec2(vertexCoords.x * SG_RI_builtin.height / SG_RI_builtin.width, vertexCoords.y);
    }
    else{
        vertexCoords = vec2(vertexCoords.x, vertexCoords.y * SG_RI_builtin.width / SG_RI_builtin.height);
    }
    vertexCoords += vec2(0.5f, 0.5f);
    gl_Position = vec4(vertexCoords.x, vertexCoords.y, 0.0f, 1.0f);
    gl_Position = SG_RI_transform(gl_Position);
}

This essentially converts the coordinate system that the vertices are in to the coordinate system used by SG-RI. The code block is really long, but most of it is boilerplate that must be copied into every SG-RI vertex shader.

Next we write the fragment shader.

This is even simpler, as it just outputs whatever colour the uniform is set to.

#version 310 es
precision highp float;
precision highp int;

layout(std140, binding = 1) uniform data_{
    vec4 foregroundColour;
} data;

layout(location = 0) out vec4 outColour;

void main(){
    outColour = data.foregroundColour;
}

Then, in the header file include/SGCLPPolygonDisplay.h, we can define the SGCLPPolygonDisplay class, inheriting from SGRBaseRenderer.

class SGCLPPolygonDisplay : public SGRBaseRenderer {
public:
    SGCLPPolygonDisplay(int vertexCount, SGXColourRGBA fg);
    int vertexCount;
    SGXColourRGBA foregroundColour = {};
    SGRRenderingProgramme* createRenderingProgramme() override;
    void initialise() override;
    void cleanResourcesOnDestruction() override;
    void uploadShaderData() override;
    void requestRenderCommands(SGRCommandRequest* commandRequest) override;
    SGRVertexBufferObject* vbo;
    SGRElementBufferObject* ebo;
};

We then implement everything.

SGCLPPolygonDisplay::SGCLPPolygonDisplay(int vertexCount, SGXColourRGBA fg){
    (*this).foregroundColour = fg;
    (*this).vertexCount = vertexCount;
    (*this).vbo = nullptr;
    (*this).ebo = nullptr;
}

SGRRenderingProgramme* SGCLPPolygonDisplay::createRenderingProgramme(){
    SGRRenderingProgramme* rp = new SGRRenderingProgramme(this);
    (*rp).setShaderQSBFiles(":/ColoursPlusPlus/polygon.vert.qsb", ":/ColoursPlusPlus/polygon.frag.qsb");
    (*rp).addUniformBufferObject(16, 1);
    (*rp).finaliseShaderResource();
    (*rp).addVertexBufferObject(2 * 4);
    (*rp).addVertexProperty(0, 0, 0, SGRGraphicsLanguageType::Float, 2);
    (*rp).finaliseVertices();
    (*rp).finaliseRenderingProgramme();
    return rp;
}

void SGCLPPolygonDisplay::cleanResourcesOnDestruction(){
    delete vbo;
    delete ebo;
}

void SGCLPPolygonDisplay::uploadShaderData(){
}

void SGCLPPolygonDisplay::requestRenderCommands(SGRCommandRequest *commandRequest){
    (*commandRequest).addVertexBufferObject(vbo, 0);
    (*commandRequest).chooseElementBufferObject(ebo);
    (*commandRequest).finaliseForDraw();
    (*commandRequest).drawTriangles(vertexCount, 0);
}

These do essentially the same things you have already seen in the implementation of SGCLPCircleDisplay, so they are not very interesting.

The interesting part of SGCLPPolygonDisplay is in the SGCLPPolygonDisplay::initialise function where the vertices are generated.

void SGCLPPolygonDisplay::initialise(){
    SGLArray<float> vt(2 * (vertexCount + 1));
    vt.at(0) = 0.0f;
    vt.at(1) = 0.0f;
    for(int i = 0; i < vertexCount; i++){
        float angle = -0.5f * SGLFloatConstants::pi() + 2.0f * SGLFloatConstants::pi() / static_cast<float>(vertexCount) * static_cast<float>(i);
        vt.at(2 + 2 * i) = 0.75f * SGLFloatMath::cosine(angle);
        vt.at(3 + 2 * i) = 0.75f * SGLFloatMath::sine(angle);
    }
    vbo = new SGRVertexBufferObject(this, 4 * 2 * (vertexCount + 1));
    (*renderingProgramme()).updateDataBuffer(vbo, 0, 4 * 2 * (vertexCount + 1), vt.pointerToData(0));
    SGLArray<int> et(3 * vertexCount);
    for(int i = 0; i < vertexCount; i++){
        et.at(3 * i) = 0;
        et.at(3 * i + 1) = i + 1;
        et.at(3 * i + 2) = i + 2;
    }
    et.at(3 * vertexCount - 1) = 1;
    ebo = new SGRElementBufferObject(this, 4 * 3 * vertexCount);
    (*renderingProgramme()).updateDataBuffer(ebo, 0, 4 * 3 * vertexCount, et.pointerToData(0));
    SGLArray<float> ut(foregroundColour.getRedAsFloat(), foregroundColour.getGreenAsFloat(), foregroundColour.getBlueAsFloat(), foregroundColour.getTransparencyAsFloat());
    (*renderingProgramme()).updateShaderUniforms(1, 0, 16, ut.pointerToData(0));
}

This function can roughly be divided into 3 parts. The first part generates the vertex buffer object, the second part generates the element buffer object, and the last part uploads the uniforms. We will focus on the first 2 parts.

First part:

SGLArray<float> vt(2 * (vertexCount + 1));
vt.at(0) = 0.0f;
vt.at(1) = 0.0f;
for(int i = 0; i < vertexCount; i++){
    float angle = -0.5f * SGLFloatConstants::pi() + 2.0f * SGLFloatConstants::pi() / static_cast<float>(vertexCount) * static_cast<float>(i);
    vt.at(2 + 2 * i) = 0.75f * SGLFloatMath::cosine(angle);
    vt.at(3 + 2 * i) = 0.75f * SGLFloatMath::sine(angle);
}

Here, vt is an SGLArray containing all the vertices to be uploaded into the vertex buffer object, with 2 numbers for each vertex.
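As a sanity check of the angle arithmetic, here is a standalone C++ sketch of the same vertex generation, using std::vector and <cmath> in place of SGLArray and SGLFloatMath (which are assumed here to behave like their standard counterparts):

```cpp
#include <cmath>
#include <vector>

// Same layout as the tutorial's initialise(): vertex 0 is the centre,
// followed by vertexCount points on a circle of radius 0.75, starting
// at angle -pi/2 and going around at equal angular steps.
std::vector<float> generateVertices(int vertexCount) {
    const float pi = 3.14159265358979f;
    std::vector<float> vt(2 * (vertexCount + 1));
    vt[0] = 0.0f;  // centre vertex
    vt[1] = 0.0f;
    for (int i = 0; i < vertexCount; i++) {
        float angle = -0.5f * pi + 2.0f * pi / vertexCount * i;
        vt[2 + 2 * i] = 0.75f * std::cos(angle);
        vt[3 + 2 * i] = 0.75f * std::sin(angle);
    }
    return vt;
}
```

For vertexCount = 4, the centre vertex is followed by perimeter vertices at approximately (0, -0.75), (0.75, 0), (0, 0.75) and (-0.75, 0), a square inscribed in the circle.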

Vertex 0 is the origin, while the remaining vertices are what you would expect: vertices around the perimeter of the regular polygon. Wait... a regular polygon does not have a vertex at its centre. Is this code not passing one more vertex than necessary, and thus being inefficient?

To understand the reason for this design, we need to look at how the polygon is actually being drawn. The GPU can only draw triangles, nothing else. So to draw a polygon, we need to divide it into many triangles. This process is called triangulation.

In a good triangulation, most triangles produced should be roughly equilateral and triangles should not vary in size too much. Using a bad triangulation can lead to visual artifacts when textures are being used. Although we are not using textures here, it is a good habit to use good triangulations.
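One rough way to quantify triangulation quality is the smallest interior angle that appears: the closer it is to 60 degrees, the closer the triangles are to equilateral. The sketch below (a standalone illustration, not SGEXTN code) measures this for a regular hexagon: a fan of triangles around the centre is all equilateral (smallest angle 60 degrees), while a fan around one perimeter vertex produces triangles with angles as small as 30 degrees:

```cpp
#include <array>
#include <cmath>
#include <utility>
#include <vector>

// Angle, in degrees, at vertex b of the triangle (a, b, c).
static float angleAtDegrees(std::pair<float, float> a,
                            std::pair<float, float> b,
                            std::pair<float, float> c) {
    float ux = a.first - b.first, uy = a.second - b.second;
    float vx = c.first - b.first, vy = c.second - b.second;
    float cosine = (ux * vx + uy * vy) /
                   (std::hypot(ux, uy) * std::hypot(vx, vy));
    return std::acos(cosine) * 180.0f / 3.14159265f;
}

// Smallest interior angle over a triangulation given as index triples.
static float minAngleDegrees(const std::vector<std::pair<float, float>>& pts,
                             const std::vector<std::array<int, 3>>& tris) {
    float best = 180.0f;
    for (const auto& t : tris) {
        best = std::min(best, angleAtDegrees(pts[t[0]], pts[t[1]], pts[t[2]]));
        best = std::min(best, angleAtDegrees(pts[t[1]], pts[t[2]], pts[t[0]]));
        best = std::min(best, angleAtDegrees(pts[t[2]], pts[t[0]], pts[t[1]]));
    }
    return best;
}
```

Comparing the two fans over the same hexagon shows why the extra centre vertex pays off: it doubles the smallest angle of the worst triangle.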

How exactly the triangles are drawn on the screen is determined by the element buffer object, where each group of 3 numbers defines a triangle connecting the 3 vertices at those indices.

SGLArray<int> et(3 * vertexCount);
for(int i = 0; i < vertexCount; i++){
    et.at(3 * i) = 0;
    et.at(3 * i + 1) = i + 1;
    et.at(3 * i + 2) = i + 2;
}
et.at(3 * vertexCount - 1) = 1;

From the code in the second part of the implementation, we can see that the triangulation being used slices the polygon into identical triangles around the centre. This is not perfect, but it is better than slicing the polygon into triangles around a vertex. When a good triangulation is absolutely necessary (when you use textures), you can consider using a Delaunay triangulation or constrained Delaunay triangulation.
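The wrap-around in the last line of that loop is easier to see with concrete numbers. Here is a hypothetical standalone version of the index generation, again with std::vector in place of SGLArray; for vertexCount = 5 it produces the triangles (0, 1, 2), (0, 2, 3), (0, 3, 4), (0, 4, 5) and (0, 5, 1):

```cpp
#include <vector>

// Fan triangulation around the centre vertex (index 0), matching the
// tutorial's element buffer: triangle i connects the centre to perimeter
// vertices i + 1 and i + 2.
std::vector<int> generateIndices(int vertexCount) {
    std::vector<int> et(3 * vertexCount);
    for (int i = 0; i < vertexCount; i++) {
        et[3 * i]     = 0;
        et[3 * i + 1] = i + 1;
        et[3 * i + 2] = i + 2;
    }
    // The loop's last triangle references the nonexistent vertex
    // vertexCount + 1; overwrite it with 1 so the fan closes cleanly.
    et[3 * vertexCount - 1] = 1;
    return et;
}
```

Without the final overwrite, the last triangle would index one past the end of the vertex array instead of wrapping back to perimeter vertex 1.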

Now we can add these lines to SGCLPDisplayPage::initialise.

else if(SGCLPOptionsPage::chosenPattern == SGCLPOptionsPage::Polygon){
    SGWBlankWidget* w = new SGWBlankWidget(bg, 0.0f, 0.0f, 0.0f, 1.0f, 1.0f, 0.0f, 1.0f, -1.0f);
    (*w).setColour(SGCLPOptionsPage::chosenBackgroundColour);
    new SGRRendererWidget(bg, 0.0f, 0.0f, 0.0f, 1.0f, 1.0f, 0.0f, 1.0f, -1.0f, new SGCLPPolygonDisplay(SGCLPOptionsPage::chosenVertexCount, SGCLPOptionsPage::chosenForegroundColour), nullptr);
}

Apart from creating an SGRRendererWidget to display the renderer, this also creates an SGWBlankWidget and uses SGWBlankWidget::setColour to set its colour from an SGXColourRGBA. All colours of SGWidget UI elements can be fully customised using the theme system or by directly setting the colour.

Testing shows that the SGRRendererWidget is working as expected, so we can run a clang-tidy check and commit to GitHub.

See here for the next part of the tutorial.

©2025 05524F.sg (Singapore)
