Chapter 3. Shaders and X3D

The graphics processing unit (GPU) is given a set of 3D points (vertexes) connected into triangles. For each vertex, some work must be performed, at the very least transforming it from object space into clip space; this is where we move and rotate our objects and apply the perspective projection. Vertex shaders allow us to replace this per-vertex work with a custom program written in a special shading language. When the vertexes are processed, the GPU performs rasterization, determining which screen pixels are actually covered by the triangles. Then each pixel is drawn, which involves calculating the actual pixel color; for example, we can mix the color from the lighting calculations with the texture color. Fragment (pixel) shaders allow us to replace this per-pixel work with a custom program.

An optional geometry shader may also change the primitives between the vertex and fragment processing. We will talk about geometry shaders more in Chapter 6, Extensions for geometry shaders.
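For illustration, below is a minimal "pass-through" geometry shader that simply re-emits each incoming triangle unchanged. This sketch uses one possible syntax (GLSL version 1.50); Chapter 6 discusses the actual extensions in detail.

#version 150
layout(triangles) in;
layout(triangle_strip, max_vertices = 3) out;
void main(void)
{
  /* re-emit the three vertexes of the incoming triangle as-is */
  for (int i = 0; i < 3; i++) {
    gl_Position = gl_in[i].gl_Position;
    EmitVertex();
  }
  EndPrimitive();
}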

The most popular real-time shading languages right now are OpenGL GLSL [GLSL], NVIDIA Cg [Cg] and Direct3D HLSL [HLSL]. They are used for the same purposes and offer practically the same possibilities. X3D, and our extensions for compositing shaders described in this paper, support all three of these languages.

The current implementation of our extensions supports only GLSL (the OpenGL Shading Language), which is probably the most natural choice for an engine based on OpenGL. As such, most of the examples in this paper use GLSL.

Example GLSL vertex shader and accompanying fragment shader:

/* vertex shader */
void main(void)
{
  /* pass unchanged texture coordinate to the fragment shader */
  gl_TexCoord[0] = gl_MultiTexCoord0;
  /* calculate vertex position in clip space */
  gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}
/* fragment shader */
/* myTexture contents are loaded outside of the shader program,
   by appropriate OpenGL calls. */
uniform sampler2D myTexture;
void main(void)
{
  /* take the texture color at given coordinates, multiply by 2,
     use it as the fragment (pixel) color */
  gl_FragColor = texture2D(myTexture, gl_TexCoord[0].st) * 2.0;
}
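
Note that this example uses the built-in variables of early desktop GLSL (up to version 1.20), like gl_Vertex, gl_MultiTexCoord0 and gl_ModelViewProjectionMatrix, which expose the fixed-function state to shaders. Later core-profile GLSL versions removed them in favor of user-declared attributes and uniforms, but they keep the examples short and match the style used throughout this paper.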

The shader source code has to be processed and passed to the underlying 3D rendering library, like OpenGL, which in turn passes it to the hardware (GPU). Fortunately, the complexity of this operation (and the differences between the shading languages at this step) can be completely ignored by us. That is because the standard X3D Programmable shaders component [X3D Shaders] gives us a simple way to attach shader source code to a 3D shape. The X3D browser does all the necessary work of handing the shader over to the underlying libraries and hardware.

A simple working example showing X3D with GLSL shader code:

#X3D V3.2 utf8
PROFILE Interchange
Shape {
  appearance Appearance {
    shaders ComposedShader {
      language "GLSL"
      parts ShaderPart {
        type "FRAGMENT"
        url "data:text/plain,
          void main(void)
          {
            /* just draw the pixel red */
            gl_FragColor = vec4(1.0, 0.0, 0.0, 1.0);
          }"
      }
    }
  }
  geometry Sphere { radius 2 }
}
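
The GLSL source is inlined here through the "data:" URL scheme; the url field of a ShaderPart could just as well point to an external file containing the shader code.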

A slightly longer example, showing X3D with the previous shader code (multiplying the texture color by 2):

#X3D V3.2 utf8
PROFILE Interchange
Shape {
  appearance Appearance {
    shaders ComposedShader {
      language "GLSL"
      inputOutput SFNode myTexture ImageTexture { url "test_texture.png" }
      parts [
        ShaderPart {
          type "VERTEX"
          url "data:text/plain,
            void main(void)
            {
              gl_TexCoord[0] = gl_MultiTexCoord0;
              gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
            }"
        }
        ShaderPart {
          type "FRAGMENT"
          url "data:text/plain,
            uniform sampler2D myTexture;
            void main(void)
            {
              gl_FragColor = texture2D(myTexture, gl_TexCoord[0].st) * 2.0;
            }"
        }
      ]
    }
  }
  geometry Sphere { radius 2 }
}
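
Note how the myTexture field declared inside the ComposedShader node supplies the value of the uniform sampler2D myTexture in the fragment shader: each field of a ComposedShader is mapped to the shader uniform variable of the same name, which is how X3D passes parameters (including textures) to shaders.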

The shader code provided this way replaces the standard calculations done by the GPU. This means that all the lighting and texturing effects, if needed, have to be reimplemented from scratch in our shader. There is no way to combine our shader with the standard rendering features, and it is impossible to automatically combine two shader sources. This drawback reflects the design of the hardware, and the whole work presented in this paper strives to overcome this problem.
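
To give a feeling for the scale of this problem, below is a sketch of such a reimplementation: a vertex and fragment shader pair handling only the diffuse term of a single positional light. The fixed-function pipeline additionally handles multiple lights, ambient and specular terms, attenuation, spot lights, fog and texturing, all of which a replacing shader would have to recreate.

/* vertex shader */
varying vec3 normal;
varying vec3 vertexToLight;
void main(void)
{
  /* transform the vertex position and the normal into eye space */
  vec3 eyeVertex = vec3(gl_ModelViewMatrix * gl_Vertex);
  normal = gl_NormalMatrix * gl_Normal;
  /* direction from the vertex to the (positional) light source */
  vertexToLight = vec3(gl_LightSource[0].position) - eyeVertex;
  gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}
/* fragment shader */
varying vec3 normal;
varying vec3 vertexToLight;
void main(void)
{
  /* diffuse term: cosine of the angle between the surface
     normal and the direction to the light, clamped to zero */
  float diffuse = max(dot(normalize(normal), normalize(vertexToLight)), 0.0);
  gl_FragColor = vec4(gl_FrontMaterial.diffuse.rgb
    * gl_LightSource[0].diffuse.rgb * diffuse, 1.0);
}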