Texturing component - extensions


1. Bump mapping (normalMap, heightMap, heightMapScale fields of Appearance)

We add to the Appearance node new fields useful for bump mapping:

Appearance : X3DAppearanceNode {
  ... all previous Appearance fields ...
  SFNode     [in,out]      normalMap        NULL        # only 2D texture nodes (ImageTexture, MovieTexture, PixelTexture) allowed
  SFNode     [in,out]      heightMap        NULL        # deprecated; only 2D texture nodes (ImageTexture, MovieTexture, PixelTexture) allowed
  SFFloat    [in,out]      heightMapScale   0.01        # must be > 0
}
Leaf (without bump mapping)
Leaf (with bump mapping)
Lion texture (without parallax mapping)
Lion texture (with parallax mapping)

RGB channels of the texture specified as normalMap describe normal vector values of the surface. Normal vectors are encoded as colors: vector (x, y, z) should be encoded as RGB((x+1)/2, (y+1)/2, (z+1)/2).

You can use e.g. the GIMP normalmap plugin to generate such normal maps from your textures. Hint: remember to check "invert y" when generating normal maps. In image editing programs, image Y grows down, but we want Y (as interpreted by normals) to grow up, just like the texture T coordinate.

Such a normal map is enough to use the classic bump mapping method, and already enhances the visual look of your scene. For the most effective results, place a dynamic light source in the scene; the bump mapping effect is then obvious.

You can additionally specify a height map. Since version 3.10.0 of view3dscene (2.5.0 of the engine), the height map is specified within the alpha channel of the normalMap texture. This leads to an easy and efficient implementation, and is also easy for texture creators: in the GIMP normalmap plugin, just set "Alpha Channel" to "Height". A height map allows using the more sophisticated parallax bump mapping algorithm; in fact, we have a full steep parallax mapping implementation with self-shadowing. This can make the effect truly amazing, but also slower.

If the height map (that is, the alpha channel of normalMap) exists, then we also look at the heightMapScale field. This allows you to tweak the perceived height of bumps for parallax mapping.

Since version 3.10.0 of view3dscene (2.5.0 of the engine), the new shader pipeline allows bump mapping to cooperate with all normal VRML/X3D lighting and multi-texturing settings. The same lights and textures are used in the bump mapping lighting equations, only with more interesting normals.

Note that bump mapping only works if you also assigned a regular (2D) texture to your shape. We assume that the normal map and height map are mapped on your surface in the same way (same texture coordinates, same texture transform) as the first texture (in case of multi-texturing).
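
For example, a Shape using these fields could look like this (a sketch in X3D classic encoding; the texture file names are hypothetical):

```
Shape {
  appearance Appearance {
    material Material { }
    # Regular color texture, mapped as usual.
    texture ImageTexture { url "leaf.png" }
    # Normal map: RGB = encoded normals, alpha = optional height map.
    normalMap ImageTexture { url "leaf_normal_map.png" }
    # Make the parallax bumps more pronounced than the default 0.01.
    heightMapScale 0.1
  }
  geometry IndexedFaceSet { ... }
}
```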


Note: you can also use these fields within the KambiAppearance node instead of Appearance. This allows you to declare KambiAppearance by an EXTERNPROTO that falls back to the standard Appearance, so the bump mapping extensions will be gracefully omitted by other browsers. See our VRML/X3D demo models for examples.

2. Texture automatically rendered from a viewpoint (RenderedTexture node)

RenderedTexture demo
RenderedTexture with background and mirrors thrown in

This is a texture rendered from a specified viewpoint in the 3D scene. It can be used for a wide range of graphic effects; the most straightforward use is to make something like a "security camera" or a "portal", through which a player can peek at what happens in another place of the 3D world.

RenderedTexture : X3DTextureNode {
  SFNode     [in,out]      metadata              NULL             # [X3DMetadataObject]
  MFInt32    [in,out]      dimensions            128 128 4 1 1  
  SFString   [in,out]      update                "NONE"           # ["NONE"|"NEXT_FRAME_ONLY"|"ALWAYS"]
  SFNode     [in,out]      viewpoint             NULL             # [X3DViewpointNode] (VRML 1.0 camera nodes also allowed)
  SFNode     []            textureProperties     NULL             # [TextureProperties]
  SFBool     []            repeatS               TRUE           
  SFBool     []            repeatT               TRUE           
  SFBool     []            repeatR               TRUE           
  MFBool     [in,out]      depthMap              []             
  SFMatrix4f [out]         viewing                              
  SFMatrix4f [out]         projection                           
  SFBool     [out]         rendering                            
}

The first two numbers in the "dimensions" field specify the width and the height of the texture. (Our current implementation ignores the rest of the "dimensions" field.)

"update" is the standard field for automatically generated textures (works the same as for GeneratedCubeMapTexture or GeneratedShadowMap). It says when to actually generate the texture: "NONE" means never, "ALWAYS" means every frame (for fully dynamic scenes), "NEXT_FRAME_ONLY" says to update at the next frame (and afterwards change back to "NONE").

"viewpoint" allows you to explicitly specify viewpoint node from which to render to texture. Default NULL value means to render from the current camera (this is equivalent to specifying viewpoint node that is currently bound). Yes, you can easily see recursive texture using this, just look at the textured object. It's quite fun :) (It's not a problem for rendering speed — we always render texture only once in a frame.) You can of course specify other viewpoint node, to make rendering from there.

"textureProperties" is the standard field of all texture nodes. You can place there a TextureProperties node to specify magnification, minification filters (note that mipmaps, if required, will always be correctly automatically updated for RenderedTexture), anisotropy and such.

"repeatS", "repeatT", "repeatR" are also standard for texture nodes, specify whether texture repeats or clamps. For RenderedTexture, you may often want to set them to FALSE. "repeatR" is for 3D textures, useless for now.

"depthMap", if it is TRUE, then the generated texture will contain the depth buffer of the image (instead of the color buffer as usual). (Our current implementation only looks at the first item of MFBool field depthMap.)

"rendering" output event sends a TRUE value right before rendering to the texture, and sends FALSE after. It can be useful to e.g. ROUTE this to a ClipPlane.enabled field. This is our (Kambi engine) extension, not present in other implementations. In the future, "scene" field will be implemented, this will allow more flexibility, but for now the simple "rendering" event may be useful.

"viewing" and "projection" output events are also send right before rendering, they contain the modelview (camera) and projection matrices.

TODO: "scene" should also be supported. "background" and "fog" also. And the default background / fog behavior should change? To match the Xj3D, by default no background / fog means that we don't use them, currently we just use the current background / fog.

This is mostly compatible with the InstantReality and Xj3D RenderedTexture. We do not support all InstantReality fields, but the basic fields and usage remain the same.

3. Generate texture coordinates on primitives (Box/Cone/Cylinder/Sphere/Extrusion/Text.texCoord)

We add a texCoord field to various VRML/X3D primitives. You can use it to generate texture coordinates on a primitive, by the TextureCoordinateGenerator node (for example to make mirrors), or (for shadow maps) ProjectedTextureCoordinate.

You can even use multi-texturing on primitives, by MultiGeneratedTextureCoordinate node. This works exactly like standard MultiTextureCoordinate, except only coordinate-generating children are allowed.

Note that you cannot use explicit TextureCoordinate nodes for primitives, because you don't know the geometry of the primitive. For a similar reason you cannot use MultiTextureCoordinate (as it would allow TextureCoordinate as children).

Box / Cone / Cylinder / Sphere / Extrusion / Text {
  ... all normal fields ...
  SFNode     [in,out]      texCoord    NULL        # [TextureCoordinateGenerator, ProjectedTextureCoordinate, MultiGeneratedTextureCoordinate]
}

MultiGeneratedTextureCoordinate : X3DTextureCoordinateNode {
  SFNode     [in,out]      metadata    NULL        # [X3DMetadataObject]
  MFNode     [in,out]      texCoord    []          # [TextureCoordinateGenerator, ProjectedTextureCoordinate]
}

Note: MultiGeneratedTextureCoordinate is not available in older view3dscene versions <= 3.7.0.
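
For example, generated texture coordinates on a Box could look like this (a sketch in X3D classic encoding; the texture file name is hypothetical):

```
Shape {
  appearance Appearance {
    material Material { }
    texture ImageTexture { url "environment.png" }
  }
  geometry Box {
    size 2 2 2
    # "SPHERE" is a standard sphere-mapping generation mode.
    texCoord TextureCoordinateGenerator { mode "SPHERE" }
  }
}
```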

4. Generating 3D tex coords in world space (easy mirrors by additional TextureCoordinateGenerator.mode values)

Teapot with cube map reflections

TextureCoordinateGenerator.mode allows two additional generation modes:

  1. WORLDSPACEREFLECTIONVECTOR: generates reflection coordinates mapping to a 3D direction in world space. This makes the cube map reflection simulate a real mirror. It's analogous to the standard "CAMERASPACEREFLECTIONVECTOR", which does the same but in camera space, making the mirror reflect mostly the "back" side of the cube, regardless of how the scene is rotated.

  2. WORLDSPACENORMAL: uses the vertex normal, transformed to world space, as texture coordinates. Analogous to the standard "CAMERASPACENORMAL", which does the same but in camera space.

These modes are extremely useful for making mirrors. See the Cube map environmental texturing component and our VRML/X3D demo models for examples.
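
A world-space mirror sketch, combining this mode with a generated cube map (X3D classic encoding):

```
Shape {
  appearance Appearance {
    material Material { }
    # Cube map rendered from the shape's position, updated every frame.
    texture GeneratedCubeMapTexture { update "ALWAYS" }
  }
  geometry Sphere {
    # World-space reflection vector: the mirror reflects the actual
    # surroundings, regardless of scene rotation.
    texCoord TextureCoordinateGenerator { mode "WORLDSPACEREFLECTIONVECTOR" }
  }
}
```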

5. Tex coord generation dependent on bounding box (TextureCoordinateGenerator.mode = BOUNDS*)

Three more values for TextureCoordinateGenerator.mode:

  1. BOUNDS: Automatically generate nice texture coordinates, suitable for 2D or 3D textures. This is equivalent to either BOUNDS2D or BOUNDS3D, depending on what type of texture is actually used during rendering.
  2. BOUNDS2D: Automatically generate nice 2D texture coordinates, based on the local bounding box of given shape. This texture mapping is precisely defined by the VRML/X3D standard at IndexedFaceSet description.
  3. BOUNDS3D: Automatically generate nice 3D texture coordinates, based on the local bounding box of given shape. This texture mapping is precisely defined by the VRML/X3D standard at Texturing3D component, section "Texture coordinate generation for primitive objects".

Following the VRML/X3D standards, the above texture mappings are automatically used when you supply a texture but no texture coordinates for your shape. Our extensions make it possible to also request these mappings explicitly, when you really want to use a TextureCoordinateGenerator node. This is useful when working with multi-texturing (e.g. one texture unit may have the BOUNDS mapping, while another texture unit has a different mapping).
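
For example, a multi-textured Box where one unit uses the BOUNDS2D mapping and the other a different generated mapping (a sketch in X3D classic encoding; the texture file names are hypothetical):

```
Shape {
  appearance Appearance {
    material Material { }
    texture MultiTexture {
      texture [
        ImageTexture { url "base.png" }
        ImageTexture { url "detail.png" }
      ]
    }
  }
  geometry Box {
    texCoord MultiGeneratedTextureCoordinate {
      texCoord [
        TextureCoordinateGenerator { mode "BOUNDS2D" } # bounding-box mapping
        TextureCoordinateGenerator { mode "SPHERE" }   # sphere mapping
      ]
    }
  }
}
```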

6. Override alpha channel detection (field alphaChannel for ImageTexture, MovieTexture and other textures)

Demo of alphaChannel override

Our engine detects the alpha channel type of every texture automatically. There are three possible situations:

  1. The texture has no alpha channel (it is always opaque), or
  2. the texture has simple yes-no alpha channel (transparency rendered using alpha testing), or
  3. the texture has full range alpha channel (transparency rendered by blending, just like partially transparent materials).

The difference between a yes-no and a full range alpha channel is detected by analyzing the alpha channel values. Developers: see the AlphaChannel method reference; the default tolerance values used by the X3D renderer are 5 and 0.01. There is also a special program in the engine sources (see the examples/images_videos/image_identify.lpr demo) if you want to use this algorithm yourself. You can also see the results for your textures if you run view3dscene with the --debug-log option.

Sometimes you want to override results of this automatic detection. For example, maybe your texture has some pixels using full range alpha but you still want to use simpler rendering by alpha testing (that doesn't require sorting, and works nicely with shadow maps).

If you modify the texture contents at runtime (for example by scripts, like demo_models/castle_script/edit_texture.x3dv in the demo models), be aware that alpha channel detection happens only once. It is not repeated later, as this would be (1) slow and (2) could cause weird rendering changes. In this case you may also want to force a specific alpha channel treatment, if the initial texture contents are opaque but you want to later modify its alpha channel.

To enable this, we add a new field to all texture nodes (everything descending from X3DTextureNode, like ImageTexture and MovieTexture; also Texture2 in VRML 1.0):

X3DTextureNode {
  ... all normal X3DTextureNode fields ...
  SFString   []            alphaChannel  "AUTO"      # "AUTO", "NONE", "TEST" or "BLENDING"
}

The value AUTO means that automatic detection is used; this is the default. Other values force the specific alpha channel treatment and rendering, regardless of the initial texture contents.
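
For example, to force alpha testing on a texture with a full range alpha channel (a sketch in X3D classic encoding; the texture file name is hypothetical):

```
ImageTexture {
  url "smoke.png"       # texture with full range alpha channel
  alphaChannel "TEST"   # force simple alpha testing instead of blending
}
```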

7. Movies for MovieTexture can be loaded from images sequence

Fireplace demo screenshot
This movie shows how it looks animated.

Inside MovieTexture nodes, you can use a URL like my_animation_@counter(1).png to load a movie from a sequence of images. We will substitute @counter(<padding>) with successive numbers starting from 0 or 1 (if the file my_animation_0.png exists, we use it; otherwise we start from my_animation_1.png).

The parameter inside the @counter(<padding>) macro specifies the padding. The number will be padded with zeros to have at least the required length. For example, @counter(1).png results in names like 1.png, 2.png, ..., 9.png, 10.png, ... while @counter(4).png results in names like 0001.png, 0002.png, ..., 0009.png, 0010.png, ...
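
A usage sketch in X3D classic encoding (the file name prefix is hypothetical):

```
MovieTexture {
  # Loads fire_0001.png, fire_0002.png, fire_0003.png, ...
  url "fire_@counter(4).png"
  loop TRUE
}
```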

A movie loaded from image sequence will always run at the speed of 25 frames per second. (Developers: if you use a class like TGLVideo2D to play movies, you can customize the TGLVideo2D.FramesPerSecond property.)

A simple image filename (without the @counter(<padding>) macro) is also accepted as a movie URL. This just loads a trivial movie, consisting of a single, still frame.

Allowed image formats are just like everywhere in our engine — PNG, JPEG and many others, see glViewImage docs for the list.

Besides the fact that loading an image sequence doesn't require ffmpeg to be installed, an image sequence also has one very important advantage over any other movie format: you can use images with an alpha channel (e.g. in PNG format), and MovieTexture will be rendered with the alpha channel appropriately. This is crucial if you want to have a video of smoke or flame in your game, since such textures usually require an alpha channel.

Samples of MovieTexture usage are inside our VRML/X3D demo models, in subdirectory movie_texture/.

8. Texture for GUI (TextureProperties.guiTexture)

TextureProperties {
  SFBool     []            guiTexture  FALSE     
}

When the guiTexture field is TRUE, the texture is not forced to have a power-of-two size, and it never uses mipmaps. This is good for GUI stuff, or other textures where forcing a power-of-two size causes unacceptable loss of quality (and it is better to give up mipmaps).
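
A usage sketch in X3D classic encoding (the image file name is hypothetical):

```
ImageTexture {
  url "button.png"    # GUI image, possibly with a non-power-of-two size
  textureProperties TextureProperties {
    guiTexture TRUE   # do not resize to power-of-two, do not use mipmaps
  }
}
```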