Once you have a large game, with many large 3D models, you will probably start to wonder about the speed and memory usage.
The main measure of your game speed is the number of frames per second (FPS). The engine automatically keeps track of this for you, and exposes two important numbers: the "frame time" and the "real time". We will explain the difference between them below.
How to show them? However you like:
If you use TCastleWindow, you can trivially set TCastleWindow.FpsShowOnCaption to show the FPS on your window caption.
You can print them with Writeln. Don't call it too often, though, or your rendering will be slower. It's simplest to use TCastleTimer or the Lazarus TTimer to update the output, e.g. only once per second. These properties actually show an average from the last second, so there's no reason to refresh them more often anyway.
You can render them yourself on the screen (like the TGame2DControls example in an earlier chapter).
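The timer approach could be sketched like this (a sketch only; it assumes Window is your TCastleWindow, and that Window.Fps exposes RealTime and FrameTime properties, so verify the exact names against your engine version):

```pascal
{ Report the FPS once per second using TCastleTimer.
  Assumptions (not guaranteed by this manual chapter): Window is a
  TCastleWindow, and Window.Fps exposes RealTime and FrameTime. }
uses SysUtils, Classes, CastleWindow, CastleControls, CastleLog;

type
  TFpsReporter = class(TComponent)
    procedure TimerEvent(Sender: TObject);
  end;

procedure TFpsReporter.TimerEvent(Sender: TObject);
begin
  WritelnLog('FPS', Format('real: %.1f, frame: %.1f',
    [Window.Fps.RealTime, Window.Fps.FrameTime]));
end;

procedure InstallFpsTimer;
var
  Reporter: TFpsReporter;
  Timer: TCastleTimer;
begin
  Reporter := TFpsReporter.Create(Application);
  Timer := TCastleTimer.Create(Application);
  Timer.IntervalSeconds := 1.0; // the values are per-second averages anyway
  Timer.OnTimer := @Reporter.TimerEvent;
  Window.Controls.InsertFront(Timer);
end;
```

Updating once per second is enough, since the FPS properties are averaged over the last second.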
There are two FPS values available: frame time and real time. Frame time is usually the larger one. Larger is better, of course: it means that you have smoother animation.
Use "real time" to measure your overall game speed. This is the actual number of frames per second that we managed to render. Caveats:
Make sure to turn off the "limit FPS" feature, to get the maximum numbers. Use the view3dscene "Preferences -> Frames Per Second" menu item, or (in your own programs) change the LimitFPS global variable (if you use the CastleControl unit with Lazarus) or Application.LimitFPS (if you use the CastleWindow unit). Set them to zero to disable the "limit FPS" feature.
Make sure to have an animation that constantly updates your screen. Otherwise we will not refresh the screen (there is no point redrawing the same thing), and the "real time" will drop to almost zero when you look at a static scene. Since engine version 6.0.0, TCastleWindow.AutoRedisplay is true by default, so the screen is refreshed continuously out of the box.
Note that the monitor will actually drop frames above its refresh frequency, like 80. This may cause you to observe that, above some limit, FPS are easier to gain by optimizations, which may lead you to a false judgement about which optimizations are more useful than others. To make a valuable judgement about what is faster or slower, always compare two versions of your program where only the relevant thing changed, and nothing else.
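Disabling the FPS limit before measuring, as described above, is a one-liner (a sketch; pick the variant matching the unit you use):

```pascal
{ Disable the "limit FPS" feature before measuring performance. }

// With the CastleWindow unit:
Application.LimitFPS := 0; // 0 disables the limit

// With the CastleControl unit (Lazarus):
// LimitFPS := 0;
```

Remember to restore the limit in release builds, to avoid needlessly burning CPU/GPU time.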
"Frame time" measures how much frames we
would get, if we ignore the time spent outside
It's often useful to compare it with "real
LimitFPS feature turned off),
as it may then tell you whether the
bottleneck is in rendering or outside of rendering (like collision
detection and creature AI). Caveats:
Modern GPUs work in parallel to the CPU. So "how much time the CPU spent in OnRender" doesn't necessarily relate to "how much time the GPU spent performing your drawing commands". Making your CPU busy with something else (like collisions, or waiting) makes your "frame time" lower, while in fact the rendering time is the same; you're just not clogging your GPU. Which is a good thing, actually, if your game can spend this time on something useful like collisions. Just don't overestimate it: you didn't make rendering faster, you merely managed to do useful work in the meantime.
For example: if you set LimitFPS to a small value, you may observe that the "frame time" grows higher. Why? Because when the CPU is idle (which is often, if LimitFPS is small), the GPU has free time to finish rendering the previous frame. So the GPU does the work for free, outside of the OnRender time, when your CPU is busy with something else. On the other hand, when the CPU constantly works on producing new frames, you have to wait inside OnRender until the previous frame finishes.
In other words, improvements to "frame time" must be taken with a
grain of salt. We spend less time in
OnRender event: this does not
necessarily mean that we really render faster.
Still, often "frame time" does reflect the speed of GPU rendering.
If you turn off LimitFPS and compare the "frame time" with the "real time", you can see how much time was spent outside OnRender. Usually, the "frame time" will be close to the "real time". If the gap is large, it may mean that you have a bottleneck in non-rendering code (like collision detection and creature AI).
First of all, watch the number of vertexes and faces of the models you load. Use view3dscene menu item Help -> Scene Information for this.
Graphic effects dealing with dynamic and detailed lighting, like shadows or bump mapping, have a cost. So use them only if necessary. In case of static scenes, try to "bake" such lighting effects to regular textures (use e.g. Blender Bake functionality), instead of activating a costly runtime effect.
Both Lazarus and our build tool support the idea of "build modes". When you're in the middle of development and you're testing the game for bugs, use the debug mode, which adds a lot of run-time checks to your code. This allows you to get a clear and nice error when you e.g. access an invalid array index. If you use our build tool, just pass the --mode=debug command-line parameter to it.
Our vectors are also like arrays, so stuff like MyVector[2] := 123.0; is also checked (it's valid if MyVector is a 3D or 4D vector, invalid if it's a 2D vector).
Actually, this simple case is checked at compile-time with the new vector API
in Castle Game Engine 6.3,
but more convoluted cases are still checked at run-time.
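As a sketch of that run-time check (assuming the TVector2 / TVector3 types of the new vector API mentioned above; component indices start at 0):

```pascal
{ Vector component access is range-checked in debug mode.
  Assumes the new vector API types TVector2 and TVector3
  from the CastleVectors unit. }
uses CastleVectors;

var
  V2: TVector2;
  V3: TVector3;
begin
  V3[2] := 123.0; // OK: a 3D vector has components 0..2
  V2[2] := 123.0; // invalid: a 2D vector has only components 0..1;
                  // raises a range check error in debug mode
end.
```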
When you need maximum speed (when you build the "final" version for the player, or when you check / compare / profile the speed), always use the release mode.
The code runs much faster in release mode. The speed difference may be really noticeable. As of Castle Game Engine 6.3, the ray-tracer is 1.9 times slower in debug mode than in release mode. The speed differences of a typical game are usually not that drastic (since you don't spend 100% of your time calculating math expressions, unlike a ray-tracer), but significant differences are still expected, especially if you measure the performance of a particular calculation (not just looking at game FPS).
So in most cases it's really important that you measure the speed only of the release build of your game, and this is the version that you want to provide to your players.
If the player can see the geometry faces only from one side, then backface culling should be on. This is the default (X3D geometry nodes, like IndexedFaceSet, have the solid field equal to TRUE by default). It avoids useless drawing of the other side of the faces.
Optimize textures to increase the speed and lower GPU memory usage:
Reuse the same texture (and, if possible, the same Appearance) across many X3D shapes. This avoids texture switching when rendering, so the scene renders faster. When exporting from Spine, be sure to use atlases.
Use sprite sheets (like our TSprite class) instead of separate images (like the TGLVideo2D class). This again avoids texture switching when rendering, making the scene render faster. It also allows you to easily use any texture size (not necessarily a power of two) for the frame size, and still compress the whole sprite sheet, so it cooperates well with texture compression.
Do not use TextureProperties.anisotropicDegree if not needed. anisotropicDegree should only be set to values > 1 when it makes a visual difference in your case.
There are some
TCastleScene features that are usually turned on,
but in some special cases may be avoided:
Turn off ProcessEvents if the scene should remain static.
Do not add collision structures to Scene.Spatial if you don't need better collisions than versus the scene bounding box.
Do not include ssRendering in Scene.Spatial if the scene is always small on the screen, and so it's usually either completely visible or invisible. (ssRendering adds frustum culling per-shape.)
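Applied to a simple static background scene, these hints could look like this (a sketch, with the property names as in the engine's TCastleScene API):

```pascal
{ A static background scene: no X3D event processing, and no
  per-triangle collision structures (bounding-box collisions
  are enough for it). }
Scene.ProcessEvents := false;
Scene.Spatial := []; // no octrees, no ssRendering frustum culling
```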
Various techniques to optimize animations include:
If your model has animations but is often not visible (outside of the view frustum), then consider using Scene.AnimateOnlyWhenVisible := true.
If the model is small, and not updating its animations every frame will not be noticeable, then consider setting Scene.AnimateSkipTicks to something larger than 0 (try 1 or 2).
For some games, globally setting OptimizeExtensiveTransformations := true improves the speed. This works best when you animate multiple Transform nodes within every X3D scene, and some of these animated Transform nodes are children of other animated Transform nodes. A typical example is a skeletal animation, for example from Spine, with a non-trivial bone hierarchy, and with multiple bones changing position and rotation every frame.
You can use TCastlePrecalculatedAnimation to "bake" an animation driven by events into a series of static scenes. This makes sense if your animation is from Spine, or from X3D exported by software that understands X3D interpolation nodes. Note that there's no point doing this if your animation is from castle-anim-frames or MD3; these are already "baked". (Although this baking will become optional, not forced, in the future.) TODO: the API for "baking" should use TNodeInterpolator, not the deprecated TCastlePrecalculatedAnimation.
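The first two animation hints above can be sketched as follows (AnimateOnlyWhenVisible is mentioned above; AnimateSkipTicks is the assumed name of the "skip animation ticks" property, so verify it against your engine version):

```pascal
{ Cheapen animation updates for a small, often-invisible model. }
Scene.AnimateOnlyWhenVisible := true; // skip updates outside the view frustum
Scene.AnimateSkipTicks := 1;          // update the animation only every 2nd frame
```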
Watch out what you're changing in the X3D nodes. Most changes, in particular the ones that can be achieved by sending X3D events (these changes are "suggested by the X3D standard" to be optimized), are fast. But some changes are very slow and cause rebuilding of scene structures, e.g. reorganizing the X3D node hierarchy; avoid doing that during the game. To detect this, set LogSceneChanges := true and watch the log (see the manual chapter "Logging") for lines saying "ChangedAll": these are costly rebuilds, avoid them during the game!
Modern GPUs can "consume" a huge number of vertexes very fast, as long as they are provided to them in a single "batch" or "draw call".
In our engine, the "shape" is the unit of information we provide to GPU. It is simply a VRML/X3D shape. In most cases, it also corresponds to the 3D object you design in your 3D modeler, e.g. Blender 3D object in simple cases is exported to a single VRML/X3D shape (although it may be split into a couple of shapes if you use different materials/textures on it, as VRML/X3D is a little more limited (and also more GPU friendly)).
The general advice is to compromise:
Do not make too many too trivial shapes. Do not make millions of shapes with only a few vertexes — each shape will be provided in a separate VBO to OpenGL, which isn't very efficient.
Do not make too few shapes. Each shape is passed as a whole to OpenGL (splitting shape on the fly would cause unacceptable slowdown), and shapes may be culled using frustum culling or occlusion queries. By using only a few very large shapes, you make this culling worthless.
A rule of thumb is to keep your number of shapes in a scene between 100 and 1000. But that's really just a rule of thumb, different level designs will definitely have different considerations.
You can also look at the number of triangles in your shape. Only a few triangles for a shape is not optimal — we will waste resources by creating a lot of VBOs, each with only a few triangles (the engine cannot yet combine the shapes automatically). Instead, merge your shapes — to have hundreds or thousands of triangles in a single shape.
You usually do not need to create too many scene instances. To reduce memory usage, you can place the same scene (TCastleScene or TCastlePrecalculatedAnimation) instance many times within SceneManager.Items, usually wrapped in a different T3DTransform each time. The whole code is ready for such "multiple uses" of a single scene instance.
For an example of this approach, see the frogger3d game (in particular, its main unit game.pas). The game adds hundreds of 3D objects to
SceneManager.Items, but there are only three
TCastleScene instances (player, cylinder and level).
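A sketch of this pattern (the model URL is hypothetical; T3DTransform and SceneManager.Items are as described above, but check the exact calls against your engine version):

```pascal
{ Add the same TCastleScene instance many times, each wrapped in its
  own T3DTransform. Only one copy of the model is kept in memory. }
var
  Cylinder: TCastleScene;
  Transform: T3DTransform;
  I: Integer;
begin
  Cylinder := TCastleScene.Create(Application);
  Cylinder.Load(ApplicationData('cylinder.x3d')); // hypothetical file name
  for I := 0 to 99 do
  begin
    Transform := T3DTransform.Create(Application);
    Transform.Translation := Vector3(2.0 * I, 0, 0);
    Transform.Add(Cylinder); // the same scene instance, used many times
    SceneManager.Items.Add(Transform);
  end;
end;
```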
To improve the speed, you can often combine many TCastleScene instances into one. To do this, load your 3D models using Load3D, and then create a single new TX3DRootNode instance that will have many other nodes as children. That is, create one new TX3DRootNode to keep them all, and for each loaded model add its root node (wrapped in a TTransformNode) to that single TX3DRootNode. This allows you to load multiple 3D files into a single TCastleScene, which may make stuff faster: the octrees (used for collision routines and frustum culling) will work OK. Right now, we have an octree only inside each TCastleScene, so it's not optimal to have thousands of TCastleScene instances with collision detection.
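A sketch of such merging (the file names are hypothetical; whether Transform.Translation and FdChildren.Add are the right calls depends on your engine version):

```pascal
{ Merge several model files into a single TCastleScene. }
var
  Root, Model: TX3DRootNode;
  Transform: TTransformNode;
  Scene: TCastleScene;
begin
  Root := TX3DRootNode.Create;

  Model := Load3D(ApplicationData('tree.x3d')); // hypothetical file name
  Transform := TTransformNode.Create;
  Transform.Translation := Vector3(10, 0, 0);
  Transform.FdChildren.Add(Model);
  Root.FdChildren.Add(Transform);

  { ...wrap and add more models the same way... }

  Scene := TCastleScene.Create(Application);
  Scene.Load(Root, true); // Scene takes ownership of Root
end;
```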
We build an octree (looking at the exact triangles in your 3D model) for precise collision detection with a level. For other objects, we use bounding volumes like boxes and spheres. This means that the number of shapes doesn't matter much for collision speed. However, the number of triangles still matters for the level.
Use the X3D Collision node to easily mark unneeded shapes as non-collidable, or to provide a simpler "proxy" mesh to use for collisions with complicated objects. See the examples inside our demo VRML/X3D models.
It's really trivial in X3D, and we support it 100%. I just wish there was a way to easily set it from 3D modelers like Blender; hopefully we'll get a better X3D exporter one day. Until then, you can hack the X3D source, it's quite easy actually. And thanks to using the X3D Inline node, you can keep your auto-generated X3D content separated from hand-written X3D code; that's the reason for the xxx_final.x3dv and xxx.x3d pairs of files around the demo models.
To wrap something in simpler collisions in code, you can build an appropriate Collision node by code. See the available helper methods for constructing it.
You can adjust how the octree is created. You can set the octree parameters in the VRML/X3D file or by ObjectPascal code, although in practice the default values are usually really good.
Avoid any loading (from disk to normal memory, or from normal memory to GPU memory) once the game is running. Doing this during the game will inevitably cause a small stutter, which breaks the smoothness of the gameplay. Everything necessary should be loaded at the beginning, possibly while showing some "loading..." screen to the user. Use
TCastleScene.PrepareResources to load everything referenced by your scenes to GPU.
Enable some (or all) of the relevant logging flags to get extensive information in the log about all the loading that is happening.
Beware: This is usually a lot of information, so you probably don't want to see it always. Dumping this information to the log will often cause a tremendous slowdown during loading stage, so do not bother to measure your loading speed when any of these flags are turned on. Use these flags only to detect if something "fishy" is happening during the gameplay.
The engine allows you to easily define custom culling methods or use hardware occlusion query (see examples and docs). This may help a lot in large scenes (city or indoors).
We use alpha blending to render partially transparent shapes. Blending is used automatically if you have a texture with a smooth alpha channel, or if your Material transparency is less than 1.
Note: just because your texture has some alpha channel, it doesn't mean that we use blending. By default, the engine analyzes the alpha channel contents to determine whether it indicates alpha blending (smooth alpha channel), alpha testing (all alpha values are either "0" or "1"), or whether the texture is opaque (all alpha values equal "1"). You can always explicitly specify the texture alpha channel treatment using the alphaChannel field in X3D.
Rendering blending is a little costly, in a general case. The transparent shapes have to be sorted every frame. Hints to make it faster:
If possible, do not use many transparent shapes. This will keep the cost of sorting minimal.
If possible, turn off the sorting, using
Scene.Attributes.BlendingSort := bsNone. See TBlendingSort for the explanation of possible
BlendingSort values. Sorting is only necessary if you may see multiple partially-transparent shapes on the same screen pixel, otherwise sorting is a waste of time.
Sorting is also not necessary if you use some blending modes that make the order of rendering partially-transparent shapes irrelevant. For example, blending mode with
srcFactor = "src_alpha" and
destFactor = "one". You can use a
blendMode field in X3D to set a specific blending mode. Of course, it will look different, but maybe acceptably?
So, consider changing the blending mode and then turning off sorting.
Finally, consider whether you really need transparency by blending. Maybe you can get away with transparency by alpha testing? Alpha testing means that every pixel is either fully opaque or completely transparent, depending on the alpha value. It's much more efficient, as alpha-tested shapes can be rendered along with the normal, completely opaque shapes, and only the GPU cares about the actual "testing". There's no need for sorting. Also, alpha testing cooperates nicely with shadow maps.
Whether the alpha testing looks good depends on your use-case, on your textures.
To use alpha testing, you can set alphaChannel "TEST" in X3D.
You can use any FPC tool to profile your code, for memory and speed. There's a small document about it in the engine sources (TODO: it should be moved to our wiki at some point). See also the FPC wiki about profiling.
To detect memory leaks, we advise to regularly compile your code with the -gh option, which uses the heaptrc unit. There are many ways to do this; for example, you can add this to your fpc.cfg file (see the FPC documentation "Configuration file" to know where you can find it):
#IFDEF DEBUG
-gh
-gl
#ENDIF
Then all the programs compiled in debug mode (with castle-engine compile --mode=debug, or with an explicit FPC option -dDEBUG) will automatically check for memory leaks.
The end result is that at the program's exit, you will get a very useful report about the allocated and not freed memory blocks, with a stack trace to the allocation call. This allows to easily detect and fix memory leaks.
If everything is OK, the output looks like this:
Heap dump by heaptrc unit
12161 memory blocks allocated : 2290438/2327696
12161 memory blocks freed     : 2290438/2327696
0 unfreed memory blocks : 0
True heap size : 1212416
True free heap : 1212416
But when you have a memory leak, it tells you about it, and tells you where the relevant memory was allocated, like this:
Heap dump by heaptrc unit
4150 memory blocks allocated : 1114698/1119344
4099 memory blocks freed     : 1105240/1109808
51 unfreed memory blocks : 9458
True heap size : 851968
True free heap : 834400
Should be : 835904
Call trace for block $00007F9B14E42980 size 44
$0000000000402A83  line 162 of xxx.lpr
...
Note: when you exit with Halt, you will always have some memory leaks; that's unavoidable for now. You can ignore the "Heap dump by heaptrc unit" output in this case.
Note: in the future, we may add "-gh" automatically to the options used by the build tool in the debug mode, so programs compiled with castle-engine compile --mode=debug will automatically show this output.
We do not have any engine-specific tool to measure memory usage or detect memory problems, as there are plenty of them available with FPC+Lazarus already. To simply see the memory usage, just use the process monitor that comes with your OS. See also the relevant Lazarus units. You can use full-blown memory profilers like Valgrind's massif with FPC code (see the section "Profiling" above on this page about Valgrind).