Extensions introduced in Castle Game Engine related to navigation.
See also the documentation of supported nodes of the Navigation component and the X3D specification of the Navigation component.
Contents:

- Viewpoint.camera*Matrix (events)
- Viewpoint.fieldOfViewForceVertical
- KambiNavigationInfo.headBobbing* (fields)
- KambiNavigationInfo.headlightNode
- NavigationInfo.blendingSort
- KambiOctreeProperties, various octreeXxx fields
- KambiNavigationInfo.timeOriginAtLoad
- direction and up and gravityUp for PerspectiveCamera, OrthographicCamera and Viewpoint nodes
Viewpoint.camera*Matrix (events)

To every viewpoint node (this applies to all viewpoints usable in our engine, including all X3DViewpointNode descendants, like Viewpoint and OrthoViewpoint, and even to VRML 1.0 PerspectiveCamera and OrthographicCamera) we add output events that provide you with the current camera matrix. One use for such matrices is to route them to your GLSL shaders (as uniform variables), and use them inside the shaders to transform between world and camera space.
*Viewpoint {
  ... all normal *Viewpoint fields ...
  SFMatrix4f [out]     cameraMatrix
  SFMatrix4f [out]     cameraInverseMatrix
  SFMatrix3f [out]     cameraRotationMatrix
  SFMatrix3f [out]     cameraRotationInverseMatrix
  SFBool     [in,out]  cameraMatrixSendAlsoOnOffscreenRendering  FALSE
}
"cameraMatrix"
transforms from world-space (global 3D space
that we most often think within) to camera-space (aka eye-space;
when thinking within this space, you know then that the camera
position is at (0, 0, 0), looking along -Z, with up in +Y).
It takes care of both the camera position and orientation,
so it's 4x4 matrix.
"cameraInverseMatrix"
is simply the inverse of this matrix,
so it transforms from camera-space back to world-space.
"cameraRotationMatrix"
again
transforms from world-space to camera-space, but now it only takes
care of camera rotations, disregarding camera position. As such,
it fits within a 3x3 matrix (9 floats), so it's smaller than the full
cameraMatrix
(4x4, 16 floats).
"cameraRotationInverseMatrix"
is simply its inverse.
Ideal to transform directions
between world- and camera-space in shaders.
"cameraMatrixSendAlsoOnOffscreenRendering"
controls
when the four output events above are generated.
The default (FALSE
) behavior is that they are generated only
for the camera that corresponds to the actual viewpoint, that is: for the
camera settings used when rendering scene to the screen.
The value TRUE
causes the output matrix events to be generated
also for temporary camera settings used for off-screen rendering
(used when generating textures for GeneratedCubeMapTexture
,
GeneratedShadowMap
, RenderedTexture
). This is a little
dirty, as cameras used for off-screen rendering do not (usually) have
any relation to actual viewpoint (for example, for
GeneratedCubeMapTexture
, the camera is positioned in the middle of the shape using the cube map). But this can be useful: when you route these events straight to the shaders, what the shaders usually need is the "actual camera" matrices (not necessarily the current viewpoint camera matrices).
These events are usually generated only by the currently bound viewpoint node.
The only exception is when you use RenderedTexture
and set something in RenderedTexture.viewpoint
:
in this case, RenderedTexture.viewpoint
will generate appropriate
events (as long as you set cameraMatrixSendAlsoOnOffscreenRendering
to TRUE
). Conceptually, RenderedTexture.viewpoint
is temporarily bound (although it doesn't send isBound/bindTime events).
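For illustration, here is a minimal sketch (X3D classic encoding) of routing the camera matrix to a shader uniform. The node names and the "camera_effect.fs" shader file are hypothetical:

DEF MyViewpoint Viewpoint { position 0 0 10 }

Shape {
  appearance Appearance {
    material Material { }
    shaders DEF MyShader ComposedShader {
      language "GLSL"
      parts ShaderPart { type "FRAGMENT" url "camera_effect.fs" }
      # Declared field, updated by the ROUTE below;
      # visible in GLSL as: uniform mat4 cameraMatrix;
      inputOnly SFMatrix4f cameraMatrix
    }
  }
  geometry Box { }
}

# Send the world-to-camera matrix to the shader each frame.
ROUTE MyViewpoint.cameraMatrix TO MyShader.cameraMatrix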
Viewpoint.fieldOfViewForceVertical

Viewpoint {
  SFBool [in,out] fieldOfViewForceVertical FALSE
}
The standard Viewpoint.fieldOfView
by default specifies
a minimum field of view. It will either be the horizontal field of view,
or vertical field of view — depending on the current aspect ratio
(whether your window is taller or wider). Usually, this smart behavior is useful.
However, sometimes you really need to explicitly specify a vertical
field of view. In this case, you can set fieldOfViewForceVertical
to TRUE
. Now the Viewpoint.fieldOfView
is interpreted
differently: it's always a vertical field of view.
The horizontal field of view will always be adjusted to follow the aspect ratio.
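For example (a minimal sketch; the numbers are arbitrary):

Viewpoint {
  position 0 0 10
  fieldOfView 0.8             # now always the vertical angle, in radians
  fieldOfViewForceVertical TRUE
}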
KambiNavigationInfo.headBobbing* (fields)

"Head bobbing" is the effect of the camera moving slightly up
and down when you walk on the ground (when gravity works).
This simulates our normal human vision — we can't usually keep
our head at the exact same height above the ground when walking
or running :)
By default our engine does head bobbing (remember, only when gravity
works; that is when the navigation mode is WALK
).
This is common in FPS games.
Using the extensions below you can tune (or even turn off)
the head bobbing behavior. For this we add new fields to the
KambiNavigationInfo
node (introduced in the previous section,
can be simply used instead of the standard NavigationInfo
).
KambiNavigationInfo : NavigationInfo {
  ... all normal NavigationInfo fields, and KambiNavigationInfo fields documented previously ...
  SFFloat [in,out] headBobbing      0.02
  SFFloat [in,out] headBobbingTime  0.5
}
Intuitively, headBobbing
is the intensity of the whole effect
(0 = no head bobbing) and headBobbingTime
determines
the time of one step of a walking human.
The field headBobbing
multiplied by the avatar height specifies how far
the camera can move up and down. The avatar height is taken from
the standard NavigationInfo.avatarSize
(2nd array element).
Set this to exactly 0 to disable head bobbing.
This must always be < 1. For sensible effects, this should
be something rather close to 0, like 0.02.
(Developers: see also TWalkCamera.HeadBobbing property.)
The field headBobbingTime
determines how much time passes
to make full head bobbing sequence (camera swing up and then down back to original height).
(Developers: see also TWalkCamera.HeadBobbingTime property.)
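For example, a sketch that makes the head bobbing more subtle and slower (the exact values are arbitrary); setting headBobbing to 0 would disable the effect entirely:

KambiNavigationInfo {
  type "WALK"
  headBobbing 0.01       # half of the default 0.02 intensity
  headBobbingTime 0.75   # one step takes longer than the default 0.5 seconds
}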
KambiNavigationInfo.headlightNode
You can configure the appearance of the headlight using the headlightNode field of the KambiNavigationInfo node. KambiNavigationInfo is just a replacement of the standard NavigationInfo, adding some extensions specific to our engine.
KambiNavigationInfo : NavigationInfo {
  ... all KambiNavigationInfo fields so far ...
  SFNode [in,out] headlightNode NULL  # [X3DLightNode]
}
headlightNode
defines the type and properties of the
light following the avatar ("head light"). You can put any
valid X3D light node here. If you don't give anything here (but still
request the headlight by NavigationInfo.headlight = TRUE
,
which is the default) then the default DirectionalLight
will be used for the headlight.
Almost everything (with the exceptions listed below) works as usual for all the light sources. Changing colors and intensity obviously work. Changing the light type, including making it a spot light or a point light, also works.
Note that for nice spot headlights, you will usually want to
enable per-pixel lighting
on everything by View->Shaders->Enable For Everything.
Otherwise the ugliness of default fixed-function Gouraud shading
will be visible in case of spot lights (you will see how
the spot shape "crawls" on the triangles,
instead of staying in a nice circle).
So to see the spot light cone perfectly, and also to see
SpotLight.beamWidth
perfectly,
enable per-pixel shader lighting.
Note that instead of setting headlight to spot, you may also consider cheating: you can create a screen effect that simulates the headlight. See view3dscene "Screen Effects -> Headlight" for demo, and screen effects documentation for ways to create this yourself. This is an entirely different beast, more cheating but also potentially more efficient (for starters, you don't have to use per-pixel lighting on everything to make it nicely round).
Your specified "location"
of the light (if you put here PointLight
or SpotLight
) will be ignored.
Instead we will synchronize light location in each frame
to the player's location
(in world coordinates).
You can ROUTE your light's location to something, to see it changing.
Similarly, your specified "direction"
of the light
(if this is DirectionalLight
or SpotLight
)
will be ignored. Instead we will keep it synchronized
with the player's normalized direction
(in world coordinates). You can ROUTE this direction to see it changing.
The "global"
field doesn't matter.
Headlight always shines on everything, ignoring normal VRML/X3D
light scope rules.
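For example, a sketch of a spot headlight (the color and angle values are arbitrary):

KambiNavigationInfo {
  headlight TRUE
  headlightNode SpotLight {
    color 1 1 0.9
    intensity 1
    cutOffAngle 0.4
    beamWidth 0.3
    # Any location and direction set here would be ignored:
    # they are synchronized with the camera every frame.
  }
}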
History: we used to configure the headlight by a different, specialized node. It is still parsed but ignored in new versions:
KambiHeadLight : X3DChildNode {
  SFFloat [in,out] ambientIntensity  0      # [0.0, 1.0]
  SFVec3f [in,out] attenuation       1 0 0  # [0, infinity)
  SFColor [in,out] color             1 1 1  # [0, 1]
  SFFloat [in,out] intensity         1      # [0, 1]
  SFBool  [in,out] spot              FALSE
  SFFloat [in,out] spotDropOffRate   0
  SFFloat [in,out] spotCutOffAngle   π/4
}
NavigationInfo.blendingSort

NavigationInfo {
  ...
  SFString [in,out] blendingSort DEFAULT  # ["DEFAULT", "NONE", "2D", "3D"]
}
Values other than "DEFAULT" force a specific blending sort treatment when rendering, which is useful since some scenes require a particular sorting to be rendered sensibly. See TBlendingSort.
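For example, to force the sorting suitable for typical 2D (sprite) scenes:

NavigationInfo {
  blendingSort "2D"
}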
KambiOctreeProperties, various octreeXxx fields
Like most 3D engines, Castle Game Engine uses a smart tree structure to handle collision detection in arbitrary 3D worlds. The structure used in our engine is the octree, with a couple of special twists to handle dynamic scenes. See documentation chapter "octrees" for more explanation.
There are some limits that determine how fast the octree is constructed, how much memory it uses, and how fast it can answer collision queries. While our programs have sensible and tested defaults hard-coded, it may be useful (or just interesting for programmers) to test other limits; this is what this extension is for.
In all honesty, I (Michalis) do not expect this extension to be commonly used... It allows you to tweak an important, but internal, part of the engine. For most normal people, this extension will probably look like incomprehensible black magic. And that's OK, as the internal defaults used in our engine really suit (almost?) all practical uses.
If the above paragraph didn't scare you, and you want to know more about octrees in our engine: besides documentation chapter "octrees" you can also take a look at the source code and documentation of the TCastleSceneCore.Spatial property.
A new node:
KambiOctreeProperties : X3DNode {
  SFInt32 [] maxDepth      -1  # must be >= -1
  SFInt32 [] leafCapacity  -1  # must be >= -1
}
Limit -1
means to use the default value hard-coded in the program.
Other values force the generation of an octree with the given limit.
For educational purposes, you can make an experiment and try
maxDepth = 0: this forces a one-leaf tree, effectively
making octree searching work like normal linear searching.
You should then see a dramatic loss of speed on non-trivial models.
To affect the scene octrees you can place a KambiOctreeProperties node inside the KambiNavigationInfo node. For per-shape octrees, we add new fields to the Shape node:
KambiNavigationInfo : NavigationInfo {
  ... all KambiNavigationInfo fields so far ...
  SFNode [] octreeRendering          NULL  # only KambiOctreeProperties node
  SFNode [] octreeDynamicCollisions  NULL  # only KambiOctreeProperties node
  SFNode [] octreeVisibleTriangles   NULL  # only KambiOctreeProperties node
  SFNode [] octreeStaticCollisions   NULL  # only KambiOctreeProperties node
}
X3DShapeNode (e.g. Shape) {
  ... all normal X3DShapeNode fields ...
  SFNode [] octreeTriangles NULL  # only KambiOctreeProperties node
}
See the API documentation for classes TCastleSceneCore
and TShape
for a precise description of what each octree is.
In normal simulation of dynamic 3D scenes,
we use only octreeRendering
, octreeDynamicCollisions
and
Shape.octreeTriangles
octrees. Ray-tracers usually use
octreeVisibleTriangles
.
We will use scene octree properties from the first bound
NavigationInfo
node (see VRML/X3D specifications
about the rules for bindable nodes). If this node is not
KambiNavigationInfo
, or the appropriate octreeXxx field is NULL, or the appropriate field within KambiOctreeProperties is -1, then the default hard-coded limit will be used.
Currently, it's not perfectly specified what happens to octree limits
when you bind other [Kambi]NavigationInfo
nodes during the game.
With the current implementation, this will cause the limits to change, but they will actually be applied only when the octree is rebuilt, which may happen never, or only at some radical rebuild of the VRML graph by other events. So if you have multiple [Kambi]NavigationInfo nodes in your world, I advise specifying exactly the same octreeXxx field values in all of them.
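For example, a sketch that overrides the limits of the dynamic collisions octree, and forces a degenerate one-leaf triangle octree on one shape for testing (the limit values are arbitrary):

KambiNavigationInfo {
  octreeDynamicCollisions KambiOctreeProperties {
    maxDepth 10
    leafCapacity 64
  }
}

Shape {
  # maxDepth 0 forces a one-leaf tree, i.e. linear searching.
  octreeTriangles KambiOctreeProperties { maxDepth 0 }
  geometry Box { }
}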
KambiNavigationInfo.timeOriginAtLoad

By default, the VRML/X3D time origin is at 00:00:00 GMT January 1, 1970,
and SFTime
reflects real-world time (taken from your OS).
This is a somewhat broken idea in my opinion, unsuitable
for normal single-user games. So you can change this by using
KambiNavigationInfo
node:
KambiNavigationInfo : NavigationInfo {
  ... all normal NavigationInfo fields ...
  SFBool [] timeOriginAtLoad FALSE
}
The default value, FALSE
, means the standard VRML behavior.
When TRUE, the time origin for this VRML scene is considered to be 0.0 at the moment when the browser loads the file. For example, this means that you can
easily specify desired startTime
values for time-dependent nodes
(like MovieTexture
or TimeSensor
)
to start playing at load time, or a determined number of seconds
after the scene is loaded.
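For example, a sketch of a time sensor that fires exactly 5 seconds after the scene loads (the node name is hypothetical, and routing its output to an interpolator is omitted):

KambiNavigationInfo { timeOriginAtLoad TRUE }

DEF MyTimer TimeSensor {
  startTime 5.0        # 5 seconds after load, thanks to timeOriginAtLoad
  cycleInterval 2.0
}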
direction and up and gravityUp for PerspectiveCamera, OrthographicCamera and Viewpoint nodes

The standard VRML way of specifying camera orientation (look direction and up vector) is to use the orientation field, which says how to rotate the standard look direction vector (<0,0,-1>) and the standard up vector (<0,1,0>). While I agree that this way of specifying camera orientation has some advantages (e.g. we don't have the problem with the uncertainty "Is the look direction vector length meaningful?"), I think that it is very uncomfortable for humans.
Reasoning:

- It's hard to compute a proper orientation field by a human, without some calculator. When you set up your camera, you're thinking "In what direction does it look?" and "Where is my head?", i.e. you're thinking about the look and up vectors.

- Converting between orientation and the look and up vectors is trivial for computers but quite hard for humans without a calculator (especially if real-world values are involved, which usually don't look like "nice numbers"). This means that when I look at the source code of your VRML camera node and I see your orientation field, well, I still have no idea how your camera is oriented. I have to fire up some calculating program, or one of the programs that view VRML (like view3dscene). This is not some terrible disadvantage, but still it matters to me.

- orientation is written with respect to the standard look (<0,0,-1>) and up (<0,1,0>) vectors. So if I want to imagine the camera orientation in my head, I have to remember these standard vectors.
Also, the VRML 2.0 spec says that the gravity upward vector should be taken as the +Y vector transformed by whatever transformation is applied to the Viewpoint node. This causes similar problems, since e.g. to have the gravity upward vector in +Z you have to apply a rotation to your Viewpoint node.
So I decided to create new fields for the PerspectiveCamera, OrthographicCamera and Viewpoint nodes to allow an alternative way to specify the orientation:

PerspectiveCamera / OrthographicCamera / Viewpoint {
  ... all normal *Viewpoint fields ...
  MFVec3f [in,out] direction  []
  MFVec3f [in,out] up         []
  SFVec3f [in,out] gravityUp  0 1 0
}
If at least one vector in the direction field is specified, then this is taken as the camera look vector. Analogously, if at least one vector in the up field is specified, then this is taken as the camera up vector.
This means that if you specify some vectors for
direction
and up
then the value of the orientation field is ignored.
The direction and up fields should have either none or exactly one element.
As usual, direction
and up
vectors
can't be parallel and can't be zero.
They don't have to be orthogonal: the up vector will always be silently corrected to be orthogonal to direction. The lengths of these vectors are always ignored.
As for gravity: the VRML 2.0 spec says to take the standard +Y vector and transform it by whatever transformation was applied to the Viewpoint node. So we modify this to say: take the gravityUp vector and transform it by whatever transformation was applied to the Viewpoint node. Since the default value of the gravityUp vector is just +Y, things work 100% conforming to the VRML spec if you don't specify the gravityUp field.
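For example, a sketch of a viewpoint for a Z-up world (the coordinates are arbitrary):

Viewpoint {
  position 0 -10 2
  direction [ 0 1 0 ]   # look along +Y; orientation is now ignored
  up [ 0 0 1 ]          # head along +Z
  gravityUp 0 0 1       # gravity pulls along -Z
}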
In view3dscene, the "Print current camera node" command (key shortcut Ctrl+C) writes the camera node in both versions: one that uses the orientation field and transformations to get the gravity upward vector, and one that uses the direction and up and gravityUp fields.