OGRE 14.3
Object-Oriented Graphics Rendering Engine
Material scripts offer you the ability to define complex materials in a script which can be reused easily. Whilst you could set up all materials for a scene in code using the methods of the Material and TextureLayer classes, in practice it's a bit unwieldy. Instead you can store material definitions in text files which can then be loaded whenever required.
It’s important to realise that materials are not loaded completely by the parsing process: only the definition is loaded, no textures or other resources are loaded. This is because it is common to have a large library of materials, but only use a relatively small subset of them in any one scene. To load every material completely in every script would therefore cause unnecessary memory overhead. You can access a ’deferred load’ Material in the normal way (Ogre::MaterialManager::getSingleton().getByName()), but you must call the ’load’ method before trying to use it. Ogre does this for you when using the normal material assignment methods of entities etc.
To start with, we only consider fixed-function materials which don’t use vertex, geometry or fragment programs; these are covered later:
A material can be made up of many Techniques - a technique is one way of achieving the effect you are looking for. You can supply more than one technique in order to provide fallback approaches where a card does not have the ability to render the preferred technique, or where you wish to define lower level of detail versions of the material in order to conserve rendering power when objects are more distant.
Each technique can be made up of many Passes, that is a complete render of the object can be performed multiple times with different settings in order to produce composite effects. Ogre may also split the passes you have defined into many passes at runtime, if you define a pass which uses too many texture units for the card you are currently running on (note that it can only do this if you are not using a fragment program). Each pass has a number of top-level attributes such as ’ambient’ to set the amount & colour of the ambient light reflected by the material. Some of these options do not apply if you are using vertex programs, See Passes for more details.
Within each pass, there can be zero or many Texture Units in use. These define the texture to be used, and optionally some blending operations (which use multitexturing) and texture effects.
You can also reference vertex and fragment programs (or vertex and pixel shaders, if you want to use that terminology) in a pass with a given set of parameters. Programs themselves are declared in separate .program scripts (See GPU Program Scripts) and are used as described in Using GPU Programs in a Pass.
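Putting these elements together, a material script is a set of nested sections. A minimal sketch (the material and texture names are illustrative):

    material MyMaterial
    {
        technique
        {
            pass
            {
                ambient 0.5 0.5 0.5
                diffuse 1.0 1.0 1.0

                texture_unit
                {
                    texture mytexture.png
                }
            }
        }
    }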
The outermost section of a material definition does not have a lot of attributes of its own (most of the configurable parameters are within the child sections). However, it does have some, and here they are:
Sets the name of the LOD strategy to be used.
distance_sphere | LOD changes based on distance from the camera (calculated via the bounding sphere radius). |
distance_box | Behaves the same as 'distance_sphere' except that it uses the object’s bounding box to approximate the distance. |
pixel_count | Changes LOD levels based on an absolute estimate of the screen-space pixels occupied (internally approximated via the bounding radius). |
screen_ratio_pixel_count | Sets that absolute screen-space value in relation to the screen size (1.0 = object covering the complete screen, 0.5 = half the screen covered by the object, etc.). |

This attribute defines the values used to control the LOD transition for this material. By setting this attribute, you indicate that you want this material to alter the Technique that it uses based on some metric, such as the distance from the camera, or the approximate screen space coverage. The exact meaning of these values is determined by the option you select for lod_strategy - it is a list of distances for the distance_sphere strategy, and a list of pixel counts for the pixel_count strategy, for example. You must give it a list of values, in order from highest LOD value to lowest LOD value, each one indicating the point at which the material will switch to the next LOD. All materials automatically activate LOD index 0 for values less than the first entry, so you don't have to explicitly specify this. Additionally, if there is no technique that matches the active LOD index, a technique with a lower LOD index will be used instead. Therefore, it is important to always have at least one technique with LOD index 0.
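For instance, with the default distance_sphere strategy the values are distances in world units; this sketch matches the example described below:

    lod_values 300 600 1200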
The above example would cause the material to use the best Technique at lod_index 0 up to a distance of 300 world units, the best from lod_index 1 from 300 up to 600, lod_index 2 from 600 to 1200, and lod_index 3 from 1200 upwards.
This attribute controls whether objects using this material can have shadows cast upon them.
Whether or not an object receives a shadow is the combination of a number of factors, See Shadows for full details; however this allows you to make a material opt-out of receiving shadows if required. Note that transparent materials never receive shadows so this option only has an effect on solid materials.
This attribute controls whether transparent materials can cast certain kinds of shadow.
Whether or not an object casts a shadow is the combination of a number of factors, See Shadows for full details; however this allows you to make a transparent material cast shadows, when it would otherwise not. For example, when using texture shadows, transparent materials are normally not rendered into the shadow texture because they should not block light. This flag overrides that.
This attribute associates a texture alias with a texture name.
A "technique" section in your material script encapsulates a single method of rendering an object. The simplest of material definitions only contains a single technique, however since PC hardware varies quite greatly in its capabilities, you can only do this if you are sure that every card for which you intend to target your application will support the capabilities which your technique requires. In addition, it can be useful to define simpler ways to render a material if you wish to use material LOD, such that more distant objects use a simpler, less performance-hungry technique.
When a material is used for the first time, it is ’compiled’. That involves scanning the techniques which have been defined, and marking which of them are supportable using the current rendering API and graphics card. If no techniques are supportable, your material will render as blank white. The compilation examines a number of things, such as:
In a material script, techniques must be listed in order of preference, i.e. the earlier techniques are preferred over the later techniques. This normally means you will list your most advanced, most demanding techniques first in the script, and list fallbacks afterwards.
To help clearly identify what each technique is used for, the technique can be named, but this is optional. Techniques not named within the script will take on a name that is the technique index number. For example: the first technique in a material is index 0, so its name would be "0" if it was not given a name in the script. The technique name must be unique within the material, otherwise the final technique is the resulting merge of all techniques with the same name in the material. A warning message is posted in the Ogre.log if this occurs. Named techniques can help when inheriting a material and modifying an existing technique (See Script Inheritance).
Techniques have only a small number of attributes of their own:
Sets the ’scheme’ this Technique belongs to. Material schemes are used to control top-level switching from one set of techniques to another. For example, you might use this to define ’high’, ’medium’ and ’low’ complexity levels on materials to allow a user to pick a performance / quality ratio. Another possibility is that you have a fully HDR-enabled pipeline for top machines, rendering all objects using unclamped shaders, and a simpler pipeline for others; this can be implemented using schemes. The active scheme is typically controlled at a viewport level, and the active one defaults to ’Default’.
Sets the level-of-detail (LOD) index this Technique belongs to.
All techniques are automatically assigned to a LOD index, with the default being index 0, which corresponds to the highest LOD. Increasing indexes denote lower levels of detail. You can (and often will) assign more than one technique to the same LOD index; what this means is that OGRE will pick the best technique of the ones listed at the same LOD index. For readability, it is advised that you list your techniques in order of LOD, then in order of preference, although the latter is the only prerequisite (OGRE determines which one is ’best’ by which one is listed first). You must always have at least one Technique at lod_index 0. The distance at which a LOD level is applied is determined by the lod_values attribute of the containing material.
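A sketch of assigning techniques to LOD indexes (the material and texture names are illustrative):

    material ExampleLOD
    {
        lod_values 1000

        technique detailed
        {
            lod_index 0
            pass
            {
                texture_unit
                {
                    texture detailed.png
                }
            }
        }
        technique simple
        {
            lod_index 1
            pass
            {
                texture_unit
                {
                    texture simple.png
                }
            }
        }
    }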
Techniques also contain one or more Passes (and there must be at least one).
When using Texture-based Shadows you can specify an alternate material to use when rendering the object using this material into the shadow texture. This is like a more advanced version of using shadow_caster_vertex_program, however note that for the moment you are expected to render the shadow in one pass, i.e. only the first pass is respected.
When using Texture-based Shadows you can specify an alternate material to use when performing the receiver shadow pass. This is like a more advanced version of using shadow_receiver_vertex_program and shadow_receiver_fragment_program, however note that for the moment you are expected to render the shadow in one pass, i.e. only the first pass is respected.
Although Ogre does a good job of detecting the capabilities of graphics cards and setting the supportability of techniques from that, occasionally card-specific behaviour exists which is not necessarily detectable and you may want to ensure that your materials go down a particular path to either use or avoid that behaviour. This is what these rules are for - you can specify matching rules so that a technique will be considered supportable only on cards from a particular vendor, or which match a device name pattern, or will be considered supported only if they don’t fulfil such matches. The format of the rules is as follows:
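These rules are written with the gpu_vendor_rule and gpu_device_rule directives; a sketch of the format (the parameters are described in the table below):

    gpu_vendor_rule <include|exclude> <vendor_name>
    gpu_device_rule <include|exclude> <device_pattern>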
include | the technique will only be supported if one of the include rules is matched (if no include rules are provided, anything will pass). |
exclude | the technique is considered unsupported if any of the exclude rules are matched. |
vendor_name | values are as returned by Ogre::RenderSystemCapabilities::vendorToString |
device_pattern | can be any string, and you can use wildcards (’*’) if you need to match variants. |
You can provide as many rules as you like, although <vendor_name> and <device_pattern> must obviously be unique.
Here’s an example:
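(A sketch reconstructed to match the description that follows; the vendor strings are in the form returned by vendorToString.)

    gpu_vendor_rule include nvidia
    gpu_vendor_rule include intel
    gpu_device_rule exclude *950*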
These rules, if all included in one technique, will mean that the technique will only be considered supported on graphics cards made by NVIDIA and Intel, and so long as the device name doesn’t have ’950’ in it.
A pass is a single render of the geometry in question; a single call to the rendering API with a certain set of rendering properties. A technique can have between one and 16 passes, although clearly the more passes you use, the more expensive the technique will be to render.
To help clearly identify what each pass is used for, the pass can be named, but this is optional. Passes not named within the script will take on a name that is the pass index number. For example: the first pass in a technique is index 0, so its name would be "0" if it was not given a name in the script. The pass name must be unique within the technique, otherwise the final pass is the resulting merge of all passes with the same name in the technique. A warning message is posted in the Ogre.log if this occurs. Named passes can help when inheriting a material and modifying an existing pass (See Script Inheritance).
Passes have a set of global attributes (described below), zero or more nested texture_unit entries, and optionally references to GPU programs (See Using Vertex/Geometry/Fragment Programs in a Pass).
Here are the attributes you can use in a ’pass’ section of a .material script:
Sets the ambient colour reflectance properties of this pass.
This property determines how much ambient light (directionless global light) is reflected. The default is full white, meaning objects are completely globally illuminated. Reduce this if you want to see diffuse or specular light effects, or change the blend of colours to make the object have a base colour other than white.
It is also possible to make the ambient reflectance track the vertex colour as defined in the mesh instead of the colour values.
Sets the diffuse colour reflectance properties of this pass.
This property determines how much diffuse light (light from instances of the Light class in the scene) is reflected. The default is full white, meaning objects reflect the maximum white light they can from Light objects.
It is also possible to make the diffuse reflectance track the vertex colour as defined in the mesh instead of the colour values.
Sets the specular colour reflectance properties of this pass.
This property determines how much specular light (highlights from instances of the Light class in the scene) is reflected. The default is to reflect no specular light. The colour of the specular highlights is determined by the colour parameters, and the size of the highlights by the separate shininess parameter. It is also possible to make the specular reflectance track the vertex colour as defined in the mesh instead of the colour values.
The higher the value of the shininess parameter, the sharper the highlight, i.e. the radius is smaller. Beware of using shininess values in the range of 0 to 1 since this causes the specular colour to be applied to the whole surface that has the material applied to it. When the viewing angle to the surface changes, ugly flickering will also occur when shininess is in the range of 0 to 1. Shininess values between 1 and 128 work best in both DirectX and OpenGL renderers.
Sets the amount of self-illumination an object has.
Despite what the name suggests, this object doesn’t act as a light source for other objects in the scene (if you want it to, you have to create a light which is centered on the object).
If an object is self-illuminating, it does not need external sources to light it, ambient or otherwise. It's like the object has its own personal ambient light. This property is rarely useful since you can already specify per-pass ambient light, but is here for completeness.
It is also possible to make the emissive reflectance track the vertex colour as defined in the mesh instead of the colour values.
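A sketch of these colour properties combined in a single pass (the values are illustrative):

    pass
    {
        ambient  0.3 0.3 0.3
        diffuse  0.8 0.2 0.2
        specular 1.0 1.0 1.0 32
        emissive 0.0 0.0 0.0
    }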
Sets the kind of blending this pass has with the existing contents of the scene.
Whereas the texture blending operations seen in the texture_unit entries are concerned with blending between texture layers, this blending is about combining the output of this pass as a whole with the existing contents of the rendering target. This blending therefore allows object transparency and other special effects.
There are 2 formats, one using predefined blend types, the other allowing a roll-your-own approach using source and destination factors.
This is the simpler form, where the most commonly used blending modes are enumerated using a single parameter.
blend_type | One of ’add’, ’modulate’, ’colour_blend’ or ’alpha_blend’. ’add’ adds the colour of the rendering output to the scene (good for explosions, flares, lights and ghosts), ’modulate’ multiplies the rendering output with the scene contents (good for smoked glass and semi-transparent objects), ’colour_blend’ blends based on the brightness of the output colours without darkening the scene, and ’alpha_blend’ uses the alpha value of the rendering output as a mask. |
This version of the method allows complete control over the blending operation, by specifying the source and destination blending factors.
By default the operation is Ogre::SBO_ADD, which creates this equation
$$final = (passOutput * sourceFactor) + (frameBuffer * destFactor)$$
Each of the factors is specified as one of Ogre::SceneBlendFactor.
By setting a different Ogre::SceneBlendOperation you can achieve a different effect.
sourceFactor | The source factor in the above calculation, i.e. multiplied by the output of the Pass. |
destFactor | The destination factor in the above calculation, i.e. multiplied by the Frame Buffer contents. |
Valid values for both parameters are one of Ogre::SceneBlendFactor without the SBF_ prefix. E.g. SBF_DEST_COLOUR becomes dest_colour.
Also see separate_scene_blend.
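For instance, standard alpha blending can be expressed in either format; these two lines are a sketch of equivalent settings:

    scene_blend alpha_blend
    scene_blend src_alpha one_minus_src_alpha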
This option operates in exactly the same way as scene_blend, except that it allows you to specify the operations to perform between the rendered pixel and the frame buffer separately for colour and alpha components. By nature this option is only useful when rendering to targets which have an alpha channel which you’ll use for later processing, such as a render texture.
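For instance, the example referred to below (a sketch):

    separate_scene_blend add modulate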
This example would add colour components but multiply alpha components. The blend modes available are as in scene_blend. The more advanced form is also available:
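A sketch of the advanced form (the factors are illustrative):

    separate_scene_blend one one_minus_dest_alpha one one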
Again the options available in the second format are the same as those in the second format of scene_blend.
This directive changes the operation which is applied between the two components of the scene blending equation
op | The blending operation mode to use for this pass You may change this to ’add’, ’subtract’, ’reverse_subtract’, ’min’ or ’max’. |
This directive is as scene_blend_op, except that you can set the operation for colour and alpha separately.
Sets whether or not this pass renders with depth-buffer checking on or not.
If depth-buffer checking is on, whenever a pixel is about to be written to the frame buffer the depth buffer is checked to see if the pixel is in front of all other pixels written at that point. If not, the pixel is not written.
If depth checking is off, pixels are written no matter what has been rendered before. Also see setDepthFunction for more advanced depth check configuration.
Default: depth_check on
Sets whether or not this pass renders with depth-buffer writing on or not.
If depth-buffer writing is on, whenever a pixel is written to the frame buffer the depth buffer is updated with the depth value of that new pixel, thus affecting future rendering operations if future pixels are behind this one.
If depth writing is off, pixels are written without updating the depth buffer. Depth writing should normally be on but can be turned off when rendering static backgrounds or when rendering a collection of transparent objects at the end of a scene so that they overlap each other correctly.
Sets the function used to compare depth values when depth checking is on.
If depth checking is enabled (see setDepthCheckEnabled) a comparison occurs between the depth value of the pixel to be written and the current contents of the buffer. This comparison is normally Ogre::CMPF_LESS_EQUAL.
func | one of Ogre::CompareFunction without the CMPF_ prefix. E.g. CMPF_LESS_EQUAL becomes less_equal . |
Sets the bias applied to the depth value of this pass.
When polygons are coplanar, you can get problems with 'depth fighting' where the pixels from the two polys compete for the same screen pixel. This is particularly a problem for decals (polys attached to another surface to represent details such as bulletholes etc.).
A way to combat this problem is to use a depth bias to adjust the depth buffer value used for the decal such that it is slightly higher than the true value, ensuring that the decal appears on top. There are two aspects to the biasing, a constant bias value and a slope-relative biasing value, which varies according to the maximum depth slope relative to the camera, ie:
$$finalBias = maxSlope * slopeScaleBias + constantBias$$
Slope scale biasing is relative to the angle of the polygon to the camera, which makes for a more appropriate bias value, but this is ignored on some older hardware. Constant biasing is expressed as a factor of the minimum depth value, so a value of 1 will nudge the depth by one ’notch’ if you will.
constantBias | The constant bias value |
slopeScaleBias | The slope-relative bias value |
Also see iteration_depth_bias
Sets an additional bias derived from the number of times a given pass has been iterated. Operates just like depth_bias except that it applies an additional bias factor to the base depth_bias value, multiplying the provided value by the number of times this pass has been iterated before, through one of the iteration variants. So the first time the pass will get the depth_bias value, the second time it will get depth_bias + iteration_depth_bias, the third time it will get depth_bias + iteration_depth_bias * 2, and so on. The default is zero.
Sets the way the pass will use alpha to totally reject pixels from the pipeline.
The function parameter can be any of the options listed in the material depth_function attribute. The value parameter can theoretically be any value between 0 and 255, but is best limited to 0 or 128 for hardware compatibility.
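For example, a sketch rejecting every pixel whose alpha is below 128:

    alpha_rejection greater_equal 128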
Sets whether this pass will use ’alpha to coverage’.
Alpha to coverage performs multisampling on the edges of alpha-rejected textures to produce a smoother result. It is only supported when multisampling is already enabled on the render target, and when the hardware supports alpha to coverage (see RenderSystemCapabilities). The common use for alpha to coverage is foliage rendering and chain-link fence style textures.
Sets whether when rendering this pass, rendering will be limited to a screen-space scissor rectangle representing the coverage of the light(s) being used in this pass.
In order to cut down on fillrate when you have a number of fixed-range lights in the scene, you can enable this option to request that during rendering, only the region of the screen which is covered by the lights is rendered. This region is the screen-space rectangle covering the union of the spheres making up the light ranges. Directional lights are ignored for this.
This is only likely to be useful for multipass additive lighting algorithms, where the scene has already been 'seeded' with an ambient pass and this pass is just adding light in affected areas.
When using Ogre::SHADOWTYPE_STENCIL_ADDITIVE or Ogre::SHADOWTYPE_TEXTURE_ADDITIVE, this option is implicitly used for all per-light passes and does not need to be specified. If you are not using shadows or are using a modulative or Integrated Texture Shadows then this could be useful.
Sets whether when rendering this pass, triangle setup will be limited to clipping volume covered by the light.
This option will only function if there is a single non-directional light being used in this pass. If there is more than one light, or only directional lights, then no clipping will occur. If there are no lights at all then the objects won’t be rendered at all.
In order to cut down on the geometry set up to render this pass when you have a single fixed-range light being rendered through it, you can enable this option to request that during triangle setup, clip planes are defined to bound the range of the light. In the case of a point light these planes form a cube, and in the case of a spotlight they form a pyramid. Directional lights are never clipped.
This option is only likely to be useful for multipass additive lighting algorithms, where the scene has already been 'seeded' with an ambient pass and this pass is just adding light in affected areas. In addition, it will only be honoured if there is exactly one non-directional light being used in this pass. Also, these clip planes override any user clip planes set on Camera.
When using Ogre::SHADOWTYPE_STENCIL_ADDITIVE or Ogre::SHADOWTYPE_TEXTURE_ADDITIVE, this option is automatically used for all per-light passes if you enable Ogre::SceneManager::setShadowUseLightClipPlanes and does not need to be specified. It is disabled by default since clip planes have a cost of their own which may not always exceed the benefits they give you. Generally the smaller your lights are the more chance you’ll see a benefit rather than a penalty from clipping.
When using an additive lighting mode (Ogre::SHADOWTYPE_STENCIL_ADDITIVE or Ogre::SHADOWTYPE_TEXTURE_ADDITIVE), the scene is rendered in 3 discrete stages, ambient (or pre-lighting), per-light (once per light, with shadowing) and decal (or post-lighting). Usually OGRE figures out how to categorise your passes automatically, but there are some effects you cannot achieve without manually controlling the illumination. For example specular effects are muted by the typical sequence because all textures are saved until the Ogre::IS_DECAL stage which mutes the specular effect. Instead, you could do texturing within the per-light stage if it's possible for your material and thus add the specular on after the decal texturing, and have no post-light rendering.
If you assign an illumination stage to a pass you have to assign it to all passes in the technique otherwise it will be ignored. Also note that whilst you can have more than one pass in each group, they cannot alternate, ie all ambient passes will be before all per-light passes, which will also be before all decal passes. Within their categories the passes will retain their ordering though.
Sets if transparent textures should be sorted by depth or not.
By default all transparent materials are sorted such that renderables furthest away from the camera are rendered first. This is usually the desired behaviour, but in certain cases this depth sorting may be unnecessary and undesirable, for example if it is necessary to ensure the rendering order does not change from one frame to the next. In this case you could set the value to ’off’ to prevent sorting.
You can also use the keyword ’force’ to force transparent sorting on, regardless of other circumstances. Usually sorting is only used when the pass is also transparent, and has a depth write or read which indicates it cannot reliably render without sorting. By using ’force’, you tell OGRE to sort this pass no matter what other circumstances are present.
Sets the hardware culling mode for this pass.
A typical way for the rendering engine to cull triangles is based on the 'vertex winding' of triangles. Vertex winding refers to the direction in which the vertices are passed or indexed to in the rendering operation as viewed from the camera, and will either be clockwise or anticlockwise (that's 'counterclockwise' for you Americans out there ;) The default is Ogre::CULL_CLOCKWISE i.e. that only triangles whose vertices are passed/indexed in anticlockwise order are rendered - this is a common approach and is used in 3D studio models for example. You can alter this culling mode if you wish but it is not advised unless you know what you are doing.
You may wish to use the Ogre::CULL_NONE option for mesh data that you cull yourself where the vertex winding is uncertain or for creating 2-sided passes.
Sets the software culling mode for this pass.
In some situations you want to use manual culling of triangles rather than sending the triangles to the hardware and letting it cull them. This setting only takes effect on SceneManagers that use it (since it is best used on large groups of planar world geometry rather than on movable geometry, where it would be expensive), but if used it can cull geometry before it is sent to the hardware.
In this case the culling is based on whether the ’back’ or ’front’ of the triangle is facing the camera - this definition is based on the face normal (a vector which sticks out of the front side of the polygon perpendicular to the face). Since Ogre expects face normals to be on anticlockwise side of the face, Ogre::MANUAL_CULL_BACK is the software equivalent of Ogre::CULL_CLOCKWISE setting, which is why they are both the default. The naming is different to reflect the way the culling is done though, since most of the time face normals are pre-calculated and they don’t have to be the way Ogre expects - you could set Ogre::CULL_NONE and completely cull in software based on your own face normals, if you have the right SceneManager which uses them.
Sets whether or not dynamic lighting is turned on for this pass or not.
Turning dynamic lighting off makes any ambient, diffuse, specular, emissive and shading properties for this pass redundant. If lighting is turned off, all objects rendered using the pass will be fully lit. When lighting is turned on, objects are lit according to their vertex normals for diffuse and specular light, and globally for ambient and emissive.
Sets the kind of shading which should be used for representing dynamic lighting for this pass.
When dynamic lighting is turned on, the effect is to generate colour values at each vertex. Whether these values are interpolated across the face (and how) depends on this setting. The default shading method is Ogre::SO_GOURAUD.
mode | one of Ogre::ShadeOptions without the SO_ prefix. E.g. SO_FLAT becomes flat . |
Sets how polygons should be rasterised, i.e. whether they should be filled in, or just drawn as lines or points. The default shading method is Ogre::PM_SOLID.
mode | one of Ogre::PolygonMode without the PM_ prefix. E.g. PM_SOLID becomes solid . |
Sets whether or not the polygon_mode set on this pass can be downgraded by the camera
override | If set to false, this pass will always be rendered at its own chosen polygon mode no matter what the camera says. The default is true. |
Tells the pass whether it should override the scene fog settings, and enforce its own. Very useful for things that you don’t want to be affected by fog when the rest of the scene is fogged, or vice versa.
If you specify ’true’ for the first parameter and you supply the rest of the parameters, you are telling the pass to use these fog settings in preference to the scene settings, whatever they might be. If you specify ’true’ but provide no further parameters, you are telling this pass to never use fogging no matter what the scene says.
type | none = No fog, equivalent of just using ’fog_override true’ linear = Linear fog from the <start> and <end> distances exp = Fog increases exponentially from the camera (fog = 1/e^(distance * density)), use <density> param to control it exp2 = Fog increases at the square of FOG_EXP, i.e. even quicker (fog = 1/e^(distance * density)^2), use <density> param to control it |
colour | Sequence of 3 floating point values from 0 to 1 indicating the red, green and blue intensities |
density | The density parameter used in the ’exp’ or ’exp2’ fog types. Not used in linear mode but param must still be there as a placeholder |
start | The start distance from the camera of linear fog. Must still be present in other modes, even though it is not used. |
end | The end distance from the camera of linear fog. Must still be present in other modes, even though it is not used. |
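A sketch overriding the scene settings with white exponential fog (the values are illustrative):

    fog_override true exp 1 1 1 0.002 100 10000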
Sets whether this pass renders with colour writing on or not. Alternatively, it can also be used to enable/disable colour writing specific channels. In the second format, the parameters are in the red, green, blue, alpha order.
If colour writing is off no visible pixels are written to the screen during this pass. You might think this is useless, but if you render with colour writing off, and with very minimal other settings, you can use this pass to initialise the depth buffer before subsequently rendering other passes which fill in the colour data. This can give you significant performance boosts on some newer cards, especially when using complex fragment programs, because if the depth check fails then the fragment program is never run.
Sets the first light which will be considered for use with this pass.
Normally the lights passed to a pass will start from the beginning of the light list for this object. This option allows you to make this pass start from a higher light index, for example if one of your earlier passes could deal with lights 0-3, and this pass dealt with lights 4+. This option also has an interaction with pass iteration, in that if you choose to iterate this pass per light too, the iteration will only begin from light 4.
Sets the maximum number of lights which will be considered for use with this pass.
The maximum number of lights which can be used when rendering fixed-function materials is set by the rendering system, and is typically set at 8. When you are using the programmable pipeline (See Using Vertex/Geometry/Fragment Programs in a Pass) this limit is dependent on the program you are running, or, if you use ’iteration once_per_light’ or a variant (See iteration), it is effectively only bounded by the number of passes you are willing to use. If you are not using pass iteration, the light limit applies once for this pass. If you are using pass iteration, the light limit applies across all iterations of this pass - for example if you have 12 lights in range with an ’iteration once_per_light’ setup but your max_lights is set to 4 for that pass, the pass will only iterate 4 times.
Sets whether or not this pass is iterated, i.e. issued more than once.
iteration once | The pass is only executed once, which is the default behaviour. |
iteration once_per_light point | The pass is executed once for each point light. |
iteration 5 | The render state for the pass will be set up and then the draw call will execute 5 times. |
iteration 5 per_light point | The render state for the pass will be set up and then the draw call will execute 5 times. This will be done for each point light. |
iteration 1 per_n_lights 2 point | The render state for the pass will be set up and the draw call executed once for every 2 point lights. |
By default, passes are only issued once. However, if you use the programmable pipeline, or you wish to exceed the normal limits on the number of lights which are supported, you might want to use the once_per_light option. In this case, only light index 0 is ever used, and the pass is issued multiple times, each time with a different light in light index 0. Clearly this will make the pass more expensive, but it may be the only way to achieve certain effects such as per-pixel lighting effects which take into account 1..n lights.
Using a number instead of "once" instructs the pass to iterate more than once after the render state is setup. The render state is not changed after the initial setup so repeated draw calls are very fast and ideal for passes using programmable shaders that must iterate more than once with the same render state i.e. shaders that do fur, motion blur, special filtering.
If you use once_per_light, you should also add an ambient pass to the technique before this pass, otherwise when no lights are in range of this object it will not get rendered at all; this is important even when you have no ambient light in the scene, because you would still want the objects silhouette to appear.
The lightType parameter to the attribute only applies if you use once_per_light, per_light, or per_n_lights and restricts the pass to being run for lights of a single type (either ’point’, ’directional’ or ’spot’). In the example, the pass will be run once per point light. This can be useful because when you’re writing a vertex / fragment program it is a lot easier if you can assume the kind of lights you’ll be dealing with. However at least point and directional lights can be dealt with in one way. Default: iteration once
Example: Simple Fur shader material script that uses a second pass with 10 iterations to grow the fur:
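(A condensed sketch rather than the full original example; the texture and program names are hypothetical placeholders for resources defined elsewhere.)

    material Fur
    {
        technique
        {
            // base coat of the object
            pass base_coat
            {
                texture_unit
                {
                    texture fur_base.png
                }
            }
            // second pass iterated 10 times to grow the fur shells
            pass grow_fur
            {
                scene_blend alpha_blend
                iteration 10
                vertex_program_ref FurVP        // hypothetical program defined in a .program script
                {
                }
                fragment_program_ref FurFP      // hypothetical program defined in a .program script
                {
                }
            }
        }
    }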
Use the auto parameters pass_number and pass_iteration_number to tell the vertex, geometry or fragment program the pass number and iteration number.

This setting allows you to change the size of points when rendering a point list, or a list of point sprites. The interpretation of this command depends on the Ogre::Pass::setPointAttenuation option - if it is off (the default), the point size is in screen pixels, if it is on, it is expressed as normalised screen coordinates (1.0 is the height of the screen) when the point is at the origin.
This setting specifies whether or not hardware point sprite rendering is enabled for this pass. Enabling it means that a point list is rendered as a list of quads rather than a list of dots. It is very useful to use this option if you are using a BillboardSet and only need to use point oriented billboards which are all of the same size. You can also use it for any other point list render.
Defines whether point size is attenuated with view space distance, and in what fashion.
When performing point rendering or point sprite rendering, point size can be attenuated with distance. The equation for doing this is:
\[ S_a = V_h \cdot S \cdot \frac{1}{\sqrt{constant + linear \cdot d + quadratic \cdot d^2}} \]
Where \(S_a\) is the attenuated point size, \(S\) is the point size set on this pass, \(V_h\) is the viewport height (since attenuated sizes are expressed in normalised screen coordinates), and \(d\) is the distance of the point from the camera.
For example, to disable distance attenuation (constant screensize) you would set constant to 1, and linear and quadratic to 0. A standard perspective attenuation would be 0, 1, 0 respectively.
The resulting size is clamped to the minimum and maximum point size.
enabled | Whether point attenuation is enabled |
constant,linear,quadratic | Parameters to the attenuation function defined above |
Sets the minimum point size after attenuation (point_size_attenuation). For details on the size metrics, See point_size.
Sets the maximum point size after attenuation (point_size_attenuation). For details on the size metrics, See point_size. A value of 0 means the maximum is set to the same as the max size reported by the current card.
This property determines what width is used to render lines.
Here are the attributes you can use in a texture_unit
section of a .material script:
You can also use nested sections in order to use special add-ins: texture_source as a source of texture data (see External Texture Sources for details), and rtshader_system for additional layer blending options (see Runtime Shader Generation for details).

Sets the alias name for this texture unit.
Sets the name of the static texture image this layer will use.
This setting is mutually exclusive with the anim_texture attribute. Note that the texture file cannot include spaces. Those of you Windows users who like spaces in filenames, please get over it and use underscores instead.
type | specify the type of texture to create - the default is ’2d’, but you can override this; here’s the full list: 1d (a one-dimensional texture, addressed by a single texture coordinate), 2d (the default, addressed by 2D texture coordinates), 3d (a volume texture, addressed by 3D texture coordinates) and cubic (a cube texture made up of 6 faces, addressed by 3D texture coordinates). |
numMipMaps | specify the number of mipmaps to generate for this texture. The default is ’unlimited’ which means mips down to 1x1 size are generated. You can specify a fixed number (even 0) if you like instead. Note that if you use the same texture in many material scripts, the number of mipmaps generated will conform to the number specified in the first texture_unit used to load the texture - so be consistent with your usage. |
PixelFormat | specify the desired pixel format of the texture to create, which may be different to the pixel format of the texture file being loaded. Bear in mind that the final pixel format will be constrained by hardware capabilities so you may not get exactly what you ask for. Names defined in Ogre::PixelFormat are valid values. |
gamma | informs the renderer that you want the graphics hardware to perform gamma correction on the texture values as they are sampled for rendering. This is only applicable for textures which have 8-bit colour channels (e.g. PF_R8G8B8). Often, 8-bit per channel textures will be stored in gamma space in order to increase the precision of the darker colours but this can throw out blending and filtering calculations since they assume linear space colour values. For the best quality shading, you may want to enable gamma correction so that the hardware converts the texture values to linear space for you automatically when sampling the texture, then the calculations in the pipeline can be done in a reliable linear colour space. When rendering to a final 8-bit per channel display, you’ll also want to convert back to gamma space which can be done in your shader (by raising to the power 1/2.2) or you can enable gamma correction on the texture being rendered to or the render window. Note that the ’gamma’ option on textures is applied on loading the texture so must be specified consistently if you use this texture in multiple places. |
Sets the images to be used in an animated texture layer. There are 2 formats, one for implicitly determined image names, one for explicitly named images.
Animated textures are just a series of images making up the frames of the animation. All the images must be the same size, and their names must have a frame number appended before the extension, e.g. if you specify a name of "flame.jpg" with 3 frames, the image names must be "flame_0.jpg", "flame_1.jpg" and "flame_2.jpg".
You can change the active frame on a texture layer by calling the Ogre::TextureUnitState::setCurrentFrame method.
name | The base name of the textures to use e.g. flame.jpg for frames flame_0.jpg, flame_1.jpg etc. |
numFrames | The number of frames in the sequence. |
duration | The length of time it takes to display the whole animation sequence, in seconds. If 0, no automatic transition occurs. |
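For example, a sketch of both formats describing a 2.5 second, 5 frame animation (the flame images are illustrative):

    anim_texture flame.jpg 5 2.5
    anim_texture flame_0.jpg flame_1.jpg flame_2.jpg flame_3.jpg flame_4.jpg 2.5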
This sets up the same duration animation but from 5 separately named image files. The first format is more concise, but the second is provided if you cannot make your images conform to the naming standard required for it.
Sets the images used in a cubic texture, i.e. one made up of 6 individual images making up the faces of a cube, or 1 cube texture if supported by the texture format (DDS for example) and render system. These kinds of textures are used for reflection maps (if hardware supports cubic reflection maps) or skyboxes. There are 2 formats, a brief format expecting image names of a particular format and a more flexible but longer format for arbitrarily named textures.

Note: the brief format is deprecated; use 'texture <basename> cubic' instead.

In this case each face is specified explicitly, in case you don’t want to conform to the image naming standards above.
In both cases the final parameter means the following:
Tells this texture unit where it should get its content from. The default is to get texture content from a named texture, as defined with the texture, cubic_texture, anim_texture attributes. However you can also pull texture information from other automated sources.
type | The source of the content: ’named’ (the default, deriving texture content from a named texture as described above), ’shadow’ (pulls in a shadow texture, only valid when texture shadows are in use), or ’compositor’ (references a texture rendered by a compositor; see the extra parameters below). |
Only valid when content type is compositor.
compositorName | The name of the compositor to reference. |
textureName | The name of the texture to reference. |
mrtIndex | The index of the wanted texture, if referencing an MRT. |
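For example, a sketch referencing a texture rendered by a compositor (the compositor and texture names are illustrative):

    content_type compositor DepthCompositor OutputTexture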
Sets which texture coordinate set is to be used for this texture layer.
A mesh can define multiple sets of texture coordinates, this sets which one this material uses.
Determines how the colour of this texture layer is combined with the one below it (or the lighting effect on the geometry if this is the first layer).
This method is the simplest way to blend texture layers, because it requires only one parameter, gives you the most common blending types, and automatically sets up 2 blending methods: one for if single-pass multitexturing hardware is available, and another for if it is not and the blending must be achieved through multiple rendering passes. It is, however, quite limited and does not expose the more flexible multitexturing operations, simply because these can't be automatically supported in multipass fallback mode. If you want to use the fancier options, use Ogre::TextureUnitState::setColourOperationEx, but you'll either have to be sure that enough multitexturing units will be available, or you should explicitly set a fallback using Ogre::TextureUnitState::setColourOpMultipassFallback.
op | One of the Ogre::LayerBlendOperation enumerated blending types. Without the `LBO_` prefix. E.g. `LBO_REPLACE` becomes `replace`. |
This is an extended version of the Ogre::TextureUnitState::setColourOperation method which allows extremely detailed control over the blending applied between this and earlier layers. See the Warning below about the issues between multipass and multitexturing that using this method can create.
Texture colour operations determine how the final colour of the surface appears when rendered. Texture units are used to combine colour values from various sources (ie. the diffuse colour of the surface from lighting calculations, combined with the colour of the texture). This method allows you to specify the 'operation' to be used, ie. the calculation such as adds or multiplies, and which values to use as arguments, such as a fixed value or a value from a previous calculation.
The defaults for each layer are equivalent to ’colour_op_ex modulate src_texture src_current’,
ie. each layer takes the colour results of the previous layer, and multiplies them with the new texture being applied. Bear in mind that colours are RGB values from 0.0 - 1.0 so multiplying them together will result in values in the same range, 'tinted' by the multiply. Note however that a straight multiply normally has the effect of darkening the textures - for this reason there are brightening operations like Ogre::LBX_MODULATE_X2. See the Ogre::LayerBlendOperation and Ogre::LayerBlendSource enumerated types for full details.
The final 3 parameters are only required if you decide to pass values manually into the operation, i.e. you want one or more of the inputs to the colour calculation to come from a fixed value that you supply. Hence you only need to fill these in if you supply Ogre::LBS_MANUAL to the corresponding source, or use the Ogre::LBX_BLEND_MANUAL operation.
op | The operation to be used, e.g. modulate (multiply), add, subtract. |
source1 | The source of the first colour to the operation e.g. texture colour. |
source2 | The source of the second colour to the operation e.g. current surface colour. |
arg1 | Manually supplied colour value (only required if source1 = LBS_MANUAL). |
arg2 | Manually supplied colour value (only required if source2 = LBS_MANUAL). |
manualBlend | Manually supplied 'blend' value - only required for operations which require manual blend e.g. LBX_BLEND_MANUAL. |
Each parameter can be one of Ogre::LayerBlendOperationEx or Ogre::LayerBlendSource without the prefix. E.g. LBX_MODULATE_X4 becomes modulate_x4.
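For example, a sketch that doubles the result of modulating the texture with the current surface colour, together with a plain-modulate multipass fallback (see colour_op_multipass_fallback below):

    colour_op_ex modulate_x2 src_texture src_current
    colour_op_multipass_fallback dest_colour zero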
Sets the multipass fallback operation for this layer, if you used colour_op_ex and not enough multitexturing hardware is available.
Because some effects exposed using Ogre::TextureUnitState::setColourOperationEx are only supported under multitexturing hardware, if the hardware is lacking the system must fallback on multipass rendering, which unfortunately doesn't support as many effects. This method is for you to specify the fallback operation which most suits you.
You'll notice that the interface is the same as the Ogre::Material::setSceneBlending method; this is because multipass rendering IS effectively scene blending, since each layer is rendered on top of the last using the same mechanism as making an object transparent, it's just being rendered in the same place repeatedly to get the multitexture effect.
If you use the simpler (and hence less flexible) Ogre::TextureUnitState::setColourOperation method you don't need to call this as the system sets up the fallback for you.
This works in exactly the same way as setColourOperationEx, except that the effect is applied to the level of alpha (i.e. transparency) of the texture rather than its colour. When the alpha of a texel (a pixel on a texture) is 1.0, it is opaque, whereas it is fully transparent if the alpha is 0.0. Please refer to the Ogre::TextureUnitState::setColourOperationEx method for more info.
op | The operation to be used, e.g. modulate (multiply), add, subtract |
source1 | The source of the first alpha value to the operation e.g. texture alpha |
source2 | The source of the second alpha value to the operation e.g. current surface alpha |
arg1 | Manually supplied alpha value (only required if source1 = Ogre::LBS_MANUAL) |
arg2 | Manually supplied alpha value (only required if source2 = Ogre::LBS_MANUAL) |
manualBlend | Manually supplied 'blend' value - only required for operations which require manual blend e.g. Ogre::LBX_BLEND_MANUAL |
Turns on/off texture coordinate generation for addressing an environment map.
Environment maps make an object look reflective by using automatic texture coordinate generation depending on the relationship between the object's vertices or normals and the eye.
2D texture coordinates using spherical reflection mapping based on vertex normals. Requires a single texture which is either a fish-eye lens view of the reflected scene, or some other texture which looks good as a spherical map (a texture of glossy highlights is popular especially in car sims). This effect is based on the relationship between the eye direction and the vertex normals of the object, so works best when there are a lot of gradually changing normals, i.e. curved objects.
The effect is based on the position of the vertices in the viewport rather than vertex normals. This is useful for planar geometry (where a spherical env_map would not look good because the normals are all the same) or objects without normals.
Note: this option is equivalent to spherical on all backends.

3D texture coordinates using the reflection vector. Uses a group of 6 textures making up the inside of a cube, each of which is a view of the scene down each axis. Works extremely well in all cases but has a higher technical requirement from the card than spherical mapping. Requires that you bind a cubic texture to this unit.
3D texture coordinates using the normal vector. Generates 3D texture coordinates containing the camera space normal vector from the normal information held in the vertex data. Again, use of this feature requires a cubic texture.
Sets the translation offset of the texture, ie scrolls the texture.
This method sets the translation element of the texture transformation, and is easier to use than setTextureTransform if you are combining translation, scaling and rotation in your texture transformation. If you want to animate these values use Ogre::TextureUnitState::setScrollAnimation
u | The amount the texture should be moved horizontally (u direction). |
v | The amount the texture should be moved vertically (v direction). |
Sets up an animated scroll for the texture layer.
Useful for creating constant scrolling effects on a texture layer (for varying scrolls, see Ogre::TextureUnitState::setTransformAnimation).
uSpeed | The number of horizontal loops per second (+ve=moving right, -ve = moving left). |
vSpeed | The number of vertical loops per second (+ve=moving up, -ve= moving down). |
Sets the anticlockwise rotation factor applied to texture coordinates.
This sets a fixed rotation angle - if you wish to animate this, use Ogre::TextureUnitState::setRotateAnimation
angle | The angle of rotation (anticlockwise). |
Sets up an animated texture rotation for this layer.
Useful for constant rotations (for varying rotations, see Ogre::TextureUnitState::setTransformAnimation).
speed | The number of complete anticlockwise revolutions per second (use -ve for clockwise) |
Sets the scaling factor applied to texture coordinates.
This method sets the scale element of the texture transformation, and is easier to use than setTextureTransform if you are combining translation, scaling and rotation in your texture transformation.
If you want to animate these values use Ogre::TextureUnitState::setTransformAnimation
uScale | The value by which the texture is to be scaled horizontally. |
vScale | The value by which the texture is to be scaled vertically. |
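A sketch of a texture unit combining a fixed scale with constant scroll and rotation animations (the texture name and values are illustrative):

    texture_unit
    {
        texture water.png
        scale 2 2
        scroll_anim 0.1 0.0
        rotate_anim 0.05
    }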
Sets up a general time-relative texture modification effect.
This can be called multiple times for different values of ttype, but only the latest effect applies if called multiple time for the same ttype.
ttype | The type of transform, either translate (scroll), scale (stretch) or rotate (spin). |
waveType | The shape of the wave, see Ogre::WaveformType enum for details. |
base | The base value for the function (range of output = {base, base + amplitude}). |
frequency | The speed of the wave in cycles per second. |
phase | The offset of the start of the wave, e.g. 0.5 to start half-way through the wave. |
amplitude | Scales the output so that instead of lying within 0..1 it lies within 0..1*amplitude for exaggerated effects. |
ttype is one of:

scroll_x | Animate the u scroll value |
scroll_y | Animate the v scroll value |
rotate | Animate the rotate value |
scale_x | Animate the u scale value |
scale_y | Animate the v scale value |
waveType is one of Ogre::WaveformType without the WFT_ prefix. E.g. WFT_SQUARE becomes square.
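For example, a sketch making the horizontal scale oscillate between 1.0 and 1.5 once every 5 seconds:

    wave_xform scale_x sine 1.0 0.2 0.0 0.5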
This attribute allows you to specify a static 4x4 transformation matrix for the texture unit, thus replacing the individual scroll, rotate and scale attributes mentioned above.
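A sketch of the expected format, listing the 16 matrix values in row-major order:

    transform m00 m01 m02 m03 m10 m11 m12 m13 m20 m21 m22 m23 m30 m31 m32 m33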
The indexes of the 4x4 matrix value above are expressed as m<row><col>.
By default all texture units use a shared default Sampler object. This parameter allows you to explicitly set a different one.
Enables Unordered Access to the provided mipLevel of the texture.
Samplers allow you to quickly change the settings for all associated Textures. Typically you have many Textures but only a few sampling states in your application.
Defines what happens when texture coordinates exceed 1.0 for this texture layer. You can use the simple format to specify the addressing mode for all 3 potential texture coordinates at once, or you can use the 2/3 parameter extended format to specify a different mode per texture coordinate.
Valid values for both are one of Ogre::TextureAddressingMode without the TAM_ prefix. E.g. TAM_WRAP becomes wrap.
Sets the border colour of border texture address mode (see tex_address_mode).
Sets the type of texture filtering used when magnifying or minifying a texture. There are 2 formats to this attribute, the simple format where you simply specify the name of a predefined set of filtering options, and the complex format, where you individually set the minification, magnification, and mip filters yourself.
With this format, you only need to provide a single parameter
none | No filtering or mipmapping is used. Equal to: min=Ogre::FO_POINT, mag=Ogre::FO_POINT, mip=Ogre::FO_NONE |
bilinear | 2x2 box filtering is performed when magnifying or reducing a texture, and a mipmap is picked from the list but no filtering is done between the levels of the mipmaps. Equal to: min=Ogre::FO_LINEAR, mag=Ogre::FO_LINEAR, mip=Ogre::FO_POINT |
trilinear | 2x2 box filtering is performed when magnifying and reducing a texture, and the closest 2 mipmaps are filtered together. Equal to: min=Ogre::FO_LINEAR, mag=Ogre::FO_LINEAR, mip=Ogre::FO_LINEAR |
anisotropic | This is the same as ’trilinear’, except the filtering algorithm takes account of the slope of the triangle in relation to the camera rather than simply doing a 2x2 pixel filter in all cases. Equal to: min=Ogre::FO_ANISOTROPIC, mag=Ogre::FO_ANISOTROPIC, mip=Ogre::FO_LINEAR |
This format gives you complete control over the minification, magnification, and mip filters.
Each parameter can be one of Ogre::FilterOptions without the FO_ prefix. E.g. FO_LINEAR becomes linear.
minFilter | The filtering to use when reducing the size of the texture. Can be Ogre::FO_POINT, Ogre::FO_LINEAR or Ogre::FO_ANISOTROPIC. |
magFilter | The filtering to use when increasing the size of the texture. Can be Ogre::FO_POINT, Ogre::FO_LINEAR or Ogre::FO_ANISOTROPIC. |
mipFilter | The filtering to use between mip levels. Can be Ogre::FO_NONE (turns off mipmapping), Ogre::FO_POINT or Ogre::FO_LINEAR (trilinear filtering). |
Sets the anisotropy level to be used for this texture level.
The degree of anisotropy is the ratio between the height of the texture segment visible in a screen space region versus the width - so for example a floor plane, which stretches on into the distance and thus the vertical texture coordinates change much faster than the horizontal ones, has a higher anisotropy than a wall which is facing you head on (which has an anisotropy of 1 if your line of sight is perfectly perpendicular to it). The maximum value is determined by the hardware, but it is usually 8 or 16.
In order for this to be used, you have to set the minification and/or the magnification option on this texture to Ogre::FO_ANISOTROPIC.
maxAniso | The maximal anisotropy level, should be between 2 and the maximum supported by hardware (1 is the default, ie. no anisotropy). |
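For example, a sketch enabling anisotropic filtering with a maximum anisotropy of 8:

    filtering anisotropic
    max_anisotropy 8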
You can alter the mipmap calculation by biasing the result with a single floating point value. After the mip level has been calculated, this bias value is added to the result to give the final mip level. Lower mip levels are larger (higher detail), so a negative bias will force the larger mip levels to be used, and a positive bias will cause smaller mip levels to be used. The bias values are in mip levels, so a -1 bias will force mip levels one larger than by the default calculation.
In order for this option to be used, your hardware has to support mipmap biasing (exposed through Ogre::RSC_MIPMAP_LOD_BIAS), and your minification filtering has to be set to point or linear.
Enables or disables the comparison test for depth textures.
When enabled, sampling the texture returns how the sampled value compares against a reference value instead of the sampled value itself. Combined with linear filtering this can be used to implement hardware PCF for shadow maps.
The comparison func to use when compare_test is enabled.
func | one of Ogre::CompareFunction without the CMPF_ prefix. E.g. CMPF_LESS_EQUAL becomes less_equal . |
Within a pass section of a material script, you can reference a vertex, geometry, tessellation, compute, and / or a fragment program which has been defined in GPU Program Scripts. The programs are defined separately from the usage of them in the pass, since the programs are very likely to be reused between many separate materials, probably across many different .material scripts, so this approach lets you define the program only once and use it many times.
As well as naming the program in question, you can also provide parameters to it. Here’s a simple example:
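(A sketch reconstructing the snippet described below; the program name and parameter values are illustrative.)

    vertex_program_ref myVertexProgram
    {
        param_indexed_auto 0 worldviewproj_matrix
        param_indexed 4 float4 10.0 0 0 0
    }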
In this example, we bind a vertex program called ’myVertexProgram’ (which will be defined elsewhere) to the pass, and give it 2 parameters, one is an ’auto’ parameter, meaning we do not have to supply a value as such, just a recognised code (in this case it’s the world/view/projection matrix which is kept up to date automatically by Ogre). The second parameter is a manually specified parameter, a 4-element float. The indexes are described later.
The syntax of the link to a vertex program and a fragment or geometry program is identical; the only difference is that fragment_program_ref and geometry_program_ref are used respectively instead of vertex_program_ref. For tessellation shaders, use tessellation_hull_program_ref and tessellation_domain_program_ref to link to the hull tessellation program and the domain tessellation program respectively. Compute shader programs can be linked with compute_program_ref.
For many situations vertex, geometry and fragment programs are associated with each other in a pass but this is not cast in stone. You could have a vertex program that can be used by several different fragment programs.
Moreover, older APIs permit the use of both fixed pipeline and programmable pipeline (shaders) simultaneously. Specifically, the OpenGL compatibility profile and Direct3D SM2.x allow this. You can utilize the vertex fixed function pipeline and just provide a fragment_program_ref in a pass, with no vertex program reference included. The fragment program must comply with the specified requirements of the related API in order to access the outputs of the vertex fixed pipeline. Alternatively, you can employ a vertex program that directly feeds into the fragment fixed function pipeline. Most of Ogre's render systems do not support the Fixed Function pipeline. In that case, if you supply a vertex shader, you will need to supply a fragment shader as well.
If a new technique or pass needs to be added to a copied material then use a unique name for the technique or pass that does not exist in the parent material. Using an index for the name that is one greater than the last index in the parent will do the same thing. The new technique/pass will be added to the end of the techniques/passes copied from the parent material.
A specific texture unit state (TUS) can be given a unique name within a pass of a material so that it can be identified later in cloned materials that need to override specified texture unit states in the pass without declaring previous texture units. Using a unique name for a Texture unit in a pass of a cloned material adds a new texture unit at the end of the texture unit list for the pass.