OGRE 1.11.6
Object-Oriented Graphics Rendering Engine

Material Scripts
Material scripts offer you the ability to define complex materials in a script which can be reused easily. Whilst you could set up all materials for a scene in code using the methods of the Material and TextureUnitState classes, in practice it's a bit unwieldy. Instead you can store material definitions in text files which can then be loaded whenever required.
It’s important to realise that materials are not loaded completely by the parsing process: only the definition is loaded, no textures or other resources are loaded. This is because it is common to have a large library of materials, but only use a relatively small subset of them in any one scene. To load every material completely in every script would therefore cause unnecessary memory overhead. You can access a ’deferred load’ Material in the normal way (Ogre::MaterialManager::getSingleton().getByName()), but you must call the ’load’ method before trying to use it. Ogre does this for you when using the normal material assignment methods of entities etc.
To start with, we only consider fixed-function materials which don’t use vertex, geometry or fragment programs; these are covered later:
A material can be made up of many Techniques - a technique is one way of achieving the effect you are looking for. You can supply more than one technique in order to provide fallback approaches where a card does not have the ability to render the preferred technique, or where you wish to define lower level of detail versions of the material in order to conserve rendering power when objects are more distant.
Each technique can be made up of many Passes, that is a complete render of the object can be performed multiple times with different settings in order to produce composite effects. Ogre may also split the passes you have defined into many passes at runtime, if you define a pass which uses too many texture units for the card you are currently running on (note that it can only do this if you are not using a fragment program). Each pass has a number of top-level attributes such as ’ambient’ to set the amount & colour of the ambient light reflected by the material. Some of these options do not apply if you are using vertex programs, See Passes for more details.
Within each pass, there can be zero or many Texture Units in use. These define the texture to be used, and optionally some blending operations (which use multitexturing) and texture effects.
You can also reference vertex and fragment programs (or vertex and pixel shaders, if you want to use that terminology) in a pass with a given set of parameters. Programs themselves are declared in separate .program scripts (See Declaring GPU Programs) and are used as described in Using GPU Programs in a Pass.
The outermost section of a material definition does not have a lot of attributes of its own (most of the configurable parameters are within the child sections). However, it does have some, and here they are:
Sets the name of the LOD strategy to use. Defaults to ’Distance’ which means LOD changes based on distance from the camera. Also supported is ’PixelCount’ which changes LOD based on an estimate of the screen-space pixels affected.
This attribute defines the values used to control the LOD transition for this material. By setting this attribute, you indicate that you want this material to alter the Technique that it uses based on some metric, such as the distance from the camera, or the approximate screen space coverage. The exact meaning of these values is determined by the option you select for lod_strategy - it is a list of distances for the ’Distance’ strategy, and a list of pixel counts for the ’PixelCount’ strategy, for example. You must give it a list of values, in order from highest LOD value to lowest LOD value, each one indicating the point at which the material will switch to the next LOD. Implicitly, all materials activate LOD index 0 for values less than the first entry, so you do not have to specify ’0’ at the start of the list. You must ensure that there is at least one Technique with a lod_index value for each value in the list (so if you specify 3 values, you must have techniques for LOD indexes 0, 1, 2 and 3). Note you must always have at least one Technique at lod_index 0.
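Format: lod_values <value0> <value1> <value2> ...

For example, with the default ’Distance’ strategy:

    lod_values 300 600 1200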
The above example would cause the material to use the best Technique at lod_index 0 up to a distance of 300 world units, the best from lod_index 1 from 300 up to 600, lod_index 2 from 600 to 1200, and lod_index 3 from 1200 upwards.
This attribute controls whether objects using this material can have shadows cast upon them.
Whether or not an object receives a shadow is the combination of a number of factors, See Shadows for full details; however this allows you to make a material opt-out of receiving shadows if required. Note that transparent materials never receive shadows so this option only has an effect on solid materials.
This attribute controls whether transparent materials can cast certain kinds of shadow.
Whether or not an object casts a shadow is the combination of a number of factors, See Shadows for full details; however this allows you to make a transparent material cast shadows, when it would otherwise not. For example, when using texture shadows, transparent materials are normally not rendered into the shadow texture because they should not block light. This flag overrides that.
This attribute associates a texture alias with a texture name.
This attribute can be used to set the textures used in texture unit states that were inherited from another material (see Texture Aliases).
A "technique" section in your material script encapsulates a single method of rendering an object. The simplest of material definitions only contains a single technique, however since PC hardware varies quite greatly in it’s capabilities, you can only do this if you are sure that every card for which you intend to target your application will support the capabilities which your technique requires. In addition, it can be useful to define simpler ways to render a material if you wish to use material LOD, such that more distant objects use a simpler, less performance-hungry technique.
When a material is used for the first time, it is ’compiled’. That involves scanning the techniques which have been defined, and marking which of them are supportable using the current rendering API and graphics card. If no techniques are supportable, your material will render as blank white. The compilation examines a number of things, such as the number of texture_unit entries in each pass (and whether they exceed the number of texture units on the current hardware), whether the vertex, geometry or fragment programs used by each technique are supported, and any vendor or device rules (see below).
In a material script, techniques must be listed in order of preference, i.e. the earlier techniques are preferred over the later techniques. This normally means you will list your most advanced, most demanding techniques first in the script, and list fallbacks afterwards.
To help clearly identify what each technique is used for, the technique can be named, though this is optional. Techniques not named within the script will take on a name that is the technique index number; for example, the first technique in a material is index 0, so its name would be "0" if it was not given a name in the script. The technique name must be unique within the material, or else the final technique is the resulting merge of all techniques with the same name in the material. A warning message is posted in the Ogre.log if this occurs. Named techniques can help when inheriting a material and modifying an existing technique (see Script Inheritance).
Techniques have only a small number of attributes of their own:
Sets the ’scheme’ this Technique belongs to. Material schemes are used to control top-level switching from one set of techniques to another. For example, you might use this to define ’high’, ’medium’ and ’low’ complexity levels on materials to allow a user to pick a performance / quality ratio. Another possibility is that you have a fully HDR-enabled pipeline for top machines, rendering all objects using unclamped shaders, and a simpler pipeline for others; this can be implemented using schemes. The active scheme is typically controlled at a viewport level, and the active one defaults to ’Default’.
Sets the level-of-detail (LOD) index this Technique belongs to.
All techniques must belong to a LOD index; by default they all belong to index 0, i.e. the highest LOD. Increasing indexes denote lower levels of detail. You can (and often will) assign more than one technique to the same LOD index, which means that OGRE will pick the best supported technique of the ones listed at the same LOD index. For readability, it is advised that you list your techniques in order of LOD, then in order of preference, although the latter is the only prerequisite (OGRE determines which one is ’best’ by which one is listed first). You must always have at least one Technique at lod_index 0. The distance at which a LOD level is applied is determined by the lod_values attribute of the containing material, see lod_values for details.
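For illustration, a minimal sketch of a material with two named techniques at different LOD indexes (the material and technique names are hypothetical, and the empty passes would normally contain real settings):

    material Examples/HypotheticalLOD
    {
        lod_values 600

        // used when the LOD value is below 600
        technique HighDetail
        {
            lod_index 0
            pass
            {
            }
        }

        // used from 600 upwards
        technique LowDetail
        {
            lod_index 1
            pass
            {
            }
        }
    }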
Techniques also contain one or more Passes (and there must be at least one).
When using Texture-based Shadows you can specify an alternate material to use when rendering the object using this material into the shadow texture. This is like a more advanced version of using shadow_caster_vertex_program, however note that for the moment you are expected to render the shadow in one pass, i.e. only the first pass is respected.
When using Texture-based Shadows you can specify an alternate material to use when performing the receiver shadow pass. Note that this explicit ’receiver’ pass is only done when you’re not using Integrated Texture Shadows - i.e. the shadow rendering is done separately (either as a modulative pass, or a masked light pass). This is like a more advanced version of using shadow_receiver_vertex_program and shadow_receiver_fragment_program, however note that for the moment you are expected to render the shadow in one pass, i.e. only the first pass is respected.
Although Ogre does a good job of detecting the capabilities of graphics cards and setting the supportability of techniques from that, occasionally card-specific behaviour exists which is not necessarily detectable and you may want to ensure that your materials go down a particular path to either use or avoid that behaviour. This is what these rules are for - you can specify matching rules so that a technique will be considered supportable only on cards from a particular vendor, or which match a device name pattern, or will be considered supported only if they don’t fulfil such matches. The format of the rules is as follows:
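Format: gpu_vendor_rule <include|exclude> <vendor_name>
Format: gpu_device_rule <include|exclude> <device_pattern> [case_sensitive]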
An ’include’ rule means that the technique will only be supported if one of the include rules is matched (if no include rules are provided, anything will pass). An ’exclude’ rule means that the technique is considered unsupported if any of the exclude rules are matched. You can provide as many rules as you like, although <vendor_name> and <device_pattern> must obviously be unique. The valid list of <vendor_name> values is currently ’nvidia’, ’ati’, ’intel’, ’s3’, ’matrox’ and ’3dlabs’. <device_pattern> can be any string, and you can use wildcards (’*’) if you need to match variants. Here’s an example:
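    gpu_vendor_rule include nvidia
    gpu_vendor_rule include intel
    gpu_device_rule exclude *950*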
These rules, if all included in one technique, will mean that the technique will only be considered supported on graphics cards made by NVIDIA and Intel, and so long as the device name doesn’t have ’950’ in it.
Note that these rules can only mark a technique ’unsupported’ when it would otherwise be considered ’supported’ judging by the hardware capabilities. Even if a technique passes these rules, it is still subject to the usual hardware support tests.
A pass is a single render of the geometry in question; a single call to the rendering API with a certain set of rendering properties. A technique can have between one and 16 passes, although clearly the more passes you use, the more expensive the technique will be to render.
To help clearly identify what each pass is used for, the pass can be named, though this is optional. Passes not named within the script will take on a name that is the pass index number; for example, the first pass in a technique is index 0, so its name would be "0" if it was not given a name in the script. The pass name must be unique within the technique, or else the final pass is the resulting merge of all passes with the same name in the technique. A warning message is posted in the Ogre.log if this occurs. Named passes can help when inheriting a material and modifying an existing pass (see Script Inheritance).
Passes have a set of global attributes (described below), zero or more nested texture_unit entries (see Texture Units), and optionally references to vertex, geometry and fragment programs (see Using Vertex/Geometry/Fragment Programs in a Pass).
Here are the attributes you can use in a ’pass’ section of a .material script:
Sets the ambient colour reflectance properties of this pass.
This property determines how much ambient light (directionless global light) is reflected. The default is full white, meaning objects are completely globally illuminated. Reduce this if you want to see diffuse or specular light effects, or change the blend of colours to make the object have a base colour other than white.
It is also possible to make the ambient reflectance track the vertex colour as defined in the mesh instead of the colour values.
Sets the diffuse colour reflectance properties of this pass.
This property determines how much diffuse light (light from instances of the Light class in the scene) is reflected. The default is full white, meaning objects reflect the maximum white light they can from Light objects.
It is also possible to make the diffuse reflectance track the vertex colour as defined in the mesh instead of the colour values.
Sets the specular colour reflectance properties of this pass.
This property determines how much specular light (highlights from instances of the Light class in the scene) is reflected. The default is to reflect no specular light. The colour of the specular highlights is determined by the colour parameters, and the size of the highlights by the separate shininess parameter. It is also possible to make the specular reflectance track the vertex colour as defined in the mesh instead of the colour values.
The higher the value of the shininess parameter, the sharper the highlight, i.e. the radius is smaller. Beware of using shininess values in the range of 0 to 1 since this causes the specular colour to be applied to the whole surface that has the material applied to it. When the viewing angle to the surface changes, ugly flickering will also occur when shininess is in the range of 0 to 1. Shininess values between 1 and 128 work best in both DirectX and OpenGL renderers.
Sets the amount of self-illumination an object has.
Despite what the name suggests, this object doesn’t act as a light source for other objects in the scene (if you want it to, you have to create a light which is centered on the object). If an object is self-illuminating, it does not need external sources to light it, ambient or otherwise. It’s like the object has its own personal ambient light. This property is rarely useful since you can already specify per-pass ambient light, but is here for completeness.
It is also possible to make the emissive reflectance track the vertex colour as defined in the mesh instead of the colour values.
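Taken together, a sketch of a pass using these colour properties (the values are purely illustrative; the fourth value on the specular line is the shininess parameter):

    pass
    {
        ambient 0.5 0.5 0.5
        diffuse 1.0 1.0 1.0
        specular 1.0 1.0 1.0 12.5
        emissive 0.0 0.0 0.0
    }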
Sets the kind of blending this pass has with the existing contents of the scene.
Whereas the texture blending operations seen in the texture_unit entries are concerned with blending between texture layers, this blending is about combining the output of this pass as a whole with the existing contents of the rendering target. This blending therefore allows object transparency and other special effects.
There are 2 formats, one using predefined blend types, the other allowing a roll-your-own approach using source and destination factors.
This is the simpler form, where the most commonly used blending modes are enumerated using a single parameter.
blend_type | One of the predefined blend types: add (the rendering output is added to the scene, equivalent to ’scene_blend one one’), modulate (the output is multiplied with the scene contents, equivalent to ’scene_blend dest_colour zero’), colour_blend (colour the scene based on the brightness of the input colours without darkening, equivalent to ’scene_blend src_colour one_minus_src_colour’), alpha_blend (the alpha of the output is used as a mask, equivalent to ’scene_blend src_alpha one_minus_src_alpha’) |
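For example, standard alpha transparency:

    scene_blend alpha_blend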
This version of the method allows complete control over the blending operation, by specifying the source and destination blending factors.
By default the operation is Ogre::SBO_ADD, which creates this equation
$$final = (passOutput * sourceFactor) + (frameBuffer * destFactor)$$
Each of the factors is specified as one of Ogre::SceneBlendFactor.
By setting a different Ogre::SceneBlendOperation you can achieve a different effect.
sourceFactor | The source factor in the above calculation, i.e. multiplied by the output of the Pass. |
destFactor | The destination factor in the above calculation, i.e. multiplied by the Frame Buffer contents. |
Valid values for both parameters are one of Ogre::SceneBlendFactor without the SBF_ prefix. E.g. SBF_DEST_COLOUR becomes dest_colour.
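For example, the explicit equivalent of the predefined ’alpha_blend’ type:

    scene_blend src_alpha one_minus_src_alpha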
Also see separate_scene_blend.
This option operates in exactly the same way as scene_blend, except that it allows you to specify the operations to perform between the rendered pixel and the frame buffer separately for colour and alpha components. By nature this option is only useful when rendering to targets which have an alpha channel which you’ll use for later processing, such as a render texture.
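For example, using the simple format:

    separate_scene_blend add modulate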
This example would add colour components but multiply alpha components. The blend modes available are as in scene_blend. The more advanced form is also available:
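    separate_scene_blend one one_minus_dest_alpha one one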
Again the options available in the second format are the same as those in the second format of scene_blend.
This directive changes the operation which is applied between the two components of the scene blending equation.
op | The blending operation mode to use for this pass. One of ’add’, ’subtract’, ’reverse_subtract’, ’min’ or ’max’. |
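For example:

    scene_blend_op reverse_subtract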
This directive works just like scene_blend_op, except that you can set the operation for colour and alpha separately.
Sets whether this pass renders with depth-buffer checking on or not.
If depth-buffer checking is on, whenever a pixel is about to be written to the frame buffer the depth buffer is checked to see if the pixel is in front of all other pixels written at that point. If not, the pixel is not written.
If depth checking is off, pixels are written no matter what has been rendered before. Also see setDepthFunction for more advanced depth check configuration.
Default: depth_check on
Sets whether this pass renders with depth-buffer writing on or not.
If depth-buffer writing is on, whenever a pixel is written to the frame buffer the depth buffer is updated with the depth value of that new pixel, thus affecting future rendering operations if future pixels are behind this one.
If depth writing is off, pixels are written without updating the depth buffer. Depth writing should normally be on, but can be turned off when rendering static backgrounds or when rendering a collection of transparent objects at the end of a scene so that they overlap each other correctly.
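A typical use is a transparent pass, sketched here with illustrative settings (the texture name is hypothetical):

    pass
    {
        scene_blend alpha_blend
        depth_write off

        texture_unit
        {
            texture translucent_glass.png
        }
    }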
Sets the function used to compare depth values when depth checking is on.
If depth checking is enabled (see setDepthCheckEnabled) a comparison occurs between the depth value of the pixel to be written and the current contents of the buffer. This comparison is normally Ogre::CMPF_LESS_EQUAL.
func | one of Ogre::CompareFunction without the CMPF_ prefix. E.g. CMPF_LESS_EQUAL becomes less_equal . |
Sets the bias applied to the depth value of this pass.
When polygons are coplanar, you can get problems with 'depth fighting' where the pixels from the two polys compete for the same screen pixel. This is particularly a problem for decals (polys attached to another surface to represent details such as bulletholes etc.).
A way to combat this problem is to use a depth bias to adjust the depth buffer value used for the decal such that it is slightly higher than the true value, ensuring that the decal appears on top. There are two aspects to the biasing, a constant bias value and a slope-relative biasing value, which varies according to the maximum depth slope relative to the camera, ie:
$$finalBias = maxSlope * slopeScaleBias + constantBias$$
Slope scale biasing is relative to the angle of the polygon to the camera, which makes for a more appropriate bias value, but this is ignored on some older hardware. Constant biasing is expressed as a factor of the minimum depth value, so a value of 1 will nudge the depth by one ’notch’ if you will.
constantBias | The constant bias value |
slopeScaleBias | The slope-relative bias value |
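For example, a small constant and slope-scale bias for a decal pass:

    depth_bias 1 1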
Also see iteration_depth_bias
Sets an additional bias derived from the number of times a given pass has been iterated. Operates just like depth_bias except that it applies an additional bias factor to the base depth_bias value, multiplying the provided value by the number of times this pass has been iterated before, through one of the iteration variants. So the first time the pass will get the depth_bias value, the second time it will get depth_bias + iteration_depth_bias, the third time it will get depth_bias + iteration_depth_bias * 2, and so on. The default is zero.
Sets the way the pass will use alpha to totally reject pixels from the pipeline.
The function parameter can be any of the options listed in the material depth_function attribute. The value parameter can theoretically be any value between 0 and 255, but is best limited to 0 or 128 for hardware compatibility.
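Format: alpha_rejection <function> <value>

For example:

    alpha_rejection greater_equal 128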
Sets whether this pass will use ’alpha to coverage’.
Alpha to coverage performs multisampling on the edges of alpha-rejected textures to produce a smoother result. It is only supported when multisampling is already enabled on the render target, and when the hardware supports alpha to coverage (see RenderSystemCapabilities). The common use for alpha to coverage is foliage rendering and chain-link fence style textures.
Sets whether when rendering this pass, rendering will be limited to a screen-space scissor rectangle representing the coverage of the light(s) being used in this pass.
In order to cut down on fillrate when you have a number of fixed-range lights in the scene, you can enable this option to request that during rendering, only the region of the screen which is covered by the lights is rendered. This region is the screen-space rectangle covering the union of the spheres making up the light ranges. Directional lights are ignored for this.
This is only likely to be useful for multipass additive lighting algorithms, where the scene has already been 'seeded' with an ambient pass and this pass is just adding light in affected areas.
When using Ogre::SHADOWTYPE_STENCIL_ADDITIVE or Ogre::SHADOWTYPE_TEXTURE_ADDITIVE, this option is implicitly used for all per-light passes and does not need to be specified. If you are not using shadows or are using a modulative or Integrated Texture Shadows then this could be useful.
Sets whether, when rendering this pass, triangle setup will be limited to the clipping volume covered by the light.
This option will only function if there is a single non-directional light being used in this pass. If there is more than one light, or only directional lights, then no clipping will occur. If there are no lights at all then the objects won’t be rendered at all.
In order to cut down on the geometry set up to render this pass when you have a single fixed-range light being rendered through it, you can enable this option to request that during triangle setup, clip planes are defined to bound the range of the light. In the case of a point light these planes form a cube, and in the case of a spotlight they form a pyramid. Directional lights are never clipped.
This option is only likely to be useful for multipass additive lighting algorithms, where the scene has already been 'seeded' with an ambient pass and this pass is just adding light in affected areas. In addition, it will only be honoured if there is exactly one non-directional light being used in this pass. Also, these clip planes override any user clip planes set on Camera.
When using Ogre::SHADOWTYPE_STENCIL_ADDITIVE or Ogre::SHADOWTYPE_TEXTURE_ADDITIVE, this option is automatically used for all per-light passes if you enable Ogre::SceneManager::setShadowUseLightClipPlanes and does not need to be specified. It is disabled by default since clip planes have a cost of their own which may not always exceed the benefits they give you. Generally the smaller your lights are the more chance you’ll see a benefit rather than a penalty from clipping.
When using an additive lighting mode (Ogre::SHADOWTYPE_STENCIL_ADDITIVE or Ogre::SHADOWTYPE_TEXTURE_ADDITIVE), the scene is rendered in 3 discrete stages, ambient (or pre-lighting), per-light (once per light, with shadowing) and decal (or post-lighting). Usually OGRE figures out how to categorise your passes automatically, but there are some effects you cannot achieve without manually controlling the illumination. For example specular effects are muted by the typical sequence because all textures are saved until the Ogre::IS_DECAL stage which mutes the specular effect. Instead, you could do texturing within the per-light stage if it's possible for your material and thus add the specular on after the decal texturing, and have no post-light rendering.
If you assign an illumination stage to a pass you have to assign it to all passes in the technique otherwise it will be ignored. Also note that whilst you can have more than one pass in each group, they cannot alternate, ie all ambient passes will be before all per-light passes, which will also be before all decal passes. Within their categories the passes will retain their ordering though.
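When you do take manual control, the stage names used by the script are ’ambient’, ’per_light’ and ’decal’, e.g.:

    illumination_stage per_light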
Sets whether or not this pass renders with all vertex normals being automatically re-normalised.
This option can be used to prevent lighting variations when scaling an object - normally because this scaling is hardware based, the normals get scaled too which causes lighting to become inconsistent. By default the SceneManager detects scaled objects and does this for you, but this has an overhead so you might want to turn that off through Ogre::SceneManager::setNormaliseNormalsOnScale(false) and only do it per-Pass when you need to.
Sets if transparent textures should be sorted by depth or not.
By default all transparent materials are sorted such that renderables furthest away from the camera are rendered first. This is usually the desired behaviour, but in certain cases this depth sorting may be unnecessary and undesirable - for example, when it is necessary to ensure that the rendering order does not change from one frame to the next. In this case you can set the value to ’off’ to prevent sorting.
You can also use the keyword ’force’ to force transparent sorting on, regardless of other circumstances. Usually sorting is only used when the pass is also transparent, and has a depth write or read which indicates it cannot reliably render without sorting. By using ’force’, you tell OGRE to sort this pass no matter what other circumstances are present.
Sets the hardware culling mode for this pass.
A typical way for the rendering engine to cull triangles is based on the 'vertex winding' of triangles. Vertex winding refers to the direction in which the vertices are passed or indexed to in the rendering operation as viewed from the camera, and will either be clockwise or anticlockwise (that's 'counterclockwise' for you Americans out there ;). The default is Ogre::CULL_CLOCKWISE, i.e. only triangles whose vertices are passed/indexed in anticlockwise order are rendered - this is a common approach and is used in 3D studio models for example. You can alter this culling mode if you wish but it is not advised unless you know what you are doing.
You may wish to use the Ogre::CULL_NONE option for mesh data that you cull yourself where the vertex winding is uncertain or for creating 2-sided passes.
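For example, a two-sided pass:

    cull_hardware none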
Sets the software culling mode for this pass.
In some situations you want to use manual culling of triangles rather than sending the triangles to the hardware and letting it cull them. This setting only takes effect on SceneManagers that use it (it is best used on large groups of planar world geometry rather than on movable geometry, where it would be expensive), but if used it can cull geometry before it is sent to the hardware.
In this case the culling is based on whether the ’back’ or ’front’ of the triangle is facing the camera - this definition is based on the face normal (a vector which sticks out of the front side of the polygon perpendicular to the face). Since Ogre expects face normals to be on the anticlockwise side of the face, Ogre::MANUAL_CULL_BACK is the software equivalent of the Ogre::CULL_CLOCKWISE setting, which is why they are both the default. The naming is different to reflect the way the culling is done though, since most of the time face normals are pre-calculated and they don’t have to be the way Ogre expects - you could set Ogre::CULL_NONE and completely cull in software based on your own face normals, if you have the right SceneManager which uses them.
Sets whether or not dynamic lighting is turned on for this pass.
Turning dynamic lighting off makes any ambient, diffuse, specular, emissive and shading properties for this pass redundant. If lighting is turned off, all objects rendered using the pass will be fully lit. When lighting is turned on, objects are lit according to their vertex normals for diffuse and specular light, and globally for ambient and emissive.
Sets the kind of shading which should be used for representing dynamic lighting for this pass.
When dynamic lighting is turned on, the effect is to generate colour values at each vertex. Whether these values are interpolated across the face (and how) depends on this setting. The default shading method is Ogre::SO_GOURAUD.
mode | one of Ogre::ShadeOptions without the SO_ prefix. E.g. SO_FLAT becomes flat . |
Sets how polygons should be rasterised, i.e. whether they should be filled in, or just drawn as lines or points. The default is Ogre::PM_SOLID.
mode | one of Ogre::PolygonMode without the PM_ prefix. E.g. PM_SOLID becomes solid . |
Sets whether or not the polygon_mode set on this pass can be downgraded by the camera.
override | If set to false, this pass will always be rendered at its own chosen polygon mode no matter what the camera says. The default is true. |
Tells the pass whether it should override the scene fog settings, and enforce its own. Very useful for things that you don’t want to be affected by fog when the rest of the scene is fogged, or vice versa. Note that this only affects fixed-function fog - the original scene fog parameters are still sent to shaders which use the fog_params parameter binding (this allows you to turn off fixed function fog and calculate it in the shader instead; if you want to disable shader fog you can do that through shader parameters anyway).
If you specify ’true’ for the first parameter and you supply the rest of the parameters, you are telling the pass to use these fog settings in preference to the scene settings, whatever they might be. If you specify ’true’ but provide no further parameters, you are telling this pass to never use fogging no matter what the scene says.
type | none = No fog, equivalent of just using ’fog_override true’; linear = Linear fog between the <start> and <end> distances; exp = Fog increases exponentially from the camera (fog = 1/e^(distance * density)), use the <density> param to control it; exp2 = Fog increases at the square of the exp rate, i.e. even quicker (fog = 1/e^(distance * density)^2), use the <density> param to control it |
colour | Sequence of 3 floating point values from 0 to 1 indicating the red, green and blue intensities |
density | The density parameter used in the ’exp’ or ’exp2’ fog types. Not used in linear mode but param must still be there as a placeholder |
start | The start distance from the camera of linear fog. Must still be present in other modes, even though it is not used. |
end | The end distance from the camera of linear fog. Must still be present in other modes, even though it is not used. |
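For example:

    fog_override true exp 1 1 1 0.002 100 10000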
Sets whether this pass renders with colour writing on or not. Alternatively, it can also be used to enable/disable colour writing specific channels. In the second format, the parameters are in the red, green, blue, alpha order.
If colour writing is off no visible pixels are written to the screen during this pass. You might think this is useless, but if you render with colour writing off, and with very minimal other settings, you can use this pass to initialise the depth buffer before subsequently rendering other passes which fill in the colour data. This can give you significant performance boosts on some newer cards, especially when using complex fragment programs, because if the depth check fails then the fragment program is never run.
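For example, a depth-only pre-pass simply turns colour writing off:

    colour_write off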
Sets the first light which will be considered for use with this pass.
Normally the lights passed to a pass will start from the beginning of the light list for this object. This option allows you to make this pass start from a higher light index, for example if one of your earlier passes could deal with lights 0-3, and this pass dealt with lights 4+. This option also has an interaction with pass iteration, in that if you choose to iterate this pass per light too, the iteration will only begin from light 4.
Sets the maximum number of lights which will be considered for use with this pass.
The maximum number of lights which can be used when rendering fixed-function materials is set by the rendering system, and is typically set at 8. When you are using the programmable pipeline (See Using Vertex/Geometry/Fragment Programs in a Pass) this limit is dependent on the program you are running, or, if you use ’iteration once_per_light’ or a variant (See iteration), it effectively only bounded by the number of passes you are willing to use. If you are not using pass iteration, the light limit applies once for this pass. If you are using pass iteration, the light limit applies across all iterations of this pass - for example if you have 12 lights in range with an ’iteration once_per_light’ setup but your max_lights is set to 4 for that pass, the pass will only iterate 4 times.
Sets whether or not this pass is iterated, i.e. issued more than once.
Format: iteration <once | once_per_light> [lightType]
Format: iteration <number> [per_light [lightType]]
Format: iteration <number> per_n_lights <num_lights> [lightType]

iteration once | The pass is only executed once, which is the default behaviour. |
iteration once_per_light point | The pass is executed once for each point light. |
iteration 5 | The render state for the pass will be set up and then the draw call will execute 5 times. |
iteration 5 per_light point | The render state for the pass will be set up and then the draw call will execute 5 times; this will be done for each point light. |
iteration 1 per_n_lights 2 point | The render state for the pass will be set up and the draw call executed once for every 2 point lights. |
By default, passes are only issued once. However, if you use the programmable pipeline, or you wish to exceed the normal limits on the number of lights which are supported, you might want to use the once_per_light option. In this case, only light index 0 is ever used, and the pass is issued multiple times, each time with a different light in light index 0. Clearly this will make the pass more expensive, but it may be the only way to achieve certain effects such as per-pixel lighting effects which take into account 1..n lights.
Using a number instead of "once" instructs the pass to iterate more than once after the render state is set up. The render state is not changed after the initial setup, so repeated draw calls are very fast and ideal for passes using programmable shaders that must iterate more than once with the same render state, e.g. shaders that do fur, motion blur, or special filtering.
If you use once_per_light, you should also add an ambient pass to the technique before this pass, otherwise when no lights are in range of this object it will not get rendered at all; this is important even when you have no ambient light in the scene, because you would still want the object’s silhouette to appear.
The lightType parameter to the attribute only applies if you use once_per_light, per_light, or per_n_lights, and restricts the pass to being run for lights of a single type (either ’point’, ’directional’ or ’spot’). In the example, the pass will be run once per point light. This can be useful because when you’re writing a vertex / fragment program it is a lot easier if you can assume the kind of lights you’ll be dealing with; at least point and directional lights can then be dealt with in one way.

Default: iteration once
Example: Simple Fur shader material script that uses a second pass with 10 iterations to grow the fur:
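A condensed sketch of such a material (the material, pass and texture names here are hypothetical, and a real fur effect would also reference a vertex program that pushes the vertices outwards a little further on each iteration):

    material Examples/HypotheticalFur
    {
        technique
        {
            // First pass: render the base skin once.
            pass base_coat
            {
                texture_unit
                {
                    texture fur_base.jpg
                }
            }

            // Second pass: re-issue the same draw call 10 times to build
            // up translucent 'shell' layers with an identical render state.
            pass fur_shells
            {
                scene_blend alpha_blend
                depth_write off
                iteration 10

                texture_unit
                {
                    texture fur_noise.png
                }
            }
        }
    }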
This setting allows you to change the size of points when rendering a point list, or a list of point sprites. The interpretation of this command depends on the Ogre::Pass::setPointAttenuation option - if it is off (the default), the point size is in screen pixels; if it is on, it is expressed as normalised screen coordinates (1.0 is the height of the screen) when the point is at the origin.
This setting specifies whether or not hardware point sprite rendering is enabled for this pass. Enabling it means that a point list is rendered as a list of quads rather than a list of dots. It is very useful to use this option if you are using a BillboardSet and only need to use point oriented billboards which are all of the same size. You can also use it for any other point list render.
Defines whether point size is attenuated with view space distance, and in what fashion.
When performing point rendering or point sprite rendering, point size can be attenuated with distance. The equation for doing this is
$$attenuation = 1 / (constant + linear * dist + quadratic * dist^2)$$
For example, to disable distance attenuation (constant screensize) you would set constant to 1, and linear and quadratic to 0. A standard perspective attenuation would be 0, 1, 0 respectively.
The resulting size is clamped to the minimum and maximum point size.
enabled | Whether point attenuation is enabled |
constant,linear,quadratic | Parameters to the attenuation function defined above |
Sets the minimum point size after attenuation (point_size_attenuation). For details on the size metrics, See point_size.
Sets the maximum point size after attenuation (point_size_attenuation). For details on the size metrics, See point_size. A value of 0 means the maximum is set to the same as the max size reported by the current card.
This property determines what width is used to render lines.
Here are the attributes you can use in a ’texture_unit’ section of a .material script:
Additionally you can use all attributes of Samplers directly to implicitly create an Ogre::Sampler contained in this TextureUnit.
You can also use a nested ’texture_source’ section in order to use a special add-in as a source of texture data, See External Texture Sources for details.
Sets the alias name for this texture unit.
Setting the texture alias name is useful if this material is to be inherited by other materials and only the textures will be changed in the new material (see Texture Aliases). Default: If a texture_unit has a name then the texture_alias defaults to the texture_unit name.
Sets the name of the static texture image this layer will use.
This setting is mutually exclusive with the anim_texture attribute. Note that the texture file cannot include spaces. Those of you Windows users who like spaces in filenames, please get over it and use underscores instead.
type | specify the type of texture to create - the default is ’2d’, but you can override this; the full list is: 1d (a one-dimensional texture, addressed by a single texture coordinate), 2d (the default, addressed by u and v), 3d (a volume texture, addressed by 3 coordinates), cubic (a cube map; see cubic_texture) |
numMipMaps | specify the number of mipmaps to generate for this texture. The default is ’unlimited’ which means mips down to 1x1 size are generated. You can specify a fixed number (even 0) if you like instead. Note that if you use the same texture in many material scripts, the number of mipmaps generated will conform to the number specified in the first texture_unit used to load the texture - so be consistent with your usage. |
alpha | specify that a single channel (luminance) texture should be loaded as alpha rather than the default which is to load it into the red channel. This can be helpful if you want to use alpha-only textures in the fixed function pipeline. Default: none |
PixelFormat | specify the desired pixel format of the texture to create, which may be different to the pixel format of the texture file being loaded. Bear in mind that the final pixel format will be constrained by hardware capabilities so you may not get exactly what you ask for. Names defined in Ogre::PixelFormat are valid values. |
gamma | informs the renderer that you want the graphics hardware to perform gamma correction on the texture values as they are sampled for rendering. This is only applicable for textures which have 8-bit colour channels (e.g. PF_R8G8B8). Often, 8-bit per channel textures will be stored in gamma space in order to increase the precision of the darker colours (http://en.wikipedia.org/wiki/Gamma_correction) but this can throw out blending and filtering calculations since they assume linear space colour values. For the best quality shading, you may want to enable gamma correction so that the hardware converts the texture values to linear space for you automatically when sampling the texture, then the calculations in the pipeline can be done in a reliable linear colour space. When rendering to a final 8-bit per channel display, you’ll also want to convert back to gamma space which can be done in your shader (by raising to the power 1/2.2) or you can enable gamma correction on the texture being rendered to or the render window. Note that the ’gamma’ option on textures is applied on loading the texture so must be specified consistently if you use this texture in multiple places. |
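For example (texture names hypothetical; the second line requests a 2D texture with 5 mipmaps):

    texture funkywall.jpg
    texture landscape_detail.png 2d 5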
Sets the images to be used in an animated texture layer. There are 2 formats, one for implicitly determined image names, one for explicitly named images.
Animated textures are just a series of images making up the frames of the animation. All the images must be the same size, and their names must have a frame number appended before the extension, e.g. if you specify a name of "flame.jpg" with 3 frames, the image names must be "flame_0.jpg", "flame_1.jpg" and "flame_2.jpg".
You can change the active frame on a texture layer by calling the setCurrentFrame method.
name | The base name of the textures to use e.g. flame.jpg for frames flame_0.jpg, flame_1.jpg etc. |
numFrames | The number of frames in the sequence. |
duration | The length of time it takes to display the whole animation sequence, in seconds. If 0, no automatic transition occurs. |
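For example, a 2.5 second animation using the implicit format, followed by the same animation with explicitly named frames:

    anim_texture flame.jpg 5 2.5
    anim_texture flame_0.jpg flame_1.jpg flame_2.jpg flame_3.jpg flame_4.jpg 2.5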
This sets up the same duration animation but from 5 separately named image files. The first format is more concise, but the second is provided if you cannot make your images conform to the naming standard required for it.
Sets the images used in a cubic texture, i.e. one made up of 6 individual images making up the faces of a cube, or 1 cube texture if supported by the texture format (DDS for example) and render system. These kinds of textures are used for reflection maps (if hardware supports cubic reflection maps) or skyboxes. There are 2 formats, a brief format expecting image names of a particular format and a more flexible but longer format for arbitrarily named textures.
This brief format is deprecated; use ’texture <basename> cubic’ instead. The base name in this format is something like ’skybox.jpg’, and the system will expect you to provide skybox_fr.jpg, skybox_bk.jpg, skybox_up.jpg, skybox_dn.jpg, skybox_lf.jpg, and skybox_rt.jpg for the individual faces.
In this case each face is specified explicitly, in case you don’t want to conform to the image naming standards above. You can only use this for the separateUV version, since the combinedUVW version requires a single texture name to be assigned to the combined 3D texture (see below).
In both cases the final parameter means the following:
The 6 textures are kept separate but are all referenced by this single texture layer. One texture at a time is active (they are actually stored as 6 frames), and they are addressed using standard 2D UV coordinates.
The 6 textures are combined into a single ’cubic’ texture map which is then addressed using 3D texture coordinates.
Some render systems, when implementing vertex texture fetch, separate the binding of textures for use in the vertex program versus those used in fragment programs. This setting allows you to target the vertex processing unit with a texture binding, in those cases. For rendersystems which have a unified binding for the vertex and fragment units, this setting makes no difference.
Format: binding_type <vertex|fragment>
Tells this texture unit where it should get its content from. The default is to get texture content from a named texture, as defined with the texture, cubic_texture, anim_texture attributes. However you can also pull texture information from other automated sources.
type | named = the default; the texture content comes from a named texture, as defined with the texture, cubic_texture or anim_texture attributes. shadow = the texture unit is automatically bound to a shadow texture created by the system, for use with texture shadows. compositor = the texture unit references a texture rendered by a compositor; identify it with the parameters below. |
Only valid when content type is compositor.
compositorName | The name of the compositor to reference. |
textureName | The name of the texture to reference. |
mrtIndex | The index of the wanted texture, if referencing an MRT. |
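For example:

    content_type compositor DepthCompositor OutputTexture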
Sets which texture coordinate set is to be used for this texture layer. A mesh can define multiple sets of texture coordinates, this sets which one this material uses.
Determines how the colour of this texture layer is combined with the one below it (or the lighting effect on the geometry if this is the first layer).
This method is the simplest way to blend texture layers, because it requires only one parameter, gives you the most common blending types, and automatically sets up 2 blending methods: one for if single-pass multitexturing hardware is available, and another for if it is not and the blending must be achieved through multiple rendering passes. It is, however, quite limited and does not expose the more flexible multitexturing operations, simply because these can't be automatically supported in multipass fallback mode. If you want to use the fancier options, use Ogre::TextureUnitState::setColourOperationEx, but you'll either have to be sure that enough multitexturing units will be available, or you should explicitly set a fallback using Ogre::TextureUnitState::setColourOpMultipassFallback.
op | One of the Ogre::LayerBlendOperation enumerated blending types. Without the LBO_ prefix. E.g. LBO_REPLACE becomes replace . |
This is an extended version of the Ogre::TextureUnitState::setColourOperation method which allows extremely detailed control over the blending applied between this and earlier layers. See the Warning below about the issues between multipass and multitexturing that using this method can create.
Texture colour operations determine how the final colour of the surface appears when rendered. Texture units are used to combine colour values from various sources (ie. the diffuse colour of the surface from lighting calculations, combined with the colour of the texture). This method allows you to specify the 'operation' to be used, ie. the calculation such as adds or multiplies, and which values to use as arguments, such as a fixed value or a value from a previous calculation.
The defaults for each layer are:
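    colour_op_ex modulate src_texture src_current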
ie. each layer takes the colour results of the previous layer, and multiplies them with the new texture being applied. Bear in mind that colours are RGB values from 0.0 - 1.0 so multiplying them together will result in values in the same range, 'tinted' by the multiply. Note however that a straight multiply normally has the effect of darkening the textures - for this reason there are brightening operations like Ogre::LBX_MODULATE_X2. See the Ogre::LayerBlendOperation and Ogre::LayerBlendSource enumerated types for full details.
The final 3 parameters are only required if you decide to pass values manually into the operation, i.e. you want one or more of the inputs to the colour calculation to come from a fixed value that you supply. Hence you only need to fill these in if you supply Ogre::LBS_MANUAL to the corresponding source, or use the Ogre::LBX_BLEND_MANUAL operation.
op | The operation to be used, e.g. modulate (multiply), add, subtract. |
source1 | The source of the first colour to the operation e.g. texture colour. |
source2 | The source of the second colour to the operation e.g. current surface colour. |
arg1 | Manually supplied colour value (only required if source1 = LBS_MANUAL). |
arg2 | Manually supplied colour value (only required if source2 = LBS_MANUAL). |
manualBlend | Manually supplied 'blend' value - only required for operations which require manual blend e.g. LBX_BLEND_MANUAL. |
Each parameter can be one of Ogre::LayerBlendOperationEx or Ogre::LayerBlendSource without the prefix. E.g. LBX_MODULATE_X4 becomes modulate_x4.
Sets the multipass fallback operation for this layer, if you used colour_op_ex and not enough multitexturing hardware is available.
Because some effects exposed using Ogre::TextureUnitState::setColourOperationEx are only supported under multitexturing hardware, if the hardware is lacking the system must fallback on multipass rendering, which unfortunately doesn't support as many effects. This method is for you to specify the fallback operation which most suits you.
You'll notice that the interface is the same as the Ogre::Material::setSceneBlending method; this is because multipass rendering IS effectively scene blending, since each layer is rendered on top of the last using the same mechanism as making an object transparent, it's just being rendered in the same place repeatedly to get the multitexture effect.
If you use the simpler (and hence less flexible) Ogre::TextureUnitState::setColourOperation method you don't need to call this as the system sets up the fallback for you.
This works in exactly the same way as setColourOperationEx, except that the effect is applied to the level of alpha (i.e. transparency) of the texture rather than its colour. When the alpha of a texel (a pixel on a texture) is 1.0, it is opaque, whereas it is fully transparent if the alpha is 0.0. Please refer to the Ogre::TextureUnitState::setColourOperationEx method for more info.
op | The operation to be used, e.g. modulate (multiply), add, subtract |
source1 | The source of the first alpha value to the operation e.g. texture alpha |
source2 | The source of the second alpha value to the operation e.g. current surface alpha |
arg1 | Manually supplied alpha value (only required if source1 = Ogre::LBS_MANUAL) |
arg2 | Manually supplied alpha value (only required if source2 = Ogre::LBS_MANUAL) |
manualBlend | Manually supplied 'blend' value - only required for operations which require manual blend e.g. Ogre::LBX_BLEND_MANUAL |
Turns on/off the texture coordinate effect that makes this layer an environment map.
Environment maps make an object look reflective by using automatic texture coordinate generation depending on the relationship between the object’s vertices or normals and the eye.
spherical | A spherical environment map. Requires a single texture which is either a fish-eye lens view of the reflected scene, or some other texture which looks good as a spherical map (a texture of glossy highlights is popular especially in car sims). This effect is based on the relationship between the eye direction and the vertex normals of the object, so works best when there are a lot of gradually changing normals, i.e. curved objects. |
planar | Similar to the spherical environment map, but the effect is based on the position of the vertices in the viewport rather than vertex normals. This effect is therefore useful for planar geometry (where a spherical env_map would not look good because the normals are all the same) or objects without normals. |
cubic_reflection | A more advanced form of reflection mapping which uses a group of 6 textures making up the inside of a cube, each of which is a view of the scene down each axis. Works extremely well in all cases but has a higher technical requirement from the card than spherical mapping. Requires that you bind a cubic_texture to this texture unit and use the ’combinedUVW’ option. |
cubic_normal | Generates 3D texture coordinates containing the camera space normal vector from the normal information held in the vertex data. Again, full use of this feature requires a cubic_texture with the ’combinedUVW’ option. |
Sets the translation offset of the texture, ie scrolls the texture.
This method sets the translation element of the texture transformation, and is easier to use than setTextureTransform if you are combining translation, scaling and rotation in your texture transformation. If you want to animate these values use Ogre::TextureUnitState::setScrollAnimation
u | The amount the texture should be moved horizontally (u direction). |
v | The amount the texture should be moved vertically (v direction). |
Sets up an animated scroll for the texture layer.
Useful for creating constant scrolling effects on a texture layer (for varying scrolls, see Ogre::TextureUnitState::setTransformAnimation).
uSpeed | The number of horizontal loops per second (+ve=moving right, -ve = moving left). |
vSpeed | The number of vertical loops per second (+ve=moving up, -ve= moving down). |
Sets the anticlockwise rotation factor applied to texture coordinates.
This sets a fixed rotation angle - if you wish to animate this, use Ogre::TextureUnitState::setRotateAnimation
angle | The angle of rotation (anticlockwise). |
Sets up an animated texture rotation for this layer.
Useful for constant rotations (for varying rotations, see Ogre::TextureUnitState::setTransformAnimation).
speed | The number of complete anticlockwise revolutions per second (use -ve for clockwise) |
Sets the scaling factor applied to texture coordinates.
This method sets the scale element of the texture transformation, and is easier to use than setTextureTransform if you are combining translation, scaling and rotation in your texture transformation.
If you want to animate these values use Ogre::TextureUnitState::setTransformAnimation
uScale | The value by which the texture is to be scaled horizontally. |
vScale | The value by which the texture is to be scaled vertically. |
Sets up a general time-relative texture modification effect.
This can be called multiple times for different values of ttype, but only the latest effect applies if called multiple times for the same ttype.
ttype | The type of transform, either translate (scroll), scale (stretch) or rotate (spin). |
waveType | The shape of the wave, see Ogre::WaveformType enum for details. |
base | The base value for the function (range of output = {base, base + amplitude}). |
frequency | The speed of the wave in cycles per second. |
phase | The offset of the start of the wave, e.g. 0.5 to start half-way through the wave. |
amplitude | Scales the output so that instead of lying within 0..1 it lies within 0..1*amplitude for exaggerated effects. |
ttype is one of:

scroll_x | Animate the u scroll value |
scroll_y | Animate the v scroll value |
rotate | Animate the rotation value |
scale_x | Animate the u scale value |
scale_y | Animate the v scale value |
waveType is one of Ogre::WaveformType without the WFT_ prefix. E.g. WFT_SQUARE becomes square.
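For example, using the wave_xform attribute to oscillate the u scale:

    wave_xform scale_x sine 1.0 0.2 0.0 5.0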
This attribute allows you to specify a static 4x4 transformation matrix for the texture unit, thus replacing the individual scroll, rotate and scale attributes mentioned above.
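Format: transform m00 m01 m02 m03 m10 m11 m12 m13 m20 m21 m22 m23 m30 m31 m32 m33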
The indexes of the 4x4 matrix value above are expressed as m<row><col>.
By default all texture units use a shared default Sampler object. This parameter allows you to explicitly set a different one.
Samplers allow you to quickly change the settings for all associated Textures. Typically you have many Textures but only a few sampling states in your application.
Defines what happens when texture coordinates exceed 1.0 for this texture layer. You can use the simple format to specify the addressing mode for all 3 potential texture coordinates at once, or you can use the 2/3 parameter extended format to specify a different mode per texture coordinate.
Valid values for both are one of Ogre::TextureAddressingMode without the TAM_ prefix. E.g. TAM_WRAP becomes wrap.
Sets the border colour of border texture address mode (see tex_address_mode).
Sets the type of texture filtering used when magnifying or minifying a texture. There are 2 formats to this attribute, the simple format where you simply specify the name of a predefined set of filtering options, and the complex format, where you individually set the minification, magnification, and mip filters yourself.
With this format, you only need to provide a single parameter: one of ’none’ (no filtering or mipmapping), ’bilinear’ (bilinear filtering with a point mip filter), ’trilinear’ (bilinear filtering with a linear mip filter) or ’anisotropic’.
This format gives you complete control over the minification, magnification, and mip filters.
Each parameter can be one of Ogre::FilterOptions without the FO_ prefix. E.g. FO_LINEAR becomes linear.
Enables or disables the comparison test for depth textures. When enabled, sampling the texture returns how the sampled value compares against a reference value instead of the sampled value itself. Combined with linear filtering this can be used to implement hardware PCF for shadow maps.
The comparison function to use when compare_test is enabled.
func | one of Ogre::CompareFunction without the CMPF_ prefix, e.g. CMPF_LESS_EQUAL becomes less_equal. |
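A sketch of a texture unit set up for hardware PCF sampling of a shadow map, assuming texture shadows are in use (content_type shadow binds the shadow texture):

    texture_unit ShadowMap
    {
        content_type shadow
        filtering linear linear none
        compare_test on
        compare_func less_equal
    }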
In order to use a vertex, geometry or fragment program in your materials (See Using Vertex/Geometry/Fragment Programs in a Pass), you first have to define them. A single program definition can be used by any number of materials, the only prerequisite is that a program must be defined before being referenced in the pass section of a material.
The definition of a program can either be embedded in the .material script itself (in which case it must precede any references to it in the script), or if you wish to use the same program across multiple .material files, you can define it in an external .program script. You define the program in exactly the same way whether you use a .program script or a .material script, the only difference is that all .program scripts are guaranteed to have been parsed before all .material scripts, so you can guarantee that your program has been defined before any .material script that might use it. Just like .material scripts, .program scripts will be read from any location which is on your resource path, and you can define many programs in a single script.
Vertex, geometry and fragment programs can be low-level (i.e. assembler code written to the specification of a given low level syntax such as vs_1_1 or arbfp1) or high-level such as DirectX9 HLSL, the OpenGL Shading Language (GLSL), or nVidia’s Cg language (See High-level Programs). High level languages give you a number of advantages, such as being able to write more intuitive code, and possibly being able to target multiple architectures in a single program (for example, the same Cg program might be able to be used in both D3D and GL, whilst the equivalent low-level programs would require separate techniques, each targeting a different API). High-level programs also allow you to use named parameters instead of simply indexed ones, although note that parameters are not defined here; they are used in the Pass.
Here is an example of a definition of a low-level vertex program:
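A minimal sketch (the program name and source file are illustrative):

    vertex_program myVertexProgram asm
    {
        source myVertexProgram.asm
        syntax vs_1_1
    }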
As you can see, that’s very simple, and defining a fragment or geometry program is exactly the same, just with vertex_program replaced with fragment_program or geometry_program, respectively. You give the program a name in the header, followed by the word ’asm’ to indicate that this is a low-level program. Inside the braces, you specify where the source is going to come from (and this is loaded from any of the resource locations as with other media), and also indicate the syntax being used. You might wonder why the syntax specification is required when many of the assembler syntaxes have a header identifying them anyway - well the reason is that the engine needs to know what syntax the program is in before reading it, because during compilation of the material, we want to skip programs which use an unsupportable syntax quickly, without loading the program first.
The current supported syntaxes are:
vs_1_1: This is one of the DirectX vertex shader assembler syntaxes.
Supported on cards from: ATI Radeon 8500, nVidia GeForce 3
vs_2_0: Another one of the DirectX vertex shader assembler syntaxes.
Supported on cards from: ATI Radeon 9600, nVidia GeForce FX 5 series
vs_2_x: Another one of the DirectX vertex shader assembler syntaxes.
Supported on cards from: ATI Radeon X series, nVidia GeForce FX 6 series
vs_3_0: Another one of the DirectX vertex shader assembler syntaxes.
Supported on cards from: ATI Radeon HD 2000+, nVidia GeForce FX 6 series
arbvp1: This is the OpenGL standard assembler format for vertex programs. It’s roughly equivalent to DirectX vs_1_1.
vp20: This is an nVidia-specific OpenGL vertex shader syntax which is a superset of vs 1.1. ATI Radeon HD 2000+ also supports it.
vp30: Another nVidia-specific OpenGL vertex shader syntax. It is a superset of vs 2.0, which is supported on nVidia GeForce FX 5 series and higher. ATI Radeon HD 2000+ also supports it.
vp40: Another nVidia-specific OpenGL vertex shader syntax. It is a superset of vs 3.0, which is supported on nVidia GeForce FX 6 series and higher.
ps_1_1, ps_1_2, ps_1_3: DirectX pixel shader (i.e. fragment program) assembler syntax.
Supported on cards from: ATI Radeon 8500, nVidia GeForce 3
ps_1_4: DirectX pixel shader (i.e. fragment program) assembler syntax.
Supported on cards from: ATI Radeon 8500, nVidia GeForce FX 5 series
ps_2_0: DirectX pixel shader (i.e. fragment program) assembler syntax.
Supported cards: ATI Radeon 9600, nVidia GeForce FX 5 series
ps_2_x: DirectX pixel shader (i.e. fragment program) assembler syntax. This is basically ps_2_0 with a higher number of instructions.
Supported cards: ATI Radeon X series, nVidia GeForce FX 6 series
ps_3_0: DirectX pixel shader (i.e. fragment program) assembler syntax.
Supported cards: ATI Radeon HD 2000+, nVidia GeForce FX 6 series
ps_3_x: DirectX pixel shader (i.e. fragment program) assembler syntax.
Supported cards: nVidia GeForce FX 7 series
arbfp1: This is the OpenGL standard assembler format for fragment programs. It’s roughly equivalent to ps_2_0, which means that not all cards that support basic pixel shaders under DirectX support arbfp1 (for example, neither the GeForce3 nor the GeForce4 supports arbfp1, but they do support ps_1_1).
fp20: This is an nVidia-specific OpenGL fragment syntax which is a superset of ps 1.3. It allows you to use the ’nvparse’ format for basic fragment programs. It actually uses NV_texture_shader and NV_register_combiners to provide functionality equivalent to DirectX’s ps_1_1 under GL, but only for nVidia cards. However, since ATI cards adopted arbfp1 a little earlier than nVidia, it is mainly nVidia cards like the GeForce3 and GeForce4 that this will be useful for. You can find more information about nvparse at http://developer.nvidia.com/object/nvparse.html.
fp30: Another nVidia-specific OpenGL fragment shader syntax. It is a superset of ps 2.0, which is supported on nVidia GeForce FX 5 series and higher. ATI Radeon HD 2000+ also supports it.
fp40: Another nVidia-specific OpenGL fragment shader syntax. It is a superset of ps 3.0, which is supported on nVidia GeForce FX 6 series and higher.
nvgp4: An nVidia-specific OpenGL geometry shader syntax.
Supported cards: nVidia GeForce FX 8 series
glsles: OpenGL Shading Language for Embedded Systems. It is a variant of GLSL, streamlined for low power devices. Supported cards: PowerVR SGX series
You can get a definitive list of the syntaxes supported by the current card by calling GpuProgramManager::getSingleton().getSupportedSyntax().
Assembler shaders don’t have named constants (also called uniform parameters) because the language does not support them. However, if you decided, for example, to precompile your shaders from a high-level language down to assembler for performance or obscurity, you might still want to use the named parameters. Well, you actually can: GpuNamedConstants, which contains the named parameter mappings, has a ’save’ method which you can use to write this data to disk, where you can reference it later using the manual_named_constants directive inside your assembler program declaration, e.g.
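A sketch, reusing the illustrative program from above:

    vertex_program myVertexProgram asm
    {
        source myVertexProgram.asm
        syntax vs_1_1
        manual_named_constants myVertexProgram.constants
    }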
In this case myVertexProgram.constants has been created by calling highLevelGpuProgram->getNamedConstants().save("myVertexProgram.constants"); sometime earlier as preparation, from the original high-level program. Once you’ve used this directive, you can use named parameters here even though the assembler program itself has no knowledge of them.
While defining a vertex, geometry or fragment program, you can also specify the default parameters to be used for materials which use it, unless they specifically override them. You do this by including a nested ’default_params’ section, like so:
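A sketch using a Cg vertex program (the program, file and parameter names are illustrative):

    vertex_program Ogre/CelShadingVP cg
    {
        source Example_CelShading.cg
        entry_point main_vp
        profiles vs_1_1 arbvp1

        default_params
        {
            param_named_auto lightPosition light_position_object_space 0
            param_named_auto eyePosition camera_position_object_space
            param_named_auto worldViewProj worldviewproj_matrix
            param_named shininess float 10
        }
    }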
The syntax of the parameter definition is exactly the same as when you define parameters when using programs, See Parameter specification. Defining default parameters allows you to avoid rebinding common parameters repeatedly (clearly in the above example, all but ’shininess’ are unlikely to change between uses of the program) which makes your material declarations shorter.
Often, not every parameter you want to pass to a shader is unique to that program, and perhaps you want to give the same value to a number of different programs, and a number of different materials using that program. Shared parameter sets allow you to define a ’holding area’ for shared parameters that can then be referenced when you need them in particular shaders, while keeping the definition of that value in one place. To define a set of shared parameters, you do this:
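For example (the set and parameter names are illustrative):

    shared_params YourSharedParamsName
    {
        shared_param_named mySharedParam1 float4 0.1 0.2 0.3 0.4
    }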
As you can see, you need to use the keyword ’shared_params’ and follow it with the name that you will use to identify these shared parameters. Inside the curly braces, you can define one parameter per line, in a way which is very similar to the param_named syntax. The definition of these lines is:
param_name | must be unique within the set |
param_type | can be any one of float, float2, float3, float4, int, int2, int3, int4, matrix2x2, matrix2x3, matrix2x4, matrix3x2, matrix3x3, matrix3x4, matrix4x2, matrix4x3 and matrix4x4. |
array_size | allows you to define arrays of param_type should you wish, and if present must be a number enclosed in square brackets (and note, must be separated from the param_type with whitespace). |
initial_values | If you wish, you can also initialise the parameters by providing a list of values. |
Once you have defined the shared parameters, you can reference them inside default_params and params blocks using shared_params_ref. You can also obtain a reference to them in your code via GpuProgramManager::getSharedParameters, and update the values for all instances using them.
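For instance, continuing the sketch above, a program could pick up the shared set in its default_params block:

    default_params
    {
        shared_params_ref YourSharedParamsName
    }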
If a new technique or pass needs to be added to a copied material then use a unique name for the technique or pass that does not exist in the parent material. Using an index for the name that is one greater than the last index in the parent will do the same thing. The new technique/pass will be added to the end of the techniques/passes copied from the parent material.
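A sketch of this (the material and technique names are illustrative):

    material ChildMaterial : ParentMaterial
    {
        // this technique name does not exist in ParentMaterial, so it is
        // appended after the techniques copied from the parent
        technique SoftwareFallback
        {
            pass
            {
                diffuse 1 0 0 1
            }
        }
    }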
A specific texture unit state (TUS) can be given a unique name within a pass of a material so that it can be identified later in cloned materials that need to override specified texture unit states in the pass without declaring previous texture units. Using a unique name for a Texture unit in a pass of a cloned material adds a new texture unit at the end of the texture unit list for the pass.
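For instance, a cloned material can override just one named texture unit of its parent without re-declaring the others (a sketch; the names are illustrative):

    material RedStone : BaseStone
    {
        technique
        {
            pass
            {
                // matches the texture unit named DiffuseTex in BaseStone
                // and overrides only its texture
                texture_unit DiffuseTex
                {
                    texture red_stone_diffuse.png
                }
            }
        }
    }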
Texture aliases are useful for when only the textures used in texture units need to be specified for a cloned material. In the source material i.e. the original material to be cloned, each texture unit can be given a texture alias name. The cloned material in the script can then specify what textures should be used for each texture alias. Note that texture aliases are a more specific version of Script Variables which can be used to easily set other values.
Using texture aliases within texture units:
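For example:

    texture_unit DiffuseTex
    {
        texture diffuse.jpg
    }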
Here, texture_alias defaults to DiffuseTex, the name of the texture unit.
Example: The base material to be cloned:
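A condensed sketch of such a base material (the shader program references are omitted for brevity, and the texture names are illustrative):

    material TSNormalSpecMapping
    {
        technique GLSL
        {
            pass
            {
                // named texture units: texture_alias defaults to the unit name
                texture_unit NormalMap
                {
                    texture defaultNM.png
                }
                texture_unit DiffuseMap
                {
                    texture defaultDiff.png
                }
                texture_unit SpecMap
                {
                    texture defaultSpec.png
                }
            }
        }

        technique HLSL_DX9
        {
            pass
            {
                // unnamed texture units: aliases are assigned explicitly
                texture_unit
                {
                    texture_alias NormalMap
                    texture defaultNM.png
                }
                texture_unit
                {
                    texture_alias DiffuseMap
                    texture defaultDiff.png
                }
                texture_unit
                {
                    texture_alias SpecMap
                    texture defaultSpec.png
                }
            }
        }
    }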
Note that the GLSL and HLSL techniques use the same textures. For each texture usage type, a texture alias is given that describes what the texture is used for. So the first texture unit in the GLSL technique has the same alias as the TUS in the HLSL technique, since it’s the same texture being used. The same goes for the second and third texture units.
For demonstration purposes, the GLSL technique makes use of texture_unit naming, and therefore the texture_alias name does not have to be set, since it defaults to the texture unit name. So why not use the default all the time, since it’s less typing? For most situations you can. It’s when you clone a material and then want to change the alias that you must use the texture_alias command in the script. You cannot change the name of a texture_unit in a cloned material, so texture_alias provides a facility to assign an alias name.
Now we want to clone the material, but only want to change the textures used. We could copy and paste the whole material, but if we decide to change the base material later then we would also have to update the copied material in the script. With set_texture_alias, copying a material is very easy. set_texture_alias is specified at the top of the material definition. All techniques using the specified texture alias will be affected by set_texture_alias.
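For example (the texture names are illustrative):

    material fxTest : TSNormalSpecMapping
    {
        set_texture_alias NormalMap fxTestNMap.png
        set_texture_alias DiffuseMap fxTestDiff.png
        set_texture_alias SpecMap fxTestMap.png
    }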
The textures in both techniques in the child material will automatically get replaced with the new ones we want to use.
The same process can be done in code, as long as you set up the texture alias names, so there is no need to traverse technique/pass/TUS to change a texture. You just call myMaterialPtr->applyTextureAliases(myAliasTextureNameList), which will update all textures in all texture units that match the alias names in the map container reference you passed as a parameter.
You don’t have to supply all the textures in the copied material.
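For instance (again with illustrative texture names):

    material fxTest2 : fxTest
    {
        set_texture_alias DiffuseMap fxTest2Diff.png
        set_texture_alias SpecMap fxTest2Map.png
    }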
Material fxTest2 only changes the diffuse and spec maps of material fxTest and uses the same normal map.
Another example:
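A sketch, with an illustrative texture name:

    material fxTest3 : TSNormalSpecMapping
    {
        // only the DiffuseMap alias is overridden; NormalMap and SpecMap
        // keep the defaults from TSNormalSpecMapping
        set_texture_alias DiffuseMap fxTest3Diff.png
    }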
fxTest3 will end up with the default textures for the normal map and spec map setup in TSNormalSpecMapping material but will have a different diffuse map. So your base material can define the default textures to use and then the child materials can override specific textures.