OGRE 1.10.12 (Object-Oriented Graphics Rendering Engine)
This component is used to generate shaders on the fly based on object material properties, scene setup and other user definitions.
The RTSS is not another uber shader with an exploding number of #ifdefs that make it increasingly difficult to add new functionality. Instead, it manages a set of opaque, isolated components (SubRenderStates), each implementing a specific effect. These "effects" include Fixed Function transformation and lighting. At the core, these components are plain shader files providing a set of functions, e.g. FFP_FUNC_LIGHT_DIRECTIONAL_DIFFUSE and FFP_FUNC_LIGHT_POINT_DIFFUSE.
Correctly ordering these functions, providing them with the right input values and interconnecting them is the main purpose of the RTSS.
To this end the RTSS defines a set of stages, e.g. Ogre::RTShader::FFP_TRANSFORM and Ogre::RTShader::FFP_TEXTURING. It then queries all registered SubRenderStates, which in turn attach functions given an Ogre::Pass. The stages are conceptually very similar to render queue groups.
After the RTSS has queried the SubRenderStates it continues to fill the entry function (e.g. main()
for GLSL) by generating the actual function invocations.
Basically, given $FFP_VS_TRANSFORM = [FFP_FUNC_TRANSFORM] and $FFP_VS_TEXTURING = [FFP_FUNC_TRANSFORM_TEXCOORD], it performs a (simplified) transformation: it takes an empty entry point and fills it with the corresponding function invocations in stage order.
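The generated entry point can be sketched as follows (GLSL-style pseudocode; parameter names and exact signatures are illustrative only):

```glsl
// Before: an empty vertex shader entry point.
void main() { }

// After: the RTSS resolved the required inputs via auto constants and
// vertex attributes, then emitted one invocation per function attached
// to each stage.
uniform mat4 worldViewProjMatrix; // routed from an AutoConstantType
attribute vec4 vertex;
attribute vec4 uv0;
varying vec4 oTexcoord0;

void main()
{
    FFP_FUNC_TRANSFORM(worldViewProjMatrix, vertex, gl_Position); // $FFP_VS_TRANSFORM
    FFP_FUNC_TRANSFORM_TEXCOORD(uv0, oTexcoord0);                 // $FFP_VS_TEXTURING
}
```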
It will automatically use Ogre::GpuProgramParameters::AutoConstantType as needed to obtain the required inputs and route them into the respective functions. In the above example no local parameters were allocated, but the RTSS will allocate them as needed (for instance, if you try to write to "vertex" in GLSL, where attributes are read-only).
Now that you know what the RTSS does, you are probably wondering how to change which functions are emitted per stage to, let's say, change the lighting from the FFP-style per-vertex lighting to per-pixel lighting.
The RTSS is flexible enough to "just" move the corresponding calculations from the vertex shader to the pixel shader.
The first option is to globally enforce per-pixel lighting; you can do the following:
Any non-FFP SRS will automatically override the default FFP SRS for the same stage, Ogre::RTShader::FFP_LIGHTING in this case.
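This scheme-level override can be sketched as follows (a sketch against the Ogre 1.x RTSS API, assuming the ShaderGenerator has already been initialized; error handling omitted):

```cpp
using namespace Ogre;

RTShader::ShaderGenerator* gen = RTShader::ShaderGenerator::getSingletonPtr();

// Fetch the global render state of the default scheme.
RTShader::RenderState* schemeRenderState =
    gen->getRenderState(RTShader::ShaderGenerator::DEFAULT_SCHEME_NAME);

// Create a per-pixel lighting SRS and add it as a template sub render
// state; it overrides the default FFP_LIGHTING stage SRS for all
// materials generated under this scheme.
RTShader::SubRenderState* perPixelLighting =
    gen->createSubRenderState(RTShader::PerPixelLighting::Type);
schemeRenderState->addTemplateSubRenderState(perPixelLighting);

// Invalidate the scheme so the affected shaders are regenerated.
gen->invalidateScheme(RTShader::ShaderGenerator::DEFAULT_SCHEME_NAME);
```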
Alternatively, you can enable per-pixel lighting for one material only, by adding a rtshader_system
section to the pass as follows:
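For example (the surrounding material and pass definitions are placeholders; only the rtshader_system section matters here):

```
material PerPixelExample // hypothetical material name
{
    technique
    {
        pass
        {
            // regular pass attributes (textures, lighting, ...) go here

            rtshader_system
            {
                lighting_stage per_pixel
            }
        }
    }
}
```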
For more examples, see Samples/Media/RTShaderLib/materials/RTShaderSystem.material.
Here are the attributes you can use in a rtshader_system
section of a .material script:
Force a specific lighting model.
Format1: lighting_stage <ffp|per_pixel>
Format2: lighting_stage normal_map <texturename> [tangent_space|object_space] [coordinateIndex] [none|bilinear|trilinear|anisotropic] [max_anisotropy] [mipmap_bias]
Example: lighting_stage normal_map Panels_Normal_Tangent.png tangent_space 0 bilinear 1 -1.0
Override dynamic light count. Allows you to customize which lights the RTSS will consider.
Format: light_count <pointLights> <directionalLights> <spotLights>
Force triplanar texturing
Format: triplanarTexturing <textureScale> <plateauSize> <transitionSpeed> <textureFromX> <textureFromY> <textureFromZ>
Example: triplanarTexturing 0.05 0.2 4.0 BumpyMetal.jpg egyptrockyfull.jpg MtlPlat2.jpg
| Parameter | Description |
|-----------|-------------|
| textureScale | texture coordinates are multiplied by this |
| plateauSize | plateau on which small components of the normal have no influence |
| transitionSpeed | transition speed between the three textures; valid values are in [0; 0.57], no bigger, to avoid division by zero |
| textureFromX | texture for the x-direction planar mapping |
| textureFromY | texture for the y-direction planar mapping |
| textureFromZ | texture for the z-direction planar mapping |
Integrated PSSM shadow receiver with 3 splits and custom split points.
Format: integrated_pssm4 <sp0> <sp1> <sp2> <sp3>
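An example with illustrative split point values (in practice these are usually taken from Ogre::PSSMShadowCameraSetup::getSplitPoints() so that they match your camera's clip distances):

```
rtshader_system
{
    // hypothetical split points in world units
    integrated_pssm4 1.0 25.0 75.0 500.0
}
```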
Apply photoshop-like blend effects to texture layers
Format: layered_blend <effect>
Example: layered_blend luminosity
| Parameter | Description |
|-----------|-------------|
| effect | one of default, normal, lighten, darken, multiply, average, add, subtract, difference, negation, exclusion, screen, overlay, hard_light, soft_light, color_dodge, color_burn, linear_dodge, linear_burn, linear_light, vivid_light, pin_light, hard_mix, reflect, glow, phoenix, saturation, color, luminosity |
Apply custom modulate effect to texture layer
Format: source_modifier <operation> custom <parameterNum>
Example: source_modifier src1_inverse_modulate custom 2
| Parameter | Description |
|-----------|-------------|
| operation | one of src1_modulate, src2_modulate, src1_inverse_modulate, src2_inverse_modulate |
| parameterNum | number of the custom shader parameter that controls the operation |
When the user asks the system to generate shaders for a given technique, they must provide the system with a name for the target technique scheme. The system then creates a new technique based on the source technique, but with a different scheme name. Note: in order to avoid clashes, the source technique must NOT contain any shaders; otherwise this step will fail.
The idea behind this concept is to use Ogre's built-in mechanism of material schemes, so all the user has to do in order to use the new technique is to change the material scheme of their viewport(s).
Before each viewport update, the system performs a validation step of all associated shader-based techniques it created. This step includes automatic synchronization with the scene's light and fog states. When the system detects that a scheme is out of date, it regenerates the appropriate shaders for each of its techniques.
The following steps are executed in order to generate shaders for a given technique:
Initializing the system is composed of the following steps:

- Create the internal managers and structures via the Ogre::RTShader::ShaderGenerator::initialize() method.
- Add the shader libraries resource location via the Ogre::ResourceGroupManager::addResourceLocation() method.

This step will associate the given technique with a destination shader-based generated technique. Calling the Ogre::RTShader::ShaderGenerator::createShaderBasedTechnique() method will cause the system to generate internal data structures associated with the source technique and will add a new technique to the source material. This new technique will have the scheme name that was passed as an argument to this method, and all of its passes will contain shaders that the system will generate and update during the application's runtime.
To use the generated technique, change the material scheme of your viewport(s) to the scheme name you passed as an argument to this method.
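The setup and usage steps above can be sketched as follows (a sketch against the Ogre 1.x RTSS API; the material name, scheme name, and the sceneMgr/viewport variables are placeholders, and error handling is omitted):

```cpp
using namespace Ogre;

// One-time initialization of the RTSS.
RTShader::ShaderGenerator::initialize();
RTShader::ShaderGenerator* gen = RTShader::ShaderGenerator::getSingletonPtr();
gen->addSceneManager(sceneMgr);   // sceneMgr: your Ogre::SceneManager*

// Associate the source technique (default scheme) of a material with a
// generated shader-based technique under a new scheme name.
gen->createShaderBasedTechnique("MyMaterial",                        // placeholder
                                MaterialManager::DEFAULT_SCHEME_NAME,
                                "MyRTSSScheme");                     // placeholder

// Render with the generated technique by switching the viewport scheme.
viewport->setMaterialScheme("MyRTSSScheme"); // viewport: your Ogre::Viewport*
```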
Note that you can automate the shader generation process for all materials. First, set the viewport scheme to the destination scheme of the RTSS shaders. Second, register an Ogre::MaterialManager::Listener and implement its handleSchemeNotFound() function. If the function is asked for a scheme handled by the RTSS, generate it based on the function's parameters.
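Such a listener can be sketched as follows (a sketch mirroring the pattern used by the Ogre sample framework; the class name and the "MyRTSSScheme" scheme name are placeholders):

```cpp
class SGSchemeResolver : public Ogre::MaterialManager::Listener
{
public:
    explicit SGSchemeResolver(Ogre::RTShader::ShaderGenerator* gen) : mGen(gen) {}

    Ogre::Technique* handleSchemeNotFound(unsigned short /*schemeIndex*/,
                                          const Ogre::String& schemeName,
                                          Ogre::Material* originalMaterial,
                                          unsigned short /*lodIndex*/,
                                          const Ogre::Renderable* /*rend*/)
    {
        if (schemeName != "MyRTSSScheme")
            return NULL; // not a scheme we handle

        // Create a shader-based technique from the material's default scheme.
        if (!mGen->createShaderBasedTechnique(
                originalMaterial->getName(),
                Ogre::MaterialManager::DEFAULT_SCHEME_NAME, schemeName))
            return NULL;

        // Return the technique that now carries the requested scheme.
        Ogre::Material::TechniqueIterator it =
            originalMaterial->getTechniqueIterator();
        while (it.hasMoreElements())
        {
            Ogre::Technique* tech = it.getNext();
            if (tech->getSchemeName() == schemeName)
                return tech;
        }
        return NULL;
    }

private:
    Ogre::RTShader::ShaderGenerator* mGen;
};
```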
During the application runtime, the ShaderGenerator instance receives notifications on a per-frame basis from its target SceneManager. At this point it checks the material scheme in use. If the current scheme has a representation in the manager, it executes its validate method. The SGScheme validation includes synchronization with the scene's light and fog settings; if the scheme is out of date, it will rebuild all shader-generated techniques.

The first step is to loop over every SGTechnique associated with this SGScheme and build its RenderStates, one for each pass. Each RenderState has its own hash code and is cached by the ShaderGenerator; the same RenderState can be shared by multiple SGPasses. The second step is to loop again over every SGTechnique and acquire a program set for each SGPass. The actual acquisition is done by the ProgramManager, which generates the CPU program representations and sends them to a matching ProgramWriter, chosen by the active target language; the writer generates the source code that is the basis for the GPU programs.

The result of this entire process is that each technique associated with the SGScheme has vertex and pixel shaders applied to all its passes, and these shaders are synchronized with the scene's light and fog settings.
The following is a partial list of components within the RTSS. These components are listed because they are of great importance in understanding, controlling and later extending the RTSS.
The ShaderGenerator is the main interface to the RTSS. Through it you can request the generation and destruction of shaders, influence which parts the shaders are created from, and control general system settings such as the shading language and shader caching.
A render state describes the different components that a shader will be created from. These components are referred to as SubRenderStates.
RenderStates exist on 2 levels: scheme and pass. Scheme RenderStates describe the SubRenderStates that will be used when creating a shader for a given material scheme. Pass RenderStates describe the SubRenderStates that will be used when creating a specific pass of a specific material. When a shader is generated for a given material, the system combines the SubRenderStates from both RenderStates to create a shader specific to a material pass in a specific scheme.
Sub-render states (SRS) are components designed to generate the code of the RTSS shaders. Each SRS usually has a specific role to fill within the shader's construction. These components can be combined in different combinations to create shaders with different capabilities.
There are 5 basic SRSs, one per FFP stage: transform, colour, lighting, texturing and fog. These are used to recreate the functionality provided by the fixed pipeline and are added by default to every scheme RenderState.
There are many more sub render states that already exist in the Ogre system and new ones can be added. Some of the existing SRSs include capabilities such as: per-pixel lighting, texture atlas, advanced texture blend, bump mapping, efficient multiple lights (sample), textured fog (sample), etc...
As the name suggests, sub-render-state factories are factories that produce sub-render states. Each factory generates a specific SRS.
These types of components are noteworthy for two reasons. The first and obvious one is that they allow the system to generate new SRSs for the materials it is asked to generate. The second is that they act as script readers and writers, allowing the system to create specific or specialized SRSs per material.
Although the system implements some common shader-based effects such as per-pixel lighting, normal mapping, etc., you may find it useful to write your own shader extensions.
In order to extend the system with your own shader effects you'll have to follow these steps:
Implementing the SubRenderState requires overriding the pure methods of the base class.
The SubRenderState class supplies a default implementation of the shader creation method, which breaks it down into three stages: resolving parameters, resolving dependencies and adding function invocations.
Note: This method also gives the SubRenderState the opportunity to modify the destination pass, e.g. the NormalMapLighting instance adds the normal map texture unit in this context.
Implementing the Ogre::RTShader::SubRenderStateFactory is much simpler and involves implementing the following methods:
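A skeleton of both classes might look as follows (a sketch against the Ogre 1.x RTSS headers; "MyEffect" is a placeholder and the effect logic itself is omitted):

```cpp
using namespace Ogre::RTShader;

class MyEffect : public SubRenderState
{
public:
    static Ogre::String Type; // unique SRS type name

    const Ogre::String& getType() const { return Type; }

    // Position of this SRS relative to the FFP stages.
    int getExecutionOrder() const { return FFP_POST_PROCESS; }

    // Copy per-instance state from another SRS of the same type.
    void copyFrom(const SubRenderState& rhs) {}

protected:
    // The three stages of shader creation: resolve the parameters the
    // effect needs, resolve the shader library dependencies, then add
    // the actual function invocations to the programs.
    bool resolveParameters(ProgramSet* programSet) { return true; }
    bool resolveDependencies(ProgramSet* programSet) { return true; }
    bool addFunctionInvocations(ProgramSet* programSet) { return true; }
};

Ogre::String MyEffect::Type = "MyEffect";

class MyEffectFactory : public SubRenderStateFactory
{
public:
    const Ogre::String& getType() const { return MyEffect::Type; }

protected:
    SubRenderState* createInstanceImpl() { return new MyEffect; }
};
```

The factory is then registered with the ShaderGenerator so that the system can instantiate the SRS for materials and material scripts that request it.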
A couple of notes on debugging shaders coming from the RTSS:

- Set a shader cache directory, e.g. {Ogre directory}\Samples\Media\RTShaderLib\cache, so that the generated shader sources are written to disk where you can inspect them.
- Set a breakpoint at pGpuProgram.setNull(); (in createGpuProgram). If a shader fails to compile, it will usually fail there. Once that happens, you can find the shader name under the programName parameter, then look for it in the cache directory you created.

When the early graphics cards came onto the market, they contained a fixed but large set of functions with which you could influence how 3D objects were rendered. These included influencing object positions using matrices, calculating the effect of textures on a pixel, calculating the effect of lights on vertices, and so on. This set of functions and their implementation in hardware later became known as the graphics card's fixed pipeline (or Fixed Function Pipeline).
As graphics cards became more powerful and graphics applications became more complex, a need for new ways to manipulate the rendering of 3D models became apparent. This need saw the introduction of shaders.
Shaders are small custom-made programs that run directly on the graphics card. Using these programs, one could replace the calculations that were made by the fixed pipeline and add new functionality. However, there was a catch: if shaders are used on an object, the object can no longer use any of the functionality of the fixed pipeline. Any calculation that was done by the fixed pipeline needed to be recreated in the shaders. With early graphics applications this was not problematic: shaders were simple and their numbers were kept low. However, as applications grew in complexity, the need for shaders grew as well. As a programmer you were left with two choices, both bad: either create an exorbitant number of small shaders that soon became too many to effectively maintain, or create an uber shader, a huge complex shader, that soon became too complex to effectively maintain as well.
The RTSS seeks to fix those problems by automatically generating shaders based on the operations previously required from the fixed pipeline and new capabilities required by the user.
With the introduction of Direct3D 11, a new reason for having an RTSS-like system became apparent. In D3D11, support for fixed-pipeline functionality was removed, meaning you can only render objects using shaders. The RTSS is an excellent tool for this purpose.
Writing shader programs has become a very common task when developing 3D applications over the last couple of years. Most of the visual effects used by 3D applications involve shader programs. Here is just a short list of some common effects using shaders:
Writing shaders by hand is in many cases the best solution, as one has full control of the shader code and hence can make optimizations based on the nature of the target scene.
So why use a runtime shader system anyway?