OGRE  1.11.1
Object-Oriented Graphics Rendering Engine
Manual mesh creation

There are two ways to create your own mesh. The first way is to create an Ogre::Mesh instance and provide it with the vertex and index buffers directly.

The second way is the high level Ogre::ManualObject interface. Instead of filling position and color buffers, you simply call the "position" and "colour" functions.

Using Manual Object

Building one-off geometry objects manually usually requires getting down and dirty with the vertex buffer and vertex declaration API, which some people find has a steep learning curve. This class gives you a simpler interface specifically for the purpose of building a 3D object simply and quickly. Note that if you intend to instance your object you will still need to become familiar with the Mesh class.

This class draws heavily on the interface for OpenGL immediate-mode (glBegin, glVertex, glNormal etc), since this is generally well-liked by people. There are a couple of differences in the results though - internally this class still builds hardware buffers which can be re-used, so you can render the resulting object multiple times without re-issuing all the same commands again. Secondly, the rendering is not immediate, it is still queued just like all OGRE objects. This makes this object more efficient than the equivalent GL immediate-mode commands, so it's feasible to use it for large objects if you really want to.

To construct some geometry with this object:

  1. If you know roughly how many vertices (and indices, if you use them) you're going to submit, call estimateVertexCount() and estimateIndexCount(). This is not essential but will make the process more efficient by saving memory reallocations.
  2. Call begin() to begin entering data
  3. For each vertex, call position(), normal(), textureCoord(), colour() to define your vertex data. Note that each time you call position() you start a new vertex. The components used for the first vertex define the vertex format - you cannot add further components after that. For example, if you did not call normal() for the first vertex, you cannot call it for any other. You should call the same combination of methods for every vertex.
  4. If you want to define triangles (or lines/points) by indexing into the vertex list, you can call index() as many times as you need to define them. If you don't do this, the class will assume you want triangles drawn directly as defined by the vertex list, i.e. non-indexed geometry. Note that stencil shadows are only supported on indexed geometry, and that indexed geometry is a little faster; so you should try to use it.
  5. Call end() to finish entering data.
  6. Optionally repeat the begin-end cycle if you want more geometry using different rendering operation types or different materials. After calling end(), the class will organise the data for that section internally and make it ready to render with. Like any other MovableObject, you should attach the object to a SceneNode to make it visible. Other aspects like the relative render order can be controlled using standard MovableObject methods like setRenderQueueGroup.

You can also use beginUpdate() to alter the geometry later on if you wish. If you do this, you should call setDynamic(true) before your first call to begin(), and also consider using estimateVertexCount() / estimateIndexCount() if your geometry is going to be growing, to avoid buffer recreation during growth.
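The update pattern might be sketched like this (a hedged example; "man" stands for a ManualObject as in the example below, and the geometry calls are placeholders):

```cpp
man->setDynamic(true);        // must be set before the first begin()
man->estimateVertexCount(4);  // avoids buffer recreation if geometry grows
man->begin("Examples/OgreLogo", Ogre::RenderOperation::OT_TRIANGLE_LIST);
// ... define the initial vertices here ...
man->end();

// Later, rewrite the first section (index 0) in place:
man->beginUpdate(0);
// ... re-issue the position()/normal()/textureCoord() calls ...
man->end();
```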

Like all OGRE geometry, triangles should be specified in anti-clockwise winding order (whether you are defining them with just vertices, or using indices too). That is to say that the front of a face is the one where the vertices are listed in anti-clockwise order.


We will use the ManualObject to create a single textured plane. After creating the object, we start a new geometry block that will use the given material.

Ogre::ManualObject* man = mSceneMgr->createManualObject("test");
man->begin("Examples/OgreLogo", Ogre::RenderOperation::OT_TRIANGLE_LIST);

Next we specify the vertices of the plane.

man->position(-20, 20, 20);
man->normal(0, 0, 1);
man->textureCoord(0, 0);
man->position(-20, -20, 20);
man->normal(0, 0, 1);
man->textureCoord(0, 1);
man->position(20, -20, 20);
man->normal(0, 0, 1);
man->textureCoord(1, 1);
man->position(20, 20, 20);
man->normal(0, 0, 1);
man->textureCoord(1, 0);

Now we can define the face. Ogre will split the quad into triangles for us.

man->quad(0, 1, 2, 3);

Calling end() creates the actual Hardware Buffers to be used for rendering and we can attach the Object to an Ogre::SceneNode.
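These final steps might look as follows (a sketch, assuming mSceneMgr is the scene manager used above):

```cpp
man->end();  // builds the hardware buffers

// Attach the object to a scene node to make the plane visible.
Ogre::SceneNode* node = mSceneMgr->getRootSceneNode()->createChildSceneNode();
node->attachObject(man);
```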


In case you need multiple Ogre::Entities of the plane, you should call Ogre::ManualObject::convertToMesh first and then use Ogre::SceneManager::createEntity as usual.
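For example (a hedged sketch; the mesh name "planeMesh" is an arbitrary choice made here):

```cpp
// Convert once, then create as many entities from the mesh as needed.
Ogre::MeshPtr planeMesh = man->convertToMesh("planeMesh");
Ogre::Entity* ent = mSceneMgr->createEntity(planeMesh);
mSceneMgr->getRootSceneNode()->createChildSceneNode()->attachObject(ent);
```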

Using vertex and index buffers directly

This time we are going to create a plane using the lower level Ogre::HardwareBuffer primitives.

We start by creating a Mesh object. As this is a manual Mesh, we have to set the bounds of it explicitly.

using namespace Ogre;
MeshPtr mesh = MeshManager::getSingleton().createManual(yourMeshName, "General");
mesh->_setBounds(AxisAlignedBox({-100,-100,0}, {100,100,0}));

Next we define what should end up in our vertex and index buffer. We will store all data interleaved in one buffer. This typically has some advantages due to cache coherency and also is what ManualObject does automatically for us.

float vertices[32] = {
-100, -100, 0,  // pos
0, 0, 1,        // normal
0, 1,           // texcoord
100, -100, 0,
0, 0, 1,
1, 1,
100, 100, 0,
0, 0, 1,
1, 0,
-100, 100, 0,
0, 0, 1,
0, 0
};

uint16 faces[6] = {0, 1, 2,
                   0, 2, 3};

However we could also split the data into multiple buffers with lower precision to save some bytes on texture coordinates and normals.
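Such a split layout could be declared like this (a hedged, alternative sketch, not part of the running example; "decl" is a VertexDeclaration as created in the next step, and VET_SHORT2_NORM is assumed to be available for normalised 16-bit texture coordinates):

```cpp
// Alternative: full-precision positions in source 0,
// compressed texture coordinates in a separate buffer at source 1.
decl->addElement(0, 0, VET_FLOAT3, VES_POSITION);
decl->addElement(1, 0, VET_SHORT2_NORM, VES_TEXTURE_COORDINATES, 0);
// Each source then needs its own buffer bound at the matching index,
// e.g. bind->setBinding(0, posBuf); bind->setBinding(1, texBuf);
```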

To describe the vertex sources, we have to create an Ogre::VertexData object. Notably it stores how many vertices we have.

mesh->sharedVertexData = new VertexData();
mesh->sharedVertexData->vertexCount = 4;
VertexDeclaration* decl = mesh->sharedVertexData->vertexDeclaration;
VertexBufferBinding* bind = mesh->sharedVertexData->vertexBufferBinding;

The actual description of our vertex buffer however is stored inside the Ogre::VertexDeclaration.

size_t offset = 0;
decl->addElement(0, offset, VET_FLOAT3, VES_POSITION);
offset += VertexElement::getTypeSize(VET_FLOAT3);
decl->addElement(0, offset, VET_FLOAT3, VES_NORMAL);
offset += VertexElement::getTypeSize(VET_FLOAT3);
decl->addElement(0, offset, VET_FLOAT2, VES_TEXTURE_COORDINATES, 0);
offset += VertexElement::getTypeSize(VET_FLOAT2);

Now we can continue to create the Hardware Buffers and upload our data.

HardwareVertexBufferSharedPtr vbuf = HardwareBufferManager::getSingleton().createVertexBuffer(
offset, 4, HardwareBuffer::HBU_STATIC_WRITE_ONLY);
vbuf->writeData(0, vbuf->getSizeInBytes(), vertices, true);
bind->setBinding(0, vbuf);
HardwareIndexBufferSharedPtr ibuf = HardwareBufferManager::getSingleton().createIndexBuffer(
HardwareIndexBuffer::IT_16BIT, 6, HardwareBuffer::HBU_STATIC_WRITE_ONLY);
ibuf->writeData(0, ibuf->getSizeInBytes(), faces, true);

Note how we used the constant 0 as the source index to link the Ogre::HardwareVertexBuffer to the Ogre::VertexDeclaration. This allows the underlying RenderSystem to swap vertex buffers without changing the VertexDeclaration, i.e. to render different Meshes that share the same vertex layout without changing state.

Finally we create the Ogre::SubMesh that will ultimately be rendered.

SubMesh* sub = mesh->createSubMesh();
sub->useSharedVertices = true;
sub->indexData->indexBuffer = ibuf;
sub->indexData->indexCount = 6;
sub->indexData->indexStart = 0;

Note that while our VertexBuffer is shared, the IndexBuffer is not. This allows rendering different faces of the same object using different Materials: each SubMesh links its faces (IndexBuffer) to the corresponding material.
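To render the result, one might assign a material to the SubMesh and create an Entity from the mesh (a hedged sketch; yourMeshName is the name used when creating the mesh above):

```cpp
// Link the SubMesh's faces to a material and instantiate the mesh.
sub->setMaterialName("Examples/OgreLogo");
Ogre::Entity* ent = mSceneMgr->createEntity(yourMeshName);
mSceneMgr->getRootSceneNode()->createChildSceneNode()->attachObject(ent);
```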