OGRE  2.2.4
Object-Oriented Graphics Rendering Engine
Ogre::BufferPacked Class Reference (abstract)

#include <OgreBufferPacked.h>

Inheritance diagram for Ogre::BufferPacked (diagram omitted).

Public Member Functions

 BufferPacked (size_t internalBufferStartBytes, size_t numElements, uint32 bytesPerElement, uint32 numElementsPadding, BufferType bufferType, void *initialData, bool keepAsShadow, VaoManager *vaoManager, BufferInterface *bufferInterface)
 Generic constructor. More...
 
virtual ~BufferPacked ()
 
size_t _getFinalBufferStart (void) const
 
size_t _getInternalBufferStart (void) const
 
size_t _getInternalNumElements (void) const
 
size_t _getInternalTotalSizeBytes (void) const
 
void _setBufferInterface (BufferInterface *bufferInterface)
 For internal use. More...
 
void _setShadowCopy (void *copy)
 This will not delete the existing shadow copy, so it can be reused for other purposes; if it is no longer needed, call OGRE_FREE_SIMD( m->getShadowCopy(), MEMCATEGORY_GEOMETRY ) before calling this function. More...
 
void advanceFrame (void)
 
void copyTo (BufferPacked *dstBuffer, size_t dstElemStart=0, size_t srcElemStart=0, size_t srcNumElems=std::numeric_limits< size_t >::max())
 Copies the contents of this buffer to another, using GPU -> GPU transfers. More...
 
BufferInterface * getBufferInterface (void) const
 
virtual BufferPackedTypes getBufferPackedType (void) const =0
 Useful to query which one is the derived class. More...
 
BufferType getBufferType (void) const
 
size_t getBytesPerElement (void) const
 
MappingState getMappingState (void) const
 Returns the mapping state. More...
 
size_t getNumElements (void) const
 
const void * getShadowCopy (void) const
 
size_t getTotalSizeBytes (void) const
 
bool isCurrentlyMapped (void) const
 Returns whether the buffer is currently mapped. More...
 
void *RESTRICT_ALIAS_RETURN map (size_t elementStart, size_t elementCount, bool bAdvanceFrame=true)
 Maps the specified region to a pointer the CPU can access. More...
 
void operator delete (void *ptr)
 
void operator delete (void *ptr, void *)
 
void operator delete (void *ptr, const char *, int, const char *)
 
void operator delete[] (void *ptr)
 
void operator delete[] (void *ptr, const char *, int, const char *)
 
void * operator new (size_t sz, const char *file, int line, const char *func)
 operator new, with debug line info More...
 
void * operator new (size_t sz)
 
void * operator new (size_t sz, void *ptr)
 placement operator new More...
 
void * operator new[] (size_t sz, const char *file, int line, const char *func)
 array operator new, with debug line info More...
 
void * operator new[] (size_t sz)
 
AsyncTicketPtr readRequest (size_t elementStart, size_t elementCount)
 Async data read request. More...
 
void regressFrame (void)
 Performs the opposite of advanceFrame. More...
 
void unmap (UnmapOptions unmapOption, size_t flushStartElem=0, size_t flushSizeElem=0)
 Unmaps or flushes the region mapped with map. More...
 
virtual void upload (const void *data, size_t elementStart, size_t elementCount)
 Sends the provided data to the GPU. More...
 

Friends

class BufferInterface
 
class D3D11BufferInterface
 
class D3D11BufferInterfaceBase
 
class D3D11CompatBufferInterface
 
class GL3PlusBufferInterface
 
class GLES2BufferInterface
 
class MetalBufferInterface
 
class NULLBufferInterface
 

Constructor & Destructor Documentation

◆ BufferPacked()

Ogre::BufferPacked::BufferPacked ( size_t  internalBufferStartBytes,
size_t  numElements,
uint32  bytesPerElement,
uint32  numElementsPadding,
BufferType  bufferType,
void *  initialData,
bool  keepAsShadow,
VaoManager *  vaoManager,
BufferInterface *  bufferInterface 
)

Generic constructor.

Parameters
initialData    Initial data to populate. If bufferType == BT_IMMUTABLE, this can't be null.
keepAsShadow    Keeps "initialData" as a shadow copy for reading from the CPU without querying the GPU (useful for reconstructing buffers on device/context loss, or for reading the data efficiently without streaming it back from the GPU).

If keepAsShadow is false, caller is responsible for freeing the data

If keepAsShadow is true, we're responsible for freeing the pointer. We will free it using OGRE_FREE_SIMD( MEMCATEGORY_GEOMETRY ), in which case the pointer must have been allocated using OGRE_MALLOC_SIMD( MEMCATEGORY_GEOMETRY ).

If the constructor throws, then data will NOT be freed, and caller will have to do it.

See also
FreeOnDestructor to help you with exceptions and correctly freeing the data.

keepAsShadow must be false if bufferType >= BT_DYNAMIC.
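
The ownership rules above are easy to get wrong, so here is a minimal sketch of the keepAsShadow contract. The createSomeBuffer factory is hypothetical (buffers are normally obtained from a manager rather than constructed directly); only the OGRE_MALLOC_SIMD / OGRE_FREE_SIMD calls and the ownership rules come from this page.

    // Hedged sketch of the keepAsShadow ownership contract.
    const size_t numElements = 1024;
    const Ogre::uint32 bytesPerElement = sizeof( float ) * 4u;

    void *data = OGRE_MALLOC_SIMD( numElements * bytesPerElement,
                                   Ogre::MEMCATEGORY_GEOMETRY );
    // ... fill 'data' with your geometry here ...

    try
    {
        // keepAsShadow = true: the buffer takes ownership and will free 'data'
        // with OGRE_FREE_SIMD( MEMCATEGORY_GEOMETRY ) when it is destroyed.
        // createSomeBuffer is a hypothetical stand-in for your creation call.
        Ogre::BufferPacked *buffer = createSomeBuffer( data, /*keepAsShadow=*/true );
        (void)buffer;
    }
    catch( ... )
    {
        // If construction throws, Ogre does NOT free the data; we still own it.
        // (FreeOnDestructor can automate this cleanup.)
        OGRE_FREE_SIMD( data, Ogre::MEMCATEGORY_GEOMETRY );
        throw;
    }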

◆ ~BufferPacked()

virtual Ogre::BufferPacked::~BufferPacked ( )
virtual

Member Function Documentation

◆ _getFinalBufferStart()

size_t Ogre::BufferPacked::_getFinalBufferStart ( void  ) const
inline

◆ _getInternalBufferStart()

size_t Ogre::BufferPacked::_getInternalBufferStart ( void  ) const
inline

◆ _getInternalNumElements()

size_t Ogre::BufferPacked::_getInternalNumElements ( void  ) const
inline

◆ _getInternalTotalSizeBytes()

size_t Ogre::BufferPacked::_getInternalTotalSizeBytes ( void  ) const
inline

◆ _setBufferInterface()

void Ogre::BufferPacked::_setBufferInterface ( BufferInterface bufferInterface)

For internal use.

◆ _setShadowCopy()

void Ogre::BufferPacked::_setShadowCopy ( void *  copy)

This will not delete the existing shadow copy, so it can be reused for other purposes; if it is no longer needed, call OGRE_FREE_SIMD( m->getShadowCopy(), MEMCATEGORY_GEOMETRY ) before calling this function.

This will also not automatically upload the shadow data to the GPU. The user must call upload or use a staging buffer themselves to achieve this.
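
A hedged sketch of that sequence, assuming 'buffer' is a BT_DEFAULT Ogre::BufferPacked* and 'newShadow' was allocated with OGRE_MALLOC_SIMD( MEMCATEGORY_GEOMETRY ) and covers the whole buffer:

    // Free the previous shadow copy first if it is no longer needed;
    // _setShadowCopy() will not do it for us.
    void *oldShadow = const_cast<void *>( buffer->getShadowCopy() );
    if( oldShadow )
        OGRE_FREE_SIMD( oldShadow, Ogre::MEMCATEGORY_GEOMETRY );

    buffer->_setShadowCopy( newShadow );

    // _setShadowCopy() does not touch the GPU copy either; push it explicitly
    // (upload is assumed to be legal here because the buffer is BT_DEFAULT).
    buffer->upload( newShadow, 0, buffer->getNumElements() );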

◆ advanceFrame()

void Ogre::BufferPacked::advanceFrame ( void  )
See also
map. Do NOT call this function more than once per frame, or if you've already called map with bAdvanceFrame = true.

◆ copyTo()

void Ogre::BufferPacked::copyTo ( BufferPacked dstBuffer,
size_t  dstElemStart = 0,
size_t  srcElemStart = 0,
size_t  srcNumElems = std::numeric_limits< size_t >::max() 
)

Copies the contents of this buffer to another, using GPU -> GPU transfers.

In simple terms it is similar to doing: memcpy( dstBuffer + dstElemStart, this + srcElemStart, srcNumElems );

Remarks
When src and dst have different values for BufferPacked::getBytesPerElement(), srcNumElems * this->getBytesPerElement() must be divisible by dstBuffer->getBytesPerElement().

If dst has a shadow buffer, then src must have it too.

Parameters
dstBuffer    Buffer to copy to. Must be of type BT_DEFAULT.
dstElemStart    The offset into dstBuffer. It must be in the unit of measure of dstBuffer, e.g. the actual offset in bytes is dstElemStart * dstBuffer->getBytesPerElement().
srcElemStart    The offset in this buffer to start from.
srcNumElems    The number of elements to copy, in units of measure of the source buffer. When this value is out of bounds, it gets clamped. See remarks.
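
For illustration, a minimal sketch assuming srcBuffer and dstBuffer are Ogre::BufferPacked* with identical bytes-per-element and dstBuffer was created as BT_DEFAULT:

    // Copy the first 128 elements of srcBuffer into dstBuffer starting at
    // destination element 64 (GPU -> GPU, no CPU round trip).
    srcBuffer->copyTo( dstBuffer, /*dstElemStart=*/64, /*srcElemStart=*/0,
                       /*srcNumElems=*/128 );

    // With the defaults, the copy length (std::numeric_limits<size_t>::max())
    // is clamped to whatever actually fits.
    srcBuffer->copyTo( dstBuffer );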

◆ getBufferInterface()

BufferInterface* Ogre::BufferPacked::getBufferInterface ( void  ) const
inline

◆ getBufferPackedType()

virtual BufferPackedTypes Ogre::BufferPacked::getBufferPackedType ( void  ) const
pure virtual

◆ getBufferType()

BufferType Ogre::BufferPacked::getBufferType ( void  ) const
inline

◆ getBytesPerElement()

size_t Ogre::BufferPacked::getBytesPerElement ( void  ) const
inline

◆ getMappingState()

MappingState Ogre::BufferPacked::getMappingState ( void  ) const
inline

Returns the mapping state.

Note that if you call map with MS_PERSISTENT_INCOHERENT or MS_PERSISTENT_COHERENT and then call unmap( UO_KEEP_PERSISTENT ), the returned value will still be MS_PERSISTENT_INCOHERENT/_COHERENT when persistent mapping is supported. This differs from isCurrentlyMapped.

◆ getNumElements()

size_t Ogre::BufferPacked::getNumElements ( void  ) const
inline

◆ getShadowCopy()

const void* Ogre::BufferPacked::getShadowCopy ( void  ) const
inline

◆ getTotalSizeBytes()

size_t Ogre::BufferPacked::getTotalSizeBytes ( void  ) const
inline

◆ isCurrentlyMapped()

bool Ogre::BufferPacked::isCurrentlyMapped ( void  ) const

Returns whether the buffer is currently mapped.

If you've persistently mapped the buffer and then called unmap( UO_KEEP_PERSISTENT ), this function will return false, which differs from getMappingState's behavior.
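
A short sketch of that difference, assuming 'buffer' is a persistently mappable dynamic buffer (whether the state ends up coherent or incoherent depends on how the buffer was created):

    buffer->map( 0, buffer->getNumElements() );
    // ... write data ...
    buffer->unmap( Ogre::UO_KEEP_PERSISTENT );

    const bool mappedNow = buffer->isCurrentlyMapped();         // false
    const Ogre::MappingState state = buffer->getMappingState(); // still MS_PERSISTENT_*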

◆ map()

void* RESTRICT_ALIAS_RETURN Ogre::BufferPacked::map ( size_t  elementStart,
size_t  elementCount,
bool  bAdvanceFrame = true 
)

Maps the specified region to a pointer the CPU can access.

Only dynamic buffers can use this function. The region [elementStart; elementStart + elementCount) will be mapped.

Remarks
You can only map once per frame, regardless of parameters (except for advanceFrame). map( 0, 1 ) followed by map( 1, 1 ) is invalid. If you plan on modifying elements 0 and 1, you should call map( 0, 2 ). Note that even if you use persistent mapping, you still need to call unmap.
See also
unmap.
Parameters
elementStart    Start of the region to be mapped, in elements. Normally you want this to be 0.
elementCount    Length of the region to map, in elements. Use getNumElements to map the whole range. Can't be 0.
bAdvanceFrame    When true, the buffer will be usable after unmapping it (or earlier if persistently mapped). However, you won't be able to call map() again until the next frame. Calling this with false allows you to call map multiple times; however, after calling unmap, you must call advanceFrame. THIS IS ONLY FOR VERY ADVANCED USERS.
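
A minimal sketch of the common per-frame usage, assuming 'dynamicBuffer' is a dynamic Ogre::BufferPacked* whose elements are four floats each (the element layout is an assumption for illustration only):

    const size_t numElems = dynamicBuffer->getNumElements();

    // Map the whole buffer once this frame (bAdvanceFrame defaults to true).
    float *dst = reinterpret_cast<float *>( dynamicBuffer->map( 0, numElems ) );
    for( size_t i = 0; i < numElems; ++i )
    {
        dst[i * 4 + 0] = 0.0f;
        dst[i * 4 + 1] = 1.0f;
        dst[i * 4 + 2] = 0.0f;
        dst[i * 4 + 3] = 1.0f;
    }

    // UO_UNMAP_ALL fully unmaps; with the default flush arguments the whole
    // mapped region is flushed.
    dynamicBuffer->unmap( Ogre::UO_UNMAP_ALL );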

◆ operator delete() [1/3]

template<class Alloc >
void Ogre::AllocatedObject< Alloc >::operator delete ( void *  ptr)
inline, inherited

◆ operator delete() [2/3]

template<class Alloc >
void Ogre::AllocatedObject< Alloc >::operator delete ( void *  ptr,
void *   
)
inline, inherited

◆ operator delete() [3/3]

template<class Alloc >
void Ogre::AllocatedObject< Alloc >::operator delete ( void *  ptr,
const char *  ,
int  ,
const char *   
)
inline, inherited

◆ operator delete[]() [1/2]

template<class Alloc >
void Ogre::AllocatedObject< Alloc >::operator delete[] ( void *  ptr)
inline, inherited

◆ operator delete[]() [2/2]

template<class Alloc >
void Ogre::AllocatedObject< Alloc >::operator delete[] ( void *  ptr,
const char *  ,
int  ,
const char *   
)
inline, inherited

◆ operator new() [1/3]

template<class Alloc >
void* Ogre::AllocatedObject< Alloc >::operator new ( size_t  sz,
const char *  file,
int  line,
const char *  func 
)
inline, inherited

operator new, with debug line info

◆ operator new() [2/3]

template<class Alloc >
void* Ogre::AllocatedObject< Alloc >::operator new ( size_t  sz)
inline, inherited

◆ operator new() [3/3]

template<class Alloc >
void* Ogre::AllocatedObject< Alloc >::operator new ( size_t  sz,
void *  ptr 
)
inline, inherited

placement operator new

◆ operator new[]() [1/2]

template<class Alloc >
void* Ogre::AllocatedObject< Alloc >::operator new[] ( size_t  sz,
const char *  file,
int  line,
const char *  func 
)
inline, inherited

array operator new, with debug line info

◆ operator new[]() [2/2]

template<class Alloc >
void* Ogre::AllocatedObject< Alloc >::operator new[] ( size_t  sz)
inline, inherited

◆ readRequest()

AsyncTicketPtr Ogre::BufferPacked::readRequest ( size_t  elementStart,
size_t  elementCount 
)

Async data read request.

A ticket will be returned. Once the async transfer finishes, you can use the ticket to read the data from the CPU. See AsyncTicket.
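
A hedged sketch of reading a buffer back to the CPU; the map()/unmap() calls on the ticket are assumptions about the AsyncTicket interface (see AsyncTicket for the authoritative API):

    Ogre::AsyncTicketPtr ticket = buffer->readRequest( 0, buffer->getNumElements() );

    // ... let rendering continue so the GPU -> CPU transfer can complete ...

    const void *downloaded = ticket->map();   // assumed AsyncTicket accessor
    // 'downloaded' holds buffer->getTotalSizeBytes() bytes of buffer contents.
    ticket->unmap();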

◆ regressFrame()

void Ogre::BufferPacked::regressFrame ( void  )

Performs the opposite of advanceFrame.

See also
advanceFrame. Only call this after having called advanceFrame, i.e. restore the buffer to the state it was in before calling advanceFrame.

◆ unmap()

void Ogre::BufferPacked::unmap ( UnmapOptions  unmapOption,
size_t  flushStartElem = 0,
size_t  flushSizeElem = 0 
)

Unmaps or flushes the region mapped with map.

See also
map. Alternatively, you can flush a smaller region (i.e. you didn't know which regions you were going to update when mapping, but now that you're done, you do). The region being flushed is [flushStart; flushStart + flushSize).
Parameters
unmapOption    When using persistent mapping, UO_KEEP_PERSISTENT will keep the map alive, but you will have to call map again to use it. This requirement allows Ogre to:
  1. Synchronize if needed (avoid mapping a region that is still in use)
  2. Emulate persistent mapping on hardware/drivers that don't support it.
flushStartElem    In elements, 0-based index (relative to the mapped region) of where to start flushing from. Default is 0.
flushSizeElem    The length of the flushing region, which can't be bigger than the 'elementCount' passed to map. When this value is 0, we flush until the end of the buffer, starting from flushStartElem.
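
For example, a short sketch assuming the buffer was mapped with map( 0, 256 ) and only elements [32; 96) were actually written:

    // Flush indices are relative to the mapped region, not to the whole buffer.
    buffer->unmap( Ogre::UO_UNMAP_ALL, /*flushStartElem=*/32, /*flushSizeElem=*/64 );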

◆ upload()

virtual void Ogre::BufferPacked::upload ( const void *  data,
size_t  elementStart,
size_t  elementCount 
)
virtual

Sends the provided data to the GPU.

Parameters
data    The data to transfer to the GPU. Caller is responsible for freeing the pointer. "data" starts at offset zero, i.e. dst[elementStart * mBytesPerElement] = data[0].
elementStart    The start region, usually zero.
elementCount    Size, in number of elements, of data. Must be less than getNumElements() - elementStart.
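
A minimal sketch, assuming 'buffer' is a BT_DEFAULT Ogre::BufferPacked* (dynamic buffers should use map instead) and that <vector> is included:

    // Replace elements [16; 32). The vector is caller-owned; upload() copies
    // from it and does not take ownership.
    const size_t count = 16;
    std::vector<Ogre::uint8> cpuData( count * buffer->getBytesPerElement(), 0 );
    buffer->upload( cpuData.data(), /*elementStart=*/16, count );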

Friends And Related Function Documentation

◆ BufferInterface

friend class BufferInterface
friend

◆ D3D11BufferInterface

friend class D3D11BufferInterface
friend

◆ D3D11BufferInterfaceBase

friend class D3D11BufferInterfaceBase
friend

◆ D3D11CompatBufferInterface

friend class D3D11CompatBufferInterface
friend

◆ GL3PlusBufferInterface

friend class GL3PlusBufferInterface
friend

◆ GLES2BufferInterface

friend class GLES2BufferInterface
friend

◆ MetalBufferInterface

friend class MetalBufferInterface
friend

◆ NULLBufferInterface

friend class NULLBufferInterface
friend

The documentation for this class was generated from the following file:
OgreBufferPacked.h