Sunday, October 2, 2011

Writing a 2D Library On Top of OpenGL 3.x Practices

Having straightened out MyGL this weekend to behave far more transparently with the underlying OpenGL state machine (effectively creating an object-oriented OpenGL 3.x filter), I now have to take on the challenge of actually writing easy-to-use code bases on top of it. The first issue I'm going to tackle is two-dimensional rendering. I don't think OpenGL gives 2D enough love by default. Look at an API like DirectX: it has a custom sprite rendering component that provides accelerated blitting operations right alongside traditional 3D rendering techniques. Although it's closed-source, I have to imagine that under the hood this sprite rendering system uses much the same hardware techniques as the rest of the API. There's no reason this can't be done with OpenGL, except that OpenGL chooses not to provide anything like it. My own opinion is that OpenGL is trying to be as low-level and "simple" as it can be, and as a result the extra layer of indirection from the hardware that a sprite-rendering system would introduce doesn't quite feel right.

But seeing as there's nothing stopping it from being done, and it appears to be such a useful system for 2D rendering, it sounds like a prime candidate for MyGL integration! As I've said before, MyGL currently represents an object-oriented abstraction of the state machine that OpenGL runs on top of the video drivers. It exposes a good (and growing!) portion of the API's non-deprecated functions while preserving the native control-freak feel as much as I can manage. Even with this extra level of indirection, rendering continues to be lightning fast thanks to the heavy emphasis I'm placing on shaders and appropriate caching. The new 2D API I'm planning will use the classes and functions provided by MyGL to offer a more user-friendly approach to orthogonal rendering. Some of the features I have planned are:
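To give a feel for the "state machine caching" idea, here's a minimal sketch of how an object-oriented wrapper can skip redundant driver calls. This is purely illustrative and not MyGL's actual API; the class name and members are invented, and the real GL call is left as a comment (with a counter standing in for it) so the caching behavior can be seen without a live GL context.

```cpp
#include <cassert>

// Hypothetical sketch (not MyGL's real interface): mirror one piece of GL
// state on the CPU and only touch the driver when the state actually changes.
class TextureBinder {
public:
    void bind(unsigned id) {
        if (id == bound_) return;   // cache hit: skip the redundant driver call
        // glBindTexture(GL_TEXTURE_2D, id);  // the actual GL call would go here
        bound_ = id;
        ++driverCalls_;
    }
    int driverCalls() const { return driverCalls_; }

private:
    unsigned bound_ = 0;            // GL's default texture binding is object 0
    int driverCalls_ = 0;           // stand-in for real glBindTexture traffic
};
```

Binding the same texture twice in a row results in only one driver call, which is the kind of transparent win a wrapper like this can provide on top of the raw state machine.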


  • Hardware Memory-Backed Vertex Data. All 2D primitives, even simple triangles and quads, will use hardware vertex/index buffers, sharing them whenever possible (a unit quad can live in video memory while a per-object scaling matrix is uploaded for each draw).
  • Dynamic Polygon Baking. Complex primitives can be created by pushing vertex data onto a stack and then issuing a one-time "bake" command that constructs the vertex/index buffer(s) for the data, allowing complex shapes to be built while leaving room for optimization.
  • Scene Graph / Management. 2D primitives are inserted into a node graph and/or quadtree representing the orthogonal scene. This graph can be queried quickly for things like clipping/occlusion, depth sorting, and "picking".

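The "push then bake" workflow from the second bullet could look something like the sketch below. All names here are my own illustration, not the planned library's interface: vertices are accumulated on the CPU, and a one-time bake() deduplicates them into packed vertex/index arrays, which a real implementation would then upload once with glBufferData (left as a comment since there's no GL context here).

```cpp
#include <map>
#include <utility>
#include <vector>

// Hypothetical sketch of dynamic polygon baking (illustrative API only).
// push() accumulates 2D vertices; bake() deduplicates them into a tight
// vertex array plus an index array -- exactly the data a one-time
// glBufferData upload would consume.
struct PolygonBaker {
    void push(float x, float y) { pending_.emplace_back(x, y); }

    void bake() {
        std::map<std::pair<float, float>, unsigned> seen;
        for (const auto& v : pending_) {
            auto it = seen.find(v);
            if (it == seen.end()) {   // first occurrence: emit a new vertex
                it = seen.emplace(v, (unsigned)(vertices_.size() / 2)).first;
                vertices_.push_back(v.first);
                vertices_.push_back(v.second);
            }
            indices_.push_back(it->second);  // repeats just reuse an index
        }
        pending_.clear();
        // glBufferData(GL_ARRAY_BUFFER, ...) and the element buffer upload
        // would happen here, once, after which the shape lives in video memory.
    }

    std::vector<std::pair<float, float>> pending_;  // CPU-side stack
    std::vector<float> vertices_;                   // interleaved x, y
    std::vector<unsigned> indices_;                 // element buffer contents
};
```

Baking a quad as two triangles (six pushed vertices, two of them shared) yields four unique vertices and six indices, which is the buffer-sharing win the first bullet is after.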
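For the quadtree half of the third bullet, a minimal sketch of spatial "picking" might look like this (again, invented names, not the planned design): objects are inserted by bounding box, and a point query only descends into the branches that contain the point, so most of the scene is never tested.

```cpp
#include <memory>
#include <utility>
#include <vector>

// Hypothetical quadtree sketch for 2D scene queries (illustrative only).
// Boxes that straddle a split are kept at the parent node; pick() walks
// only the quadrants containing the query point.
struct Box { float x, y, w, h; };

inline bool contains(const Box& b, float px, float py) {
    return px >= b.x && px < b.x + b.w && py >= b.y && py < b.y + b.h;
}

struct Quadtree {
    Quadtree(Box bounds, int depth = 0) : bounds_(bounds), depth_(depth) {}

    void insert(int id, const Box& box) {
        if (depth_ < kMaxDepth) {
            for (Quadtree* child : ensureChildren())
                if (fits(child->bounds_, box)) { child->insert(id, box); return; }
        }
        items_.push_back({id, box});  // straddles children: store at this node
    }

    void pick(float px, float py, std::vector<int>& out) const {
        for (const auto& it : items_)
            if (contains(it.second, px, py)) out.push_back(it.first);
        for (const auto& c : children_)
            if (c && contains(c->bounds_, px, py)) c->pick(px, py, out);
    }

private:
    enum { kMaxDepth = 4 };
    static bool fits(const Box& outer, const Box& inner) {
        return inner.x >= outer.x && inner.y >= outer.y &&
               inner.x + inner.w <= outer.x + outer.w &&
               inner.y + inner.h <= outer.y + outer.h;
    }
    std::vector<Quadtree*> ensureChildren() {
        if (!children_[0]) {  // lazily split into four equal quadrants
            float hw = bounds_.w / 2, hh = bounds_.h / 2;
            children_[0].reset(new Quadtree({bounds_.x,      bounds_.y,      hw, hh}, depth_ + 1));
            children_[1].reset(new Quadtree({bounds_.x + hw, bounds_.y,      hw, hh}, depth_ + 1));
            children_[2].reset(new Quadtree({bounds_.x,      bounds_.y + hh, hw, hh}, depth_ + 1));
            children_[3].reset(new Quadtree({bounds_.x + hw, bounds_.y + hh, hw, hh}, depth_ + 1));
        }
        return {children_[0].get(), children_[1].get(),
                children_[2].get(), children_[3].get()};
    }

    Box bounds_;
    int depth_;
    std::vector<std::pair<int, Box>> items_;
    std::unique_ptr<Quadtree> children_[4];
};
```

The same traversal shape would serve clipping/occlusion queries (test a rectangle instead of a point) and gives depth sorting a much smaller candidate set to work with.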
These are just some of the bigger ideas I have, and I might add to the list above as I go. I'm a big fan of iterative design and development, so I expect these plans to change along the way.

Update (January 24, 2012): School has consumed much of my attention recently and I've all but stopped working on this for the time being. For my own records, however, I need to state that this system was finished, and I was somewhat satisfied with how it turned out. I discovered in the process that I knew nothing about scene management and that there were many more factors to consider than what, at face value, appeared to be a simple concept. I'll revisit that in the future when time allows, but for now I've got to focus on finishing my degree.
