StarSpace! Theory of Operation Part 1: Rendering

This is part 1 of a series of articles where I will explain how I plan for my game to work. As this is my first game, this will all likely change significantly before the game is finished.

StarSpace! will use recursive raytracing, whereas most games use rasterization.


Recursive raytracing and rasterization are both rendering techniques. Raytracing has several significant advantages over rasterization: better, more realistic output; a lighter artistic load; simpler implementation; and better scaling, both across more hardware and with more in-game polygons, shadows, etc. However, it has one big disadvantage against rasterization: it is much slower. Sort of. It's slower right now.


Rasterization works by projecting the corners of each polygon in a scene onto pixels on the screen. Using hidden-surface removal, pixel shaders, etc., the rest of the image gets filled out. Raytracing works somewhat in the opposite direction, in that it starts at the camera and works back. Raytracing sends a ray out through each pixel and checks whether any objects intersect the ray. If an object intersects the ray, the color at that point of the object's texture is returned. A recursive raytracer (which is what most are) will then send more rays from that point for reflection, refraction, and shadows. These rays are sent at the angle of reflection, the angle of refraction, and toward the light sources, respectively.

Thus, raytracing is simpler, scales better (because each pixel is rendered completely independently of every other pixel), and is more realistic, as it better simulates the behavior of light (though in reverse; this way you only render what you need to, rather than everything lit). It is slower because more code needs to run for every pixel. Even though a raytracer actually uses fewer lines of code than a rasterizer, almost all of them need to be run for every pixel.


Rasterization scales linearly with more polygons. Raytracing, with an acceleration structure such as a bounding volume hierarchy, scales logarithmically. So, if you get enough polygons, raytracing will actually be faster, and the gap will only widen from there. Plus, rasterization requires more and more hacks and add-ons to get more realistic, making it even slower. Raytracing starts out extremely realistic, meaning few hacks, if any, are required. Especially if you use pathtracing, an even more realistic variant of raytracing.



While we are still fairly low on the logarithmic curve, we are getting high enough that real-time raytracing should be possible. To start, though, I will be creating a non-recursive raytracer: one that does not render shadows, reflections, etc. This is for two reasons. Reason one: I am still acquiring and setting up hardware powerful enough to handle all the extra rays. Reason two: I don't have much experience in the area, and it will be easier to start with the simplest possible implementation, then add more features later.


I am writing my raytracer in C; more on that in a bit. Once I have it working well, and I have all my hardware set up, I will rewrite it in OpenCL C so it can run on powerful GPUs for better performance.


One issue I am running into is that most tutorials and example raytracers I have found are written in C++. A somewhat small hurdle, but something I have to deal with nonetheless. Also, while I will be rendering the 3D worlds into displayable "images" in C, it seems I will still need OpenGL, or another such graphics API, to actually write to the display in real time. Meaning I will also need to learn OpenGL. And most OpenGL tutorials and instructional materials are also written in C++.
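
In the meantime, before real-time display via OpenGL is working, one common stopgap is to dump each rendered frame to a binary PPM image, a format simple enough to write with nothing but the C standard library and viewable in most image viewers. A minimal sketch (the function name is mine, not from any particular codebase):

```c
#include <stdio.h>

/* Write a w x h RGB framebuffer (3 bytes per pixel, row-major) to a
 * binary PPM (P6) file. Returns 0 on success, -1 on failure. */
int write_ppm(const char *path, const unsigned char *rgb, int w, int h)
{
    FILE *f = fopen(path, "wb");
    if (!f) return -1;
    fprintf(f, "P6\n%d %d\n255\n", w, h);   /* PPM header */
    size_t n = (size_t)w * h * 3;
    int ok = fwrite(rgb, 1, n, f) == (size_t)n;
    fclose(f);
    return ok ? 0 : -1;
}
```

This lets the raytracer itself be developed and debugged entirely in plain C, deferring the OpenGL learning curve until the renderer produces correct images.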



And that's where I am now on the rendering engine. Any help or suggestions are appreciated!



OpenComputeDesign

opencomputedesign@linuxrocks.online