[Translation] A brief history of 3D texturing in games

In this post I'll walk through the history of texturing in 3D video games. We have come a long way since real-time 3D first appeared on home consoles, but some of the practices used to create game textures today date back to those early years.

image

First, let's cover a little background: the difference between real-time rendering and pre-rendered scenes. Most 3D games use real-time rendering, in which the machine draws each image on the fly. A pre-rendered scene, by contrast, can spend far more computational power on a single frame.

image

Because of this, we get different levels of quality. Games need real-time rendering for interactivity, while static elements such as cinematic cutscenes or fixed backgrounds can be pre-rendered. The difference in results was huge. Here is a pre-rendered background with a real-time character from a 1999 game:

image
image

Pre-rendering made it possible to build scenes that were enormously expensive to render: a single frame could take hours or even days. For a still image or a film that is perfectly normal, but games have to render 30-60 frames every second. Early 3D games therefore had to make major simplifications.

image

On 16-bit consoles, Star Fox was one of the first examples of real-time 3D. Alongside it was Donkey Kong Country, whose pre-rendered 3D graphics were converted into sprites (with heavily simplified color palettes). For a long time, nothing rendered in real time could look as good.


When we moved to true 3D consoles (such as the N64 and PS1), we finally saw what real-time rendering could not do. You could not use light sources to cast shadows or light the scene dynamically, materials did not react to light, there was no bump mapping, and only low-resolution geometry and textures were available. How did artists cope with this?

image

For example, lighting information (shadows, highlights, depth) was either painted into the textures, painted onto each vertex of the geometry, or both. Character shadows were usually simple textures that followed the character; casting correct shadows was impossible.


The simplest shading could be achieved on models, but it usually lacked correct lighting information. Games such as Ocarina of Time and Crash Bandicoot relied heavily on lighting information baked into textures and painted onto the vertices of the geometry. This made it possible to brighten or darken different areas, or give them a particular tint.
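The vertex-painting trick described above can be sketched in a few lines. This is a hypothetical illustration, not any console's actual pipeline: the final pixel color is simply the texture color multiplied by hand-painted vertex colors interpolated across the triangle (Gouraud-style), with no real light sources involved. All names here are made up for the example.

```python
# Sketch of PS1/N64-era "baked" vertex lighting: texture color modulated
# by painted vertex colors interpolated across the triangle. No dynamic
# lights exist; all lighting was authored by hand into the vertex colors.

def shade_pixel(tex_color, vertex_colors, barycentric):
    """Modulate a texel by Gouraud-interpolated vertex colors.

    tex_color     -- (r, g, b) sampled from the texture, in 0..1
    vertex_colors -- painted (r, g, b) at the triangle's three vertices
    barycentric   -- (w0, w1, w2) weights of the pixel inside the triangle
    """
    # Interpolate the painted lighting across the triangle.
    lit = tuple(
        sum(w * c[i] for w, c in zip(barycentric, vertex_colors))
        for i in range(3)
    )
    # Multiply: darker vertex colors darken the texel, tinted ones tint it.
    return tuple(t * l for t, l in zip(tex_color, lit))

# A texel right at a vertex painted dark blue comes out darker and bluish.
color = shade_pixel((0.8, 0.7, 0.6),
                    [(0.2, 0.2, 0.5), (1.0, 1.0, 1.0), (1.0, 1.0, 1.0)],
                    (1.0, 0.0, 0.0))
```

The same multiply trick worked for fake shadows, fake ambient tints, and fake depth cues, which is why so much of that era's "lighting" was really just artist-authored color.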

image

image

In those days, overcoming such limitations took a great deal of creative work. Painting or baking lighting information into textures is still used today to varying degrees, but as real-time rendering improves, the need for such techniques diminishes.

image
The next generation of hardware still had many problems to solve. The next consoles, the PS2, Xbox, and GameCube, tackled some of them. The first noticeable leap in quality came from higher-resolution textures and improved lighting.

image

One of the most important games in this regard was Silent Hill 2. Its most serious breakthrough in 2001 was the use of real-time shadow casting. This meant some of the lighting information baked into textures could be eliminated, though baked lighting remained in heavy use for most of this generation.


The decisive factor for this and other games of that era was resolution. With more pixels, textures could hold far more fine detail, though for now it was only color and baked lighting information. Bump maps and reflection maps were rarely used, and getting materials to respond correctly to light was still impossible.

image

There was another reason baking information into textures remained popular. In pre-rendered scenes this was never a problem: cloth really looked like cloth, and glass, hair, and skin were convincing. Real-time rendering needed bump mapping, and it did appear, but only toward the end of this generation (and only on the Xbox).

image

image

Specular and normal maps appeared in games such as Halo 2 and Doom 3. Specular maps let surfaces react to lighting much more naturally, so metal could really shine, for example. Normal maps made it possible to record far more detail than such low-polygon models could otherwise convey.

image

image
If you work in 3D, you know what a normal map is: a type of bump mapping that lets surfaces respond to lighting in far more detail than the model's geometry alone allows. It is the most important texture type, used in almost every game released since this generation.
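The core idea is small enough to sketch. This is a hypothetical illustration (the names and the simple Lambert term are mine, not any engine's API): each texel of a tangent-space normal map stores a direction encoded into 0..255 RGB, and the shader decodes it back into a unit vector and lights with *that* instead of the flat surface normal.

```python
import math

# Sketch of tangent-space normal mapping: decode an 8-bit texel back into
# a unit normal, then use it for per-pixel diffuse (Lambert) lighting.

def decode_normal(rgb):
    """Map an 8-bit texel (r, g, b) from [0, 255] back to a unit vector."""
    n = [c / 255.0 * 2.0 - 1.0 for c in rgb]
    length = math.sqrt(sum(x * x for x in n))
    return [x / length for x in n]

def lambert(normal, light_dir):
    """Diffuse intensity: clamped dot product of normal and light direction."""
    return max(0.0, sum(n * l for n, l in zip(normal, light_dir)))

# The classic "flat" normal-map color (128, 128, 255) decodes to roughly
# (0, 0, 1), so a light pointing along +Z gives nearly full brightness;
# tilted texels catch or lose light even though the surface itself is flat.
flat = decode_normal((128, 128, 255))
intensity = lambert(flat, (0.0, 0.0, 1.0))
```

This is why normal maps look bluish: the dominant "straight up" direction encodes to a blue-heavy pixel.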

image

After normal maps arrived, artists' approach to texture creation changed. Producing normal maps means spending much more time on the model. It became the norm to use sculpting tools such as ZBrush, which let details from high-poly sculpts be baked into textures applied to low-poly objects.
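The baking step can be sketched in simplified form. This is a toy illustration under stated assumptions: a high-resolution height function stands in for the sculpted high-poly model, and its slopes are converted into a small normal map for a flat low-poly surface. Real bakers ray-cast from the low-poly cage against the actual sculpted mesh; every name here is illustrative.

```python
import math

# Toy "bake": sample a high-res height field (stand-in for a sculpt) and
# turn its slopes into tangent-space normals for a size x size texture.

def bake_normal_map(height, size, eps=1e-3):
    """Return a flat list of (x, y, z) unit normals, one per texel."""
    texels = []
    for j in range(size):
        for i in range(size):
            u, v = i / (size - 1), j / (size - 1)
            # Finite-difference slopes of the sculpted height field.
            dx = (height(u + eps, v) - height(u - eps, v)) / (2 * eps)
            dy = (height(u, v + eps) - height(u, v - eps)) / (2 * eps)
            length = math.sqrt(dx * dx + dy * dy + 1.0)
            texels.append((-dx / length, -dy / length, 1.0 / length))
    return texels

# A bump sculpted in the middle produces tilted normals around it, so the
# flat low-poly surface will shade as if the bump were really there.
bump = lambda u, v: math.exp(-((u - 0.5) ** 2 + (v - 0.5) ** 2) * 20)
nmap = bake_normal_map(bump, 8)
```

The payoff is the same as in production pipelines: the expensive detail is paid for once at bake time, not per frame.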

image

Before this technology appeared, most textures were either painted by hand or created from photos in Photoshop. In the Xbox 360 and PS3 era, that method became a thing of the past for many games, because along with higher resolutions, model quality improved as well.

image

In addition, new shading techniques greatly improved the behavior of materials. For many artists this was a turning point: materials became far more sophisticated than before. This 2005 demo surpassed everything that came before it, at a time when the Xbox 360 did not even exist yet.


A new approach to scene lighting also appeared: ambient occlusion (AO). Once again, real-time rendering had to catch up with pre-rendering. AO was too expensive to compute in real time, so artists simply started baking it into textures! AO recreates the soft indirect shadowing in crevices and corners that are too small for direct lighting to capture.

image
Even today, real-time AO is not 100% achievable, but we are getting close! Thanks to techniques such as SSAO and DFAO, the situation has greatly improved compared to ten years ago. Baked AO maps are still in use, but they will probably be abandoned as renderers get better.
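Offline AO baking is conceptually simple, which is why artists reached for it. Here is a hypothetical sketch, not any baker's actual code: for each texel, shoot random rays over the hemisphere above the surface and record the fraction that escape without hitting geometry. The toy occluder test below stands in for tracing against a real mesh.

```python
import random

# Toy AO bake: Monte Carlo estimate of how "open" a surface point is.
# 1.0 = fully open sky, 0.0 = fully blocked by nearby geometry.

def bake_ao(is_occluded, samples=1000, seed=1):
    """Estimate the ambient term at one texel by random hemisphere rays."""
    rng = random.Random(seed)  # fixed seed keeps the bake reproducible
    hits = 0
    for _ in range(samples):
        # Crude hemisphere direction: random x/y, always-upward z.
        d = (rng.uniform(-1, 1), rng.uniform(-1, 1), rng.uniform(0, 1))
        if is_occluded(d):
            hits += 1
    return 1.0 - hits / samples

# Toy occluder: a wall that blocks every ray leaning toward +x, so a point
# next to it gets roughly half the ambient light of a point in the open.
ao_near_wall = bake_ao(lambda d: d[0] > 0.0)
ao_open = bake_ao(lambda d: False)
```

Screen-space techniques like SSAO approximate the same quantity per frame from the depth buffer instead of precomputing it per texel.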


To summarize: in the PS3 and Xbox 360 era we saw an even bigger jump in resolution than in the previous generation, and new texture types appeared for shading surfaces. And, of course, lighting quality improved! You could get real-time shadows across the whole scene, or bake lighting to increase detail.

image

Everything sounds just fine, right? But flaws remained: low-resolution models and textures, plus the high cost of the new shaders. And don't forget the output resolution of these games: only 720p! (Small fonts also became a problem on CRT TVs.)

image

Another lingering problem was specular maps. At the time, each object had only a single map describing its "shininess", which was a serious limitation: materials looked unrealistic. Some developers therefore began splitting specular maps apart. One of the first examples was BioShock Infinite.

image

Specular maps came to be split by material type (wood, gold, concrete, etc.) and by wear (cracks, scratches, etc.). This coincided with the arrival of a new shading model: Physically Based Rendering, PBR.

This brings us to the current generation. PBR has become the standard for many games. The technique was popularized by the studio Pixar, which standardized it as a way to create plausible materials in computer graphics. And it can be applied in real time!
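The shift away from a single hand-painted "shininess" map can be illustrated with the common metallic/roughness convention. This is a simplified sketch of that convention, not a full PBR shader: a base color plus a metallic value determine the diffuse and specular colors by fixed rules, rather than by artist guesswork. The constant and names are illustrative.

```python
# Sketch of the PBR metallic/roughness material split: diffuse and
# specular colors are derived from base color + metallic by fixed rules.

DIELECTRIC_F0 = 0.04  # typical specular reflectance of non-metals

def pbr_split(base_color, metallic):
    """Derive (diffuse, specular) colors in the metal-rough workflow."""
    # Metals have no diffuse component; non-metals keep their base color.
    diffuse = tuple(c * (1.0 - metallic) for c in base_color)
    # Non-metals get a neutral ~4% specular; metals tint it by base color.
    specular = tuple(
        DIELECTRIC_F0 * (1.0 - metallic) + c * metallic for c in base_color
    )
    return diffuse, specular

# A pure metal: black diffuse, colored specular (this is why gold looks gold).
gold_diffuse, gold_spec = pbr_split((1.0, 0.77, 0.34), metallic=1.0)
# A non-metal: base color stays diffuse, specular is a neutral 4%.
wood_diffuse, wood_spec = pbr_split((0.55, 0.35, 0.2), metallic=0.0)
```

Because these rules are physically motivated, materials authored this way look plausible under any lighting, which is exactly what the old per-object specular maps could not guarantee.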

image
In addition, the industry refined a pipeline that first appeared in the previous generation: screen-space post-processing. Aspects such as tone mapping and color grading have improved in the current generation; previously, achieving the same look meant painstakingly adjusting the textures themselves.
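As one concrete example of such a screen-space pass, here is a sketch of tone mapping under stated assumptions: the renderer produces HDR values that can exceed 1.0, and an operator compresses them into the displayable 0..1 range. Reinhard's classic operator is used here for illustration; modern games typically use fancier curves.

```python
# Sketch of a tone-mapping post-process: compress HDR brightness values
# into the 0..1 range a display can show, preserving their ordering.

def reinhard(hdr):
    """Reinhard tone mapping: x / (1 + x), maps [0, inf) into [0, 1)."""
    return hdr / (1.0 + hdr)

# Dim values pass through nearly unchanged; very bright highlights are
# squeezed toward (but never reach) white instead of clipping harshly.
dim = reinhard(0.5)
bright = reinhard(4.0)
very_bright = reinhard(100.0)
```

Applying a grading curve in this pass, per frame, replaces what artists once had to paint directly into every texture.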

image
If you want to learn more about old games and their rendering techniques, I highly recommend the DF Retro series from Digital Foundry. The author does a fantastic job analyzing individual games, such as Silent Hill 2.


Just for comparison, here is how the first 3D games looked and how much work it takes today to create a single game texture.

image

image

image
image

Let's also briefly mention some techniques developed in earlier eras that are still applied today! There are many "stylized" textures with lighting information baked in. Blizzard uses this approach most actively.

image
The company combines technical constraints with a thoughtful approach to its art style, and achieves amazing results. They may not use the same pile of textures as other AAA games, but the results can hardly be called bad.

image

And sometimes, with PBR plus hand-painted or simplified textures, you can get very far. A modern, feature-rich engine helps a great deal here.

image
