This is the eleventh of a now lengthy series of posts on the making of Crash Bandicoot. Click here for the PREVIOUS or for the BEGINNING of the whole mess.
The text below is another journal article I wrote on making Crash in 1999. This is the second part; the FIRST can be found here.
And finally to the point!
Both the rapid lifecycle of a video game console and the consistency of the hardware promote video game development strategies that are often very different from the strategies used to make PC video games. A side-effect of these strategies and the console development environment is that video games released later in the life of a console tend to be incrementally more impressive than earlier titles, even though the hardware hasn’t changed. Theoretically, since the hardware doesn’t change, first generation software could be just as impressive as later generation titles, but in reality this is seldom the case. It may seem obvious that a developer should try to make a first generation title as impressive as a last generation title, but in practice this strategy has been the downfall of many talented developers. There are many good and valid reasons why software improves over time, and understanding these reasons, and strategizing around them, can greatly improve a developer’s chances of success in the marketplace.
Difficulties of Console Video Game Development
Many difficulties are encountered when developing a console video game; the following are several of the major issues:
- Learning curve
- Hardware availability and reliability
- Bottlenecks
- Operating System / Libraries
- Development tools
- In-house tools
- Reuse of code
- Optimization
Learning curve
The learning curve may be the most obvious of all difficulties, and is often one of the most disruptive elements of a video game’s development schedule. In the past, video games were often developed by small groups of one or more people, had small budgets, ran in a small amount of memory, and had short schedules. The graphics were almost always 2D, and the mathematics of the game were rarely more than simple algebra. Today, video games have become much more complicated, and often require extremely sophisticated algorithms and mathematics. The sheer size of the data within a game means that both the run-time code and the tool pipeline require extremely sophisticated solutions for data management. Furthermore, 3D mathematics and rendering can be very CPU intensive, so new tricks and techniques are constantly being created. The developer will also often have to use complex commercial tools, such as 3D modeling packages, to generate the game’s graphics and data. Add to this the fact that operating systems, APIs, and hardware components are continually changing, and it should be obvious that just staying current with the latest technology requires an incredible amount of time and can have a big impact on the schedule of a game.
The console video game developer has the additional burden that, unlike the PC where the hardware evolves more on a component or API level, new console hardware is normally drastically different from and more powerful than the preceding hardware. The console developer has to learn many new things, such as new CPUs, new operating systems, new libraries, new graphics devices, new audio devices, new peripherals, new storage devices, new DMA techniques, new co-processors, as well as various other hardware components. Also, the console developer usually has to learn a new development environment, including a new C compiler, a new assembler, a new debugger, and a slew of new support tools. To complicate matters, new consoles normally have many bugs in such things as the hardware, the operating system, the software libraries, and the various components of the development environment.
The learning curve of the console hardware is logarithmic: very steep at first, then dropping off dramatically by the end of the console’s life-span. This initial steep learning curve is a major reason why first generation software usually isn’t as good as later software.
Hardware availability and reliability
Hardware isn’t very useful without software, and software takes a long time to develop, so it is important for hardware developers to encourage software developers to begin development well in advance of the hardware’s launch date. It is not uncommon for developers to begin working on a title even before the hardware development kits are available. To do this, developers will start with things that don’t depend on the hardware, such as common tools, and they may also resort to emulating the hardware in software. Obviously, this technique is not likely to produce software that maximizes the performance of the hardware, but it is done nevertheless because of the time constraints of finishing a product as close as possible to the console’s launch into the market. The finished first generation game’s performance will not be as good as that of later generations of games, but this compromise is deemed acceptable in order to achieve the desired schedule.
When the hardware does become available to developers, it is usually only available in limited quantities, is normally very expensive, and eventually ends up being replaced by cheaper and more reliable versions at some later time. Early revisions of the hardware may not be fully functional, or may have components that run at reduced speed, so they are difficult to fully assess, and they are quite scarce, since the hardware developer doesn’t want to make very many of them. Even when more dependable hardware development kits become available, they are usually difficult to get: production of these kits is slow and expensive, quantities are low, and software developers are in competition to get them.
The development kits, especially the initial hardware, tend to have bugs that have to be worked around or avoided. The hardware also tends to have contact and connection problems, making it susceptible to vibration, oxidation, and overheating. These problems generally improve with new revisions of the development hardware.
All of these reasons will contribute to both a significant initial learning curve, and a physical bottleneck of having an insufficient number of development kits. This will have a negative impact on a game’s schedule, and the quality of first generation software often suffers as a consequence.
Bottlenecks
An extremely important aspect of console game development is the analysis of the console’s bottlenecks, strengths, weaknesses, and overall performance. This is critical for developing high performance games, since each component of the console has a fixed theoretical maximum performance; undershooting that performance may make your game appear under-powered, while overshooting may force major reworking of the game’s programming and/or design. Overshooting performance may also cause the game to run at an undesirable frame rate, which could compromise the look and feel of the game.
The clever developer will try to design the game to exploit the strengths of the machine, and circumvent the weaknesses. To do this, the developer must be as familiar as possible with the limitations of the machine. First, the developer will look at the schematic of the hardware to find out the documented sizes, speeds, connections, caches, and transfer rates of the hardware. Next, the developer should do hands-on analysis of the machine to look for common weaknesses, such as: slow CPUs, limited main memory, limited video memory, limited sound memory, slow bus speeds, slow RAM access, small data caches, small instruction caches, small texture caches, slow storage devices, slow 3D math support, slow interrupt handling, slow game controller reading, slow system routines, and slow polygon rendering speeds. Some of these things are easy to analyze, such as the size of video memory, but some are much trickier, such as polygon rendering speeds, because the speed will vary based on many factors, such as source size, destination size, texture bit depth, caching, translucency, and z-buffering, to name just a few. The developer will need to write several pieces of test code to study the performance of the various hardware components, and should not necessarily trust the statistics found in the documentation, since these are often wrong or misleading.
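To make this concrete, here is a minimal sketch, in plain C, of the kind of hands-on test code described above. It measures memory copy throughput at increasing block sizes; throughput typically drops sharply once a block no longer fits in the data cache, which reveals the cache’s effective size. The standard clock() call stands in for whatever hardware timer a particular console actually provides, and all sizes are illustrative.

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>

#define MAX_BLOCK (1 << 20)    /* largest block tested: 1 MB     */
#define TOTAL     (64 << 20)   /* copy ~64 MB total at each size */

int main(void)
{
    unsigned char *src = malloc(MAX_BLOCK);
    unsigned char *dst = malloc(MAX_BLOCK);
    size_t size;

    if (!src || !dst)
        return 1;
    memset(src, 0, MAX_BLOCK);

    /* Double the block size each pass; watch for the knee in the
     * throughput curve where the block spills out of the cache.  */
    for (size = 1 << 10; size <= MAX_BLOCK; size <<= 1) {
        long passes = TOTAL / (long)size;
        long i;
        clock_t start = clock();

        for (i = 0; i < passes; i++)
            memcpy(dst, src, size);

        {
            double secs = (double)(clock() - start) / CLOCKS_PER_SEC;
            double mb   = (double)TOTAL / (1024.0 * 1024.0);
            printf("%7lu bytes: %8.1f MB/s\n",
                   (unsigned long)size, secs > 0.0 ? mb / secs : 0.0);
        }
    }
    free(src);
    free(dst);
    return 0;
}
```

The same pattern, timing a tight loop over one hardware path while varying one parameter at a time, applies equally to measuring fill rate, DMA speed, and the other items in the list above.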
A developer should use a profiler to analyze where speed losses are occurring in the run-time code. Most programmers will spend time optimizing code because they suspect it is slow, but without any empirical proof. This lack of empirical data means that the programmer will invariably waste a lot of time optimizing things that don’t really need to be optimized, and will neglect things that would have greatly benefited from optimization. Unfortunately, a decent profiler is almost never included in the development software, so it is usually up to the individual developer to write his own profiling software.
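Lacking a vendor-supplied profiler, a developer can get surprisingly far with a home-grown instrumentation profiler. The sketch below shows one minimal approach: begin/end macros accumulate time under named slots, and a report prints the totals at the end of a run. As before, clock() stands in for a console’s hardware timer, and all names are illustrative rather than from any real development kit.

```c
#include <stdio.h>
#include <time.h>

#define PROF_SLOTS 16

static clock_t     prof_total[PROF_SLOTS];
static clock_t     prof_start[PROF_SLOTS];
static const char *prof_name[PROF_SLOTS];

/* Wrap suspect code in PROF_BEGIN/PROF_END pairs; each slot
 * accumulates the total time spent between its begin and end. */
#define PROF_BEGIN(slot, label) \
    (prof_name[slot] = (label), prof_start[slot] = clock())
#define PROF_END(slot) \
    (prof_total[slot] += clock() - prof_start[slot])

static void prof_report(void)
{
    int i;
    for (i = 0; i < PROF_SLOTS; i++)
        if (prof_name[i])
            printf("%-12s %8.3f s\n", prof_name[i],
                   (double)prof_total[i] / CLOCKS_PER_SEC);
}

int main(void)
{
    volatile long sink = 0;
    long i;

    PROF_BEGIN(0, "fake_physics");
    for (i = 0; i < 10000000L; i++)
        sink += i;               /* stand-in for real game work */
    PROF_END(0);

    prof_report();
    return 0;
}
```

Wrapped around each major subsystem once per frame (collision, AI, rendering, and so on), even a crude table like this replaces suspicion with empirical data about where the frame time actually goes.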
Performance testing is an extremely important tool for maximizing performance. Often the reason software improves between generations is that the developers slowly learn how to fully understand the bottlenecks, how to circumvent them, and how to identify what actually constitutes a bottleneck in the first place.
Operating system / Libraries
Although consoles tend to have very small operating systems and libraries when compared to the operating systems found on the PC, they are still an important factor in console video game development.
Operating systems and support libraries on video game consoles are used to fill many needs. One such need arises because the hardware developer will often attempt to save money on the production of console hardware by switching to cheaper components, or by integrating various components together. It is up to the operating system to enable these changes while keeping their effects transparent to both the consumer and the developer. The more the operating system abstracts the hardware, the easier it is for the hardware developer to make changes to the hardware. However, this abstraction of the hardware comes at the price of reduced potential performance. The operating system and support libraries will also commonly provide code for using the various components of the console. This has the advantage that developers don’t have to know the low-level details of the hardware, and it potentially saves time, since different developers won’t have to spend time creating their own versions of these libraries. Not having to write this low-level code is important in early generation projects, because the learning curve for the hardware is already quite high, and there may not be time in the schedule for doing much of this kind of low-level optimization. Clever developers will slowly replace the system libraries over time, especially the speed-critical subroutines, such as 3D vector math and polygonal set-up. Also, the hardware developer will occasionally improve upon poorly written libraries, so even the less clever developers will eventually benefit from these optimizations. Improvements to the system libraries are a big reason why later generation games can increase dramatically in performance.
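As a purely hypothetical illustration of replacing a speed-critical library routine: suppose the vendor library exposes a general-purpose matrix routine, but the game only ever needs a 3x3 fixed-point transform. A hand-rolled version might look like the C sketch below. The 4.12 fixed-point format and every name here are assumptions for illustration, not any console’s actual API; in practice such replacements were often hand-scheduled assembly targeting the console’s math co-processor.

```c
#include <stdio.h>

/* Hypothetical replacement for a generic vendor matrix call: the
 * one 3x3 fixed-point transform the game actually needs, without
 * the library's argument checking or general-case overhead.  The
 * matrix entries are 4.12 fixed point (4096 == 1.0).             */
typedef struct { short m[3][3]; } Matrix;
typedef struct { long  x, y, z; } Vector;

static void mtx_apply(const Matrix *mtx, const Vector *in, Vector *out)
{
    out->x = (mtx->m[0][0] * in->x + mtx->m[0][1] * in->y
            + mtx->m[0][2] * in->z) >> 12;
    out->y = (mtx->m[1][0] * in->x + mtx->m[1][1] * in->y
            + mtx->m[1][2] * in->z) >> 12;
    out->z = (mtx->m[2][0] * in->x + mtx->m[2][1] * in->y
            + mtx->m[2][2] * in->z) >> 12;
}

int main(void)
{
    Matrix identity = {{ {4096, 0, 0}, {0, 4096, 0}, {0, 0, 4096} }};
    Vector v = { 100, 200, 300 }, r;

    mtx_apply(&identity, &v, &r);
    printf("%ld %ld %ld\n", r.x, r.y, r.z);  /* prints: 100 200 300 */
    return 0;
}
```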
Development tools
On the PC, development tools have evolved over the years, and have become quite sophisticated. Commercial companies have focused years of efforts on making powerful, optimal, polished, and easy to use development tools. In contrast, the development tools provided for console video game development are generally provided by the hardware manufacturer, and are usually poorly constructed, have many bugs, are difficult to use, and do not produce optimal results. For example, the C compiler usually doesn’t optimize very well; the debugger is often crude and, ironically, has many bugs; and there usually isn’t a decent software profiler.
Initially, developers will rely on these tools, and the first few generations of software will be adversely affected by their poor quality. Over time, clever programmers will become less reliant on the tools that are provided, or will develop techniques to work around their weaknesses.
In-house tools
In-house tools are one of the most important aspects of producing high performance console video game software. Efficient tools have always been important, but as the data content in video games has grown exponentially over the last few years, in-house tools have become increasingly more important to the overall development process. In the not too distant future, the focus on tool programming techniques may even exceed the focus on run-time programming issues. It is not unreasonable that the most impressive video games in the future may end up being the ones that have the best support tools.
In-house tools tend to evolve to fill the needs of a desired level of technology. Since new consoles tend to represent dramatic changes in technology over their predecessors, in-house tools often have to be drastically rewritten or completely replaced to support the new level of technology. For example, a predecessor console may not have had any 3D support, so the tools developed for that console most likely would not have been written to support 3D. When a new console is released that can draw 100,000 polygons per second, it is generally inefficient to try to graft support for this new technology onto the existing tools, so the original tools are discarded. To continue the example, let’s say that the new tool needs to handle game environments that average about 500,000 polygons, with a worst case of 1 million polygons. Most likely the tool will evolve to the point where it runs pretty well for the average case, but only just fast enough that the worst case of 1 million polygons is processed in a tolerable, albeit painful, amount of time. The reasons for this are that tools tend to grow in size and complexity over time, and tools tend to only be optimized to the point that they are not so slow as to be intolerable. Now let’s say that a newer console is released that can draw 1 million polygons a second, and our worst case environment is a whopping 1 billion polygons! Although the previous in-house tool could support a lot of polygons, it will still end up being either extensively rewritten or discarded, since it cannot easily be made efficient enough to deal with this much larger amount of data.
The ability of a tool to function efficiently as the data content processed by the tool increases is referred to as the ability of the tool to “scale”. In video game programming, tools are seldom written to scale much beyond the needs of the current technology; therefore, when technology changes dramatically, old tools are commonly discarded, and new tools have to be developed.
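A hypothetical example of a tool operation that fails to scale is welding duplicate vertices in an exported model by comparing every pair, sketched below in C. The quadratic loop is tolerable at tens of thousands of vertices and hopeless at millions; a tool built to scale would bucket the vertices with a spatial hash first, making the work roughly linear. All names and structures are illustrative.

```c
#include <stdio.h>

typedef struct { float x, y, z; } Vert;

/* A classic non-scaling tool operation: welding duplicate
 * vertices by comparing every pair.  The work grows as n*n, so a
 * tenfold jump in data size means a hundredfold jump in run time.
 * Returns the welded vertex count; remap[i] receives each
 * original vertex's index into the welded array.                 */
static int weld_naive(Vert *v, int n, int *remap)
{
    int i, j, out = 0;
    for (i = 0; i < n; i++) {
        for (j = 0; j < out; j++)
            if (v[j].x == v[i].x && v[j].y == v[i].y && v[j].z == v[i].z)
                break;
        if (j == out)
            v[out++] = v[i];     /* first occurrence: keep it */
        remap[i] = j;
    }
    return out;
}

int main(void)
{
    Vert v[4] = { {0,0,0}, {1,0,0}, {0,0,0}, {1,0,0} };
    int remap[4];
    printf("%d unique\n", weld_naive(v, 4, remap));  /* prints: 2 unique */
    return 0;
}
```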
The in-house tools can consume a large amount of the programming time of a first generation title, since not only are the tools complicated, but they evolve over time as the run-time game code is implemented. Initial generations of games are created using initial generations of tools. Likewise, later generations of games are created using later generations of tools. As the tools become more flexible and powerful, the developer gains the ability to create more impressive games. This is a big reason why successive generations of console games often make dramatic improvements in performance and quality over their predecessors.
Reuse of code
A problem that stems from the giant gaps in technology between console generations is that it is difficult to reuse code written for a previous generation of console hardware. Assembly code is especially difficult to reuse, since the CPU usually changes between consoles, but the C programming language isn’t much of a solution either, since the biggest problem is that the hardware configurations and capabilities are so different. Any code dealing directly with the hardware or with hardware-influenced data structures will have to be discarded. Even code that does something universal in nature, such as mathematical calculations, will most likely need to be rewritten, since the new hardware will most likely have a different mathematical model.
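As a hypothetical sketch of why even “universal” math code rarely survives the jump: a console without floating-point hardware forces fixed-point arithmetic, and a successor with fast floating point makes that code both unnecessary and slower than the obvious rewrite. The 4.12 format below is an assumption for illustration.

```c
#include <stdio.h>

/* Old console, no FPU: 2D dot product in 4.12 fixed point (4096
 * == 1.0).  Every multiply needs a rescale, and range/precision
 * trade-offs are baked into the chosen format.                   */
static long dot2_fixed(long ax, long ay, long bx, long by)
{
    return (ax * bx + ay * by) >> 12;
}

/* New console with floating-point hardware: the fixed-point code
 * isn't just obsolete, it's slower than the obvious version, so
 * the "portable" math gets rewritten anyway.                     */
static float dot2_float(float ax, float ay, float bx, float by)
{
    return ax * bx + ay * by;
}

int main(void)
{
    /* (1.0, 2.0) dot (3.0, 0.5) is 4.0; the fixed-point result
     * prints as 16384, which is 4.0 in 4.12 format.              */
    printf("fixed: %ld\n", dot2_fixed(4096, 8192, 12288, 2048));
    printf("float: %f\n",  dot2_float(1.0f, 2.0f, 3.0f, 0.5f));
    return 0;
}
```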
Just as the in-house tool code becomes outdated, so does game code written for less powerful technology. Animation, modeling, character, environment, and particle code will all need to be discarded.
In practice, very little code can be reused between technological leaps in hardware platforms. This means that earlier generation games will not have much code reuse, but each new generation of games for a console will be able to reuse code from its predecessors, and therefore games will tend to improve with each new generation.
Optimization
By definition, having optimal code is preferable to having bulky or less efficient code. It would therefore seem logical to say that to achieve maximum performance from the hardware, all code should be completely optimal. Unfortunately, this is not an easy or even practical thing to achieve, since writing completely optimal code has many nuances, and can be very time-consuming. The programmer must be intimately familiar with the details of the hardware. He must fully understand how to implement the code, possibly using assembly language, since C compilers will often generate inefficient code. The programmer must make certain to best utilize the CPU caches. Also, the programmer should understand how the code may affect other pieces of code, such as its effects on the instruction cache, or the amount of resources it ties up. The programmer has to know how to effectively use co-processors and other devices. He must develop an algorithm that is maximally efficient when implemented. Finally, the programmer will need to measure the code against the theoretical maximum optimal performance to be certain that the code can indeed be considered fully optimal.
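To make one of these nuances concrete, here is a C sketch of a single cache consideration: a loop over an array of large structures drags every field through the data cache, while splitting the hot fields into parallel arrays keeps the fetched cache lines dense with useful data. The structures and sizes are illustrative, not from any particular console.

```c
#include <stdio.h>

#define NPART 4096

/* Cache-hostile layout: updating position drags color, age, and
 * padding through the data cache along with it.                 */
struct Particle {
    long x, y, z, vx, vy, vz;
    long color, age, flags, pad[7];   /* cold fields */
};
struct Particle part[NPART];

void update_slow(void)
{
    int i;
    for (i = 0; i < NPART; i++) {
        part[i].x += part[i].vx;
        part[i].y += part[i].vy;
        part[i].z += part[i].vz;
    }
}

/* Cache-friendly layout: hot fields live in their own arrays, so
 * every byte fetched into the cache is a byte the loop uses.     */
long px[NPART], py[NPART], pz[NPART];
long vx[NPART], vy[NPART], vz[NPART];

void update_fast(void)
{
    int i;
    for (i = 0; i < NPART; i++) {
        px[i] += vx[i];
        py[i] += vy[i];
        pz[i] += vz[i];
    }
}

int main(void)
{
    update_slow();
    update_fast();
    printf("%ld %ld\n", part[0].x, px[0]);  /* same math, different layout */
    return 0;
}
```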
Writing even highly optimized code for specific hardware is time-consuming, and requires a detailed knowledge of both the hardware and the algorithm to be optimized. It is therefore commonly impractical to attempt to highly optimize even a majority of the code. This is especially true when writing a first generation game, since the developer is not familiar enough with the intricacies of the hardware to be very productive at writing optimal code. Instead, it is more productive to spend time optimizing only the code that most profoundly affects the efficiency of the overall game. Unfortunately, identifying which code should be optimized can also be a difficult task. As a general rule, the code to be optimized is often the code that is executed most frequently, but this is not always the case. Performance analysis, testing, and profiling can help identify inefficient code, but these are not perfect solutions, and the experience of the programmer becomes an important factor in making smart decisions about which code should be optimized.
As a programmer gets more familiar with the intricacies of the hardware, he will be able to perform a greater amount of optimizations. Also, when developing later generation games, the programmer will often be able to reuse previously written optimized code. Plus, there is often more time in the schedule of later generation titles in which to perform optimizations. This accumulation of optimal code is a big reason why games often improve in performance in successive generations.
Other Considerations
There are many other reasons for the improvement in performance of next generation software that are not directly related to programming for a video game console. For example, developers will often copy or improve upon the accomplishments of other developers. Likewise, developers will avoid the mistakes made by others. Also, developers acquire and lose employees fairly frequently, which creates a lot of cross-pollination of ideas and techniques between the various development houses. These and many other reasons are important, but since they are not specific to console video game development, they have not been specifically discussed.
CLICK HERE to CONTINUE to PART 3.