This is loosely part of a now lengthy series of posts on the making of Crash Bandicoot. Click here for the PREVIOUS or for the FIRST POST.
Below is another journal article I wrote on making Crash in 1999. This was co-written with Naughty Dog uber-programmer Stephen White, who was my co-lead on Crash 2, Crash 3, Jak & Daxter, and Jak 2. It’s long, so I’m breaking it into three parts.
Teaching an Old Dog New Bits
How Console Developers are Able to Improve Performance When the Hardware Hasn’t Changed
by
Andrew S. Gavin
and
Stephen White
Copyright © 1994-99 Andrew Gavin, Stephen White, and Naughty Dog, Inc. All rights reserved.
Console vs. Computer
Personal computers and video game consoles have both made tremendous strides in graphics and audio performance; despite these similarities, however, there is great benefit in understanding some important differences between the two platforms.
Evolution is a good thing, right?
The ability to evolve is the cornerstone of the long-term success of the IBM PC. Tremendous effort has gone into the PC so that individual hardware components can be replaced as they become inefficient or obsolete, while still maintaining compatibility with existing software. This modularity of the various PC components allows the user to custom-build a PC to fit specific needs. While this is a big advantage in general, the same flexibility can be a tremendous disadvantage for developing video games. It is the lack of evolution, the virtual immutability of the console hardware, that is the greatest advantage for developing high-quality, easy-to-use video game software.
You can choose any flavor, as long as it’s vanilla
The price of the PC’s evolutionary ability comes at the cost of dealing with incompatibility issues through customized drivers and standardization. In the past, it was up to the video game developer to write custom code to support as many PC configurations as possible. This was a time-consuming and expensive process, and regardless of how thorough the developer tried to be, there were always some PC configurations that still had compatibility problems. With the popularity of Microsoft’s Windows-based operating systems, video game developers have been given the more palatable option of letting other companies develop the drivers and deal with the bulk of the incompatibility issues; however, this is hardly a panacea, since it necessitates a reliance on “unknown” and difficult-to-benchmark code, as well as APIs that are designed more for compatibility than for optimal performance. The inherent cost of compatibility is compromise. The API code must compromise to support the largest number of hardware configurations, and likewise, hardware manufacturers compromise in their hardware designs in order to adapt to the current standards of the API. Both the API and the hardware manufacturers must also compromise because of the physical limitations of the PC’s own hardware, such as bus speed.
Who’s in charge here?
The operating system of a PC is quite large and complicated, and is designed to be a powerful and extensively featured multi-tasking environment. In order to support a wide variety of software applications over a wide range of computer configurations, the operating system is designed as a series of layers that distance the software application from the hardware. These layers of abstraction are useful for allowing a software application to function without concerning itself with the specifics of the hardware. This is an exceptionally useful way of maintaining compatibility between hardware and software, but is unfortunately not very efficient with respect to performance. The hardware of a computer is simply a set of interconnected electronic devices. To theoretically maximize the performance of a computer’s hardware, the software application should write directly to the computer’s hardware, and should not share the resources of the hardware, including the CPU, with any other applications. This would maximize the performance of a video game, but would be in direct conflict with the implementations of today’s modern PC operating systems. Even if the operating system could be circumvented, it would then fall upon the video game to be able to support the enormous variety of hardware devices and possible configurations, and would therefore be impractical.
It looked much better on my friend’s PC
Another problem with having a large variety of hardware is that the video game developer cannot reliably predict a user’s personal setup. This lack of information means that a game cannot easily be tailored to exploit the strengths and circumvent the weaknesses of a particular system. For example, if all PCs had equally fast hard drives, then a game could be built to rely on that speed. Similarly, if all PCs had equally slow hard drives but plenty of memory, then a game could compensate for the lack of drive speed through techniques such as caching or pre-loading data into RAM. Likewise, if all PCs had fast hard drives but little memory, the hard drive could compensate for the lack of memory by keeping most of the game on disk and only spooling in data as needed.
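To make this concrete, here is a minimal sketch of the kind of decision a PC title might make at startup; the thresholds, strategy names, and measured inputs are hypothetical illustrations for this article, not something from an actual engine.

enum LoadStrategy { PRELOAD_TO_RAM, CACHE_IN_RAM, STREAM_FROM_DISK };

// Hypothetical sketch: pick a data-loading strategy from measured machine traits.
// The caller is assumed to have measured drive throughput (e.g. by timing a test
// read) and queried free memory from the operating system.
LoadStrategy chooseLoadStrategy(int driveSpeedKBps, int freeRamKB, int levelSizeKB)
{
    if (freeRamKB >= levelSizeKB)
        return PRELOAD_TO_RAM;   // plenty of memory: load the whole level up front
    if (driveSpeedKBps < 300)    // illustrative threshold for a "slow" drive
        return CACHE_IN_RAM;     // slow drive: keep as much data resident as will fit
    return STREAM_FROM_DISK;     // fast drive, little memory: spool data in as needed
}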
Another good example is the difference in polygon-rendering capabilities. There is enormous variation in both performance and features among hardware-accelerated polygon renderers, so both the look of rendered polygons and the number of polygons that can be rendered in a given amount of time vary greatly between machines. The look of polygons could be made consistent by rendering purely in software; however, software rendering is very CPU-intensive, so it may be impractical, since fewer polygons can be drawn and the CPU has less bandwidth left for other functions, such as game logic and collision detection.
Other sources of variation include CD-ROM drives, CPU speeds, co-processors, memory access speeds, CPU caches, sound effect capabilities, music capabilities, game controllers, and modem speeds, to name a few.
Although many PC video game programmers have made valiant attempts to make their games adapt at run-time to the computers they run on, it is difficult for a developer to offer much more than simple cosmetic enhancements, audio additions, or speed improvements. Even if the game performed various benchmark tests before entering the actual game code, it would be very difficult, not to mention limiting to the design of the game, to write code that could efficiently restructure itself around the results of the benchmark.
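As a rough illustration of why such adaptation tends to stay cosmetic, a startup benchmark might look something like the sketch below; the test scene, frame count, and frame-rate thresholds are assumptions made for the example, and notice that only a detail setting changes, not the structure of the game itself.

#include <ctime>

enum DetailLevel { DETAIL_LOW, DETAIL_MEDIUM, DETAIL_HIGH };

void renderTestScene();   // assumed helper: draws a representative frame off-screen

// Hypothetical startup benchmark: time a batch of test frames and map the
// measured frame rate onto a cosmetic detail level.
DetailLevel benchmarkDetailLevel()
{
    const int kTestFrames = 100;
    std::clock_t start = std::clock();
    for (int i = 0; i < kTestFrames; ++i)
        renderTestScene();
    double seconds = double(std::clock() - start) / CLOCKS_PER_SEC;
    double fps = (seconds > 0.0) ? kTestFrames / seconds : 0.0;

    if (fps > 60.0) return DETAIL_HIGH;
    if (fps > 30.0) return DETAIL_MEDIUM;
    return DETAIL_LOW;    // the game logic stays the same regardless of the result
}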
Which button fires?
A subtle yet important problem is the large variety of video game controllers that have to be supported by the PC. Having a wide variety of game controllers to choose from may at first seem like a positive feature, since more seems better than less, yet this variety actually has several negative and pervasive repercussions on game design. One problem is that the game designer cannot be certain that the user will have a controller with more than a couple of buttons. Keys on the keyboard can be used as additional “buttons”, but this can be impractical or awkward for the user, and may also require the user to configure which operations are mapped to which buttons and keys. Another problem is that the placement of the buttons relative to each other is not known, so the designer doesn’t know what button arrangement will give the user the best gameplay experience. This can be somewhat circumvented by allowing the user to remap the actions of the buttons, but that isn’t a perfect solution, since the user doesn’t start out with an inherent knowledge of the best way to configure the buttons, and so may choose, and stick with, an awkward configuration. Also, similar to the button layout, the designer doesn’t know the shape of the controller, so can’t be certain which button or controller actions might be uncomfortable for the user.
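One common partial fix, remappable controls, might be structured roughly like the sketch below; the action names, button codes, and defaults are hypothetical, and remapping still cannot tell the designer which layout will actually feel comfortable to the player.

#include <map>

enum Action { ACTION_JUMP, ACTION_ATTACK, ACTION_PAUSE };

// Hypothetical remappable control scheme: game code queries abstract actions,
// and a user-editable table decides which physical button each action maps to.
struct ControlMap {
    std::map<Action, int> binding;   // action -> device button (or key) code

    ControlMap() {                   // illustrative defaults; an options menu would let the user change them
        binding[ACTION_JUMP]   = 0;
        binding[ACTION_ATTACK] = 1;
        binding[ACTION_PAUSE]  = 9;
    }

    bool isPressed(Action a, const bool buttonState[]) const {
        std::map<Action, int>::const_iterator it = binding.find(a);
        return it != binding.end() && buttonState[it->second];
    }
};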
An additional problem with game controllers on the PC is that most PCs are not sold bundled with one. The lack of a standard, bundled controller means that a PC video game must either be designed to be controlled exclusively by the keyboard, or at the very least allow the user to optionally use the keyboard rather than a game controller. Not allowing the keyboard reduces the base of users who might be interested in buying your game, but allowing the game to be played fully from the keyboard can limit the game’s controls, and therefore its overall design.
Of course, even if every PC did come bundled with a standard game controller, there would still be users who would want to use their own non-standard game controllers. The difference, however, is that the non-standard game controllers would either be specific types of controllers, such as a steering wheel controller, or would be variations of the standard game controller, and would therefore include all of the functionality of the original controller. The decision to use the non-standard controller over the standard controller would be a conscious decision made by the user, rather than an arbitrary decision made because there is no standard.
Chasing a moving target
Another problem associated with the PC’s evolutionary ability is that it is difficult to predict the performance of the final target platform. The development of video games has become an expensive and time-consuming endeavor, with budgets in the millions and multi-year schedules that are often unpredictable. The PC video game developer has to predict the performance of the target machine far in advance of the game’s release, which is difficult indeed considering the volatility of schedules and the rapid advancement of technology. Underestimating the target can make the game seem dated or under-powered, while overestimating it can limit the installed base of potential consumers. Both can be costly mistakes.
Extinction vs. evolution
While PC’s have become more powerful through continual evolution, video game consoles advance suddenly with the appearance of an entirely new console onto the market. As new consoles flourish, older consoles eventually lose popularity and fade away. The life cycle of a console has a clearly defined beginning: the launch of the console into the market. The predicted date of the launch is normally announced well in advance of the launch, and video game development is begun early enough before the launch so that at least a handful of video game titles will be available when the console reaches the market. The end of a console’s life cycle is far less clearly defined, and is sometimes defined to be the time when the hardware developer of the console announces that there will no longer be any internal support for that console. A more practical definition is that the end of a console’s life cycle is when the public quits buying much software for that console. Of course, the hardware developer would want to extend the life cycle of a console for as long as possible, but stiff competition in the market has caused hardware developers to often follow up the launch of a console by immediately working on the design of the next console.
Each and every one is exactly the same
Unlike PCs, which can vary wildly from computer to computer, consoles of a particular model are designed to be exactly the same. Okay, so not exactly the same, but close enough: different hardware revisions generally vary only in ways that are minor from the perspective of the video game developer, and that are normally transparent to the user. Also, the console comes with at least one standard game controller and has standardized peripheral connections.
The general premise is that game software can be written with an understanding that the base hardware will remain consistent throughout the life-span of the console; therefore, a game can be tailored to both exploit the strengths of the hardware, and to circumvent the weaknesses.
The consistency of the hardware components allows a console to have a very small, low-level operating system, and the video game developer is often given the ability either to talk to the hardware components directly, or to go through an extremely low-level hardware abstraction layer.
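On a fixed machine, “talking to the hardware directly” can be as simple as writing to a known memory-mapped register, as in the minimal sketch below; the address and bit layout here are invented purely for illustration, since the real values come from the particular console’s hardware manual.

#include <cstdint>

// Made-up memory-mapped display-control register, for illustration only.
static volatile uint32_t* const DISPLAY_CTRL =
    reinterpret_cast<volatile uint32_t*>(0x1F000010);

// Hypothetical example: enable the display by writing straight to the register,
// with no driver and no operating-system call in between.
inline void enableDisplay(bool interlaced)
{
    uint32_t bits = 0x1;        // bit 0: display on (illustrative layout)
    if (interlaced)
        bits |= 0x2;            // bit 1: interlaced mode (illustrative layout)
    *DISPLAY_CTRL = bits;
}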
The performance of the components of the hardware is virtually identical for all consoles of a given model, such that the game will look the same and play the same on any console. This allows the video game developer to design, implement, and test a video game on a small number of consoles, and be assured that the game will play virtually the same for all consoles.
CLICK HERE FOR PART 2