All Your Base Are Belong to Us

Title: All Your Base Are Belong to Us

Author: Harold Goldberg

Genre: Video Game History

Length: 306 pages

Read: April 5, 2011

Summary: All the good stories!

_

This new addition to the field of video game histories is a whirlwind tour of the medium, from the 70s blips and blobs to the Facebook games of today, with everything in between. Given that covering 45+ years of gaming history in a completely serial fashion would probably result in about 4,000 pages, Goldberg has wisely chosen to snapshot pivotal stories. He seizes on some of the most important games and, even more importantly, the zany cast of creatives who made them.

My personal favorite is Chapter 8, “The Playstation’s Crash,” featuring none other than that lovable Bandicoot, myself, Jason, Mark Cerny, and various other friends. This chapter covers loosely the same subject matter that Jason and I detail in our lengthy series of Crash blogs (found here). It’s even 98% accurate! 🙂 If you enjoyed our Crash posts, I highly recommend you check out this book, as it not only includes some extra insights into that story, but also 18 other chapters on vitally important games and moments in gaming history.

These include old Atari, the great 80s crash, Mario, Tetris, EA, Adventure Games, Sierra Online, EverQuest, WOW, Bioshock, Rockstar, Bejeweled, and more. All are very entertaining, and focus heavily on the personalities behind the scenes — and boy, are there personalities in this business! In many ways this reminds me of Hackers, which is dated, but was one of my favorite books on the 80s computer revolution.

So click, buy, and enjoy!

For my series on Making Crash Bandicoot, CLICK HERE.

Crash Bandicoot – Teaching an Old Dog New Bits – part 3

This is the twelfth of a now lengthy series of posts on the making of Crash Bandicoot. Click here for the PREVIOUS or for the BEGINNING of the whole mess.

The text below is another journal article I wrote on making Crash in 1999. This is the third part, the FIRST can be found here.

_

The Crash Bandicoot Trilogy: A Practical Example

The three Crash Bandicoot games represent a clear example of the process of technology and gameplay refinement on a single platform.  Crash Bandicoot was Naughty Dog’s first game on the Sony Playstation game console, and its first fully 3D game.  With Crash Bandicoot 2: Cortex Strikes Back and Crash Bandicoot: Warped, we were able to improve the technology and offer a slicker, more detailed game experience in successively less development time.  With the exception of added support for the Analog Joystick, Dual Shock Controller, and Sony Pocketstation, the hardware platforms for the three titles are identical.

Timely and reasonably orderly development of a video game title is about risk management.  Given that you have a certain amount of time to develop the title, you can only allow for a certain quantity of gameplay and technology risks during the course of development.  One of the principal ways in which successive games improve is by reusing the solutions to these risks.  Most solutions which worked for the earlier game will work again, if desired, in the new game.  In addition, many techniques can be gleaned from other games on the same machine that have been released during the elapsed time.

In the case of sequels such as the later Crash games there is even more reduction of risk.  Most gameplay risks, as well as significant art, code, and sound can be reused.  This allows the development team to concentrate on adding new features, while at the same time retaining all the good things about the old game.  The result is that sequels are empirically better games.

Crash Bandicoot   –   how do we do character action in 3D?

Development: September 1994 – September 1996

Staff: 9 people: 3 programmers, 4 artists, 1 designer, 1 support

Premise: Do for the ultra popular platform action game genre what Virtua Fighter had done for fighting games: bring it into 3D.  Design a very likeable, broad market character and place him in a fun, fast paced action game.  Attempt to occupy the “official character” niche on the then empty Playstation market.  Remember that by the fall of 1994 no one had yet produced an effective 3D platform action game.

Gameplay risk: how do you design and control an action character in 3D such that the feel is as natural and intuitive as in 2D?

When we first asked ourselves, “what do you get if you put Sonic the Hedgehog (or any other character action game for that matter) in 3D,” the answer that came to mind was: “a game where you always see Sonic’s Ass.”  The entire question of how to make a platform game in 3D was the single largest design risk on the project.  We spent 9 months struggling with this before there was a single fun level.  However, by the time this happened we had formulated many of the basic concepts of the Crash gameplay.

We were trying to preserve all of the good elements of classic platform games.  To us this meant really good control, fast paced action, and progressively ramping challenges.  In order to maintain a very solid control feel we opted to keep the camera relatively stable, and to orient the control axis with respect to the camera.  Basically this means that Crash moves into the screen when you push up on the joypad.  This may seem obvious, but it was not at the time, and there are many 3D games which use different (and usually inferior) schemes.

Technical risk: how do you get the Playstation CPU and GPU to draw complex organic scenes with a high degree of texture and color complexity, good sorting, and a solid high resolution look?

It took quite a while, a few clever tricks, and not a little bit of assembly writing and rewriting of the polygon engines.  One of our major realizations was that on a CD based game system with a 33MHz processor, it is favorable to pre-compute many kinds of data in non real-time on the faster workstations, and then use a lean fast game engine to deliver high performance.
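The article doesn’t show what that pre-computation looked like, but the pattern is easy to sketch. Here is a minimal, hypothetical C illustration (all structures and names invented): the offline tool on the workstation does the slow math and bakes the results into the exact binary layout the engine uses, so the console does no parsing at all.

```c
#include <stdio.h>
#include <stdint.h>

/* Layout shared by the offline tool and the runtime engine. Baking to
   this exact format offline means the console does no parsing: it
   streams the bytes from CD and uses them in place. */
typedef struct {
    int16_t x, y, z, pad;     /* position, pre-quantized to 12.4 fixed point */
    uint8_t r, g, b, flags;   /* vertex color, lighting computed offline */
} BakedVertex;

/* Offline (workstation) side: do the slow math here, in non real-time. */
void bake_vertices(const float *pos, const float *light, int n, FILE *out)
{
    for (int i = 0; i < n; i++) {
        BakedVertex v;
        v.x = (int16_t)(pos[i * 3 + 0] * 16.0f);
        v.y = (int16_t)(pos[i * 3 + 1] * 16.0f);
        v.z = (int16_t)(pos[i * 3 + 2] * 16.0f);
        v.pad = 0;
        v.r = v.g = v.b = (uint8_t)(light[i] * 255.0f);  /* baked lighting */
        v.flags = 0;
        fwrite(&v, sizeof v, 1, out);
    }
}

/* Runtime (console) side: one read, zero per-vertex math. */
int load_vertices(FILE *in, BakedVertex *buf, int max)
{
    return (int)fread(buf, sizeof *buf, (size_t)max, in);
}
```

The design choice: every cycle the 33MHz console spends formatting or parsing data is a cycle stolen from the game, while the workstation has cycles to spare.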

Technical risk: how do the artists build and maintain roughly 1 million polygon levels with per poly and per vertex texture and color assignment?

The challenge of constructing large detailed levels turned out to be one of the biggest challenges of the whole project.  We didn’t want to duplicate the huge amount of work that has gone into making the commercial 3D modeling packages, so we chose to integrate with one of them.  We tried Softimage at first, but a number of factors caused us to switch to Alias PowerAnimator.  When we began the project it was not possible to load and view a one million polygon level on a 200MHz R4400 Indigo II Extreme.  We spent several months creating a system and tools by which smaller chunks of the level could be hierarchically assembled into a larger whole.

In addition, the commercial packages were not aware that anyone would desire per polygon and per vertex control over texture, color, and shading information.  They used a projective texture model preferred by the film and effects industry.  In order to maximize the limited amount of memory on the Playstation we knew we would need to have very detailed control.  So we created a suite of custom tools to aid in the assignment of surface details to PowerAnimator models.  Many of these features have since been folded into the commercial programs, but at the time we were among the first to make use of this style of model construction.

Technical risk: how do you get a 200MHz R4400 Indigo II to process a 1 million polygon level?

For the first time in our experience, it became necessary to put some real thought into the design of the offline data processing pipeline.  When we first wrote the level processing tool it took 20 hours to run a small test case.  A crisis ensued and we were forced to both seriously optimize the performance of the tool and multithread it so that the process could be distributed across a number of workstations.

Conventional wisdom says that game tools are child’s play.  Historically speaking, this is a fair judgment — 2D games almost never involve either sophisticated preprocessing or huge data sets.  But now that game consoles house dedicated polygon rendering hardware, the kid gloves are off.

In Crash Bandicoot players explore levels composed of over a million polygons.  Quick and dirty techniques that work for smaller data sets (e.g., repeated linear searches instead of binary searches or hash table lookups) no longer suffice.  Data structures now matter — choosing one that doesn’t scale well as the problem size increases leads to level processing tasks that take hours instead of seconds.
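As a hedged illustration of that scaling cliff (nothing below is from the actual Crash tools; all names and sizes are invented), compare a hash table against a repeated linear search for a typical level-processing task, de-duplicating shared vertices:

```c
#include <string.h>

#define TABLE_SIZE 65536          /* power of two for cheap masking */

typedef struct Entry {
    float key[3];                 /* vertex position */
    int index;                    /* shared vertex index */
    struct Entry *next;
} Entry;

static Entry *table[TABLE_SIZE];

static unsigned hash_pos(const float k[3])
{
    /* FNV-1a over the raw bytes of the position */
    unsigned h = 2166136261u;
    const unsigned char *p = (const unsigned char *)k;
    for (unsigned i = 0; i < 3 * sizeof(float); i++)
        h = (h ^ p[i]) * 16777619u;
    return h & (TABLE_SIZE - 1);
}

void add_shared_vertex(Entry *e)  /* caller owns the storage */
{
    unsigned h = hash_pos(e->key);
    e->next = table[h];
    table[h] = e;
}

/* O(1) expected: a million lookups stay about a million operations. */
int find_shared_vertex(const float k[3])
{
    for (Entry *e = table[hash_pos(k)]; e; e = e->next)
        if (memcmp(e->key, k, sizeof e->key) == 0)
            return e->index;
    return -1;
}

/* The quick-and-dirty version: O(n) per lookup, so de-duplicating n
   vertices costs O(n^2), which is hours instead of seconds at n = 10^6. */
int find_shared_vertex_slow(const Entry *all, int n, const float k[3])
{
    for (int i = 0; i < n; i++)
        if (memcmp(all[i].key, k, sizeof all[i].key) == 0)
            return all[i].index;
    return -1;
}
```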

The problems have gotten correspondingly harder, too.  Building an optimal BSP tree, finding ideal polygon strips, determining the best way to pack data into fixed-size pages for CD streaming — these are all tough problems by any metric, academic or practical.
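None of the actual algorithms are given in the article, but the flavor of the paging problem can be sketched with the classic first-fit decreasing heuristic for packing resources into fixed-size CD pages. This is a simplified stand-in, not the real tool:

```c
#include <stdlib.h>

#define PAGE_SIZE 65536     /* hypothetical fixed CD page size, in bytes */
#define MAX_PAGES 1024

static int page_used[MAX_PAGES];

static int cmp_desc(const void *a, const void *b)
{
    return *(const int *)b - *(const int *)a;
}

/* First-fit decreasing: sort resources biggest-first, then drop each
   into the first page with room. Bin packing is NP-hard, but this
   heuristic is provably within roughly 22% of optimal. Assumes every
   resource fits in a single page. */
int pack_pages(int *sizes, int n)
{
    int pages = 0;
    qsort(sizes, (size_t)n, sizeof *sizes, cmp_desc);
    for (int i = 0; i < n; i++) {
        int p = 0;
        while (p < pages && page_used[p] + sizes[i] > PAGE_SIZE)
            p++;
        if (p == pages)                 /* nothing fit: open a new page */
            page_used[pages++] = 0;
        page_used[p] += sizes[i];
    }
    return pages;                       /* CD pages consumed */
}
```

A real version adds constraints, such as resources that must be resident at the same time, which is part of what makes the problem genuinely hard.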

To make matters worse, game tools undergo constant revision as the run-time engine evolves towards the bleeding edge of available technology.  Unlike many jobs, where programmers write functional units according to a rigid a priori specification, games begin with a vague “what-if” technical spec — one that inevitably changes as the team learns how to best exploit the target machine for graphics and gameplay.

The Crash tools became a test bed for developing techniques for large database management, parallel execution, data flexibility, and complicated compression and bin packing techniques.

Art / Technical risk: how do you make low poly 3D characters that don’t look like the “Money for Nothing” video?

From the beginning, the Crash art design was very cartoon in style.  We wanted to back up our organic stylized environments with highly animated cartoon characters that looked 3D, but not polygonal.  By using a single skinned polygonal mesh model similar to the kind used in cutting edge special effects shots (except with a lot fewer polygons), we were able to create a three dimensional cartoon look.  Unlike the traditional “chain of sausages” style of modeling, the single skin allows interesting “squash and stretch” style animation like that in traditional cartoons.
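At the vertex level, a “single skin” means each vertex is blended between the bones that influence it instead of belonging rigidly to one segment. A rough, hypothetical C sketch of the idea (matrix layout and influence limits invented):

```c
typedef struct { float m[3][4]; } BoneXform;   /* 3x4 affine transform */

typedef struct {
    float pos[3];      /* rest-pose position */
    int   bone[2];     /* up to two influencing bones */
    float weight[2];   /* blend weights summing to 1.0 */
} SkinVertex;

/* Transform one skinned vertex as a weighted blend of its bones'
   transforms. Where weights are split across a joint the surface bends
   smoothly, enabling squash-and-stretch, instead of creasing at a
   rigid "sausage" boundary. */
void skin_vertex(const SkinVertex *v, const BoneXform *bones, float out[3])
{
    for (int axis = 0; axis < 3; axis++) {
        out[axis] = 0.0f;
        for (int i = 0; i < 2; i++) {
            const float *row = bones[v->bone[i]].m[axis];
            out[axis] += v->weight[i] *
                (row[0] * v->pos[0] + row[1] * v->pos[1] +
                 row[2] * v->pos[2] + row[3]);
        }
    }
}
```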

By very careful hand modeling, and judicious use of both textured and shaded polygons, we were able to keep these models within reasonable polygon limits.  In addition, it was our belief that because Crash was the most important thing in the game, he deserved a substantial percentage of the game’s resources.  Our animation system allows Crash to have unique facial expressions for each animation, helping to convey his personality.

Technical risk: how do you fit a million polygons, tons of textures, thousands of frames of animation, and lots of creatures into a couple megs of memory?

Perhaps the single largest technical risk of the entire project was the memory issue.  Although there was a plan from the beginning, this issue was not tackled until February of 1996.  At this point we had over 20 levels in various stages of completion, all of which consumed between 2 and 5 megabytes of memory.  They had to fit into about 1.2 megabytes of active area.

At the beginning of the project we had decided that the CD was the system resource least likely to be fully utilized, and that system memory (of various sorts) was going to be one of the greatest constraints.  We planned to trade CD bandwidth and space for increased level size.

The Crash series employs an extremely complicated virtual memory scheme which dynamically swaps into memory any kind of game component: geometry, animation, texture, code, sound, collision data, camera data, etc.  A workstation based tool called NPT implements an expert system for laying out the disk.  This tool belongs to the class of formal Artificial Intelligence programs.  Its job is to figure out how the 500 to 1000 resources that make up a Crash level can be arranged so as to never have more than 1.2 megabytes needed in memory at any time.  A multithreaded virtual memory implementation follows the instructions produced by the tool in order to achieve this effect at run time.  Together they manage and optimize the essential resources of main, texture, and sound RAM based on a larger CD based database.
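NPT itself has never been published, so the following is only a hedged sketch of what the runtime half of such a scheme might look like (all structures are hypothetical, and cd_read_async stands in for whatever asynchronous CD service the platform provides). The key idea is that the engine makes no decisions; it executes a load schedule the offline tool has already proven never exceeds the memory budget.

```c
#include <stdint.h>

/* One instruction from the offline layout tool: when the player reaches
   `trigger`, resource `id` must be streamed from `cd_sector` into heap
   slot `slot`, replacing whatever the tool scheduled out of that slot. */
typedef struct {
    uint32_t trigger;      /* progress marker along the level */
    uint16_t id;           /* geometry, animation, code, sound, ... */
    uint16_t slot;         /* destination chosen offline */
    uint32_t cd_sector;    /* location on disc */
} LoadOp;

/* Stand-in for the platform's asynchronous CD read service. */
extern void cd_read_async(uint32_t sector, uint16_t slot);

/* The runtime is deliberately dumb: it just follows the script. The
   hard guarantee (never more than ~1.2 MB resident) was proven offline
   by the layout tool, not enforced here. */
void vm_update(const LoadOp *ops, int n, uint32_t progress, int *cursor)
{
    while (*cursor < n && ops[*cursor].trigger <= progress) {
        cd_read_async(ops[*cursor].cd_sector, ops[*cursor].slot);
        (*cursor)++;
    }
}
```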

Technical/Design risk: what to do with the camera?

With the 32 bit generation of games, cameras have become a first class character in any 3D game.  However, we did not realize this until considerably into the project.  Crash represents our first tentative stab at how to do an aesthetic job of controlling the camera without detracting from gameplay.  Although it was rewritten perhaps five times during the project, the final camera is fairly straightforward from the perspective of the user.  None of the crop of 1995 and 1996 3D action games played very well until Mario 64 and Crash.  These two games, while very different, were released within two months of each other, and we were essentially finished with Crash when we first saw Mario.  Earlier games had tended to induce motion sickness and made it difficult for players to quickly judge the layout of the scene.  In order to enhance the tight, high impact feel of Crash’s gameplay, we were fairly conservative with the camera.  As a result Crash retains the quick action feel of the traditional 2D platform game more faithfully than other 3D games.

Technical risk: how do you make a character collide in a reasonable fashion with an arbitrary 3D world… at 30 frames a second?

Another of the game’s more difficult challenges was collision detection.  From the beginning we believed this would be difficult, and indeed it was.  For action games, collision is a critical part of the overall feel of the game.  Since the player is looking down on a character in the third person, he is intimately aware when the collision does not react reasonably.

Crash can often be within a meter or two of several hundred polygons.  This means that the game has to store and process a great deal of data in order to calculate the collision reactions.  We had to comb through the computer science literature for innovative new ways of compressing and storing this database.  One of our programmers spent better than six months on the collision detection part of the game, writing and rewriting the entire system half a dozen times.  Finally, with some very clever ideas, and a lot of hacks, it ended up working reasonably well.
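The actual scheme isn’t described, but most approaches in that literature share one core idea: spatially partition the polygons offline so that only a handful of candidates need exact tests each frame. A hypothetical uniform-grid sketch in C (cell sizes and limits invented):

```c
#include <math.h>

#define GRID_DIM     64
#define CELL_SIZE    2.0f      /* meters per cell, hypothetical */
#define MAX_PER_CELL 32

typedef struct {
    int count;
    int poly[MAX_PER_CELL];    /* indices into the level's polygon list */
} Cell;

static Cell grid[GRID_DIM][GRID_DIM];   /* built offline, stored compressed */

static int clamp_cell(float x)
{
    int c = (int)floorf(x / CELL_SIZE);
    if (c < 0) c = 0;
    if (c >= GRID_DIM) c = GRID_DIM - 1;
    return c;
}

/* Gather candidate polygons near the character. Only these few dozen
   get the expensive exact intersection tests, instead of every polygon
   within rendering range. */
int gather_candidates(float x, float z, float radius, int *out, int max_out)
{
    int n = 0;
    int x0 = clamp_cell(x - radius), x1 = clamp_cell(x + radius);
    int z0 = clamp_cell(z - radius), z1 = clamp_cell(z + radius);
    for (int i = x0; i <= x1; i++)
        for (int j = z0; j <= z1; j++)
            for (int k = 0; k < grid[i][j].count && n < max_out; k++)
                out[n++] = grid[i][j].poly[k];
    return n;
}
```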

Technical risk: how do you program, coordinate, and maintain the code for several hundred different game objects?

Object control code, which the gaming world euphemistically calls AI, typically runs only a couple of times per frame. For this kind of code, speed of implementation, flexibility, and ease of later modification are the most important requirements.  This is because games are all about gameplay, and good gameplay only comes from constant experimentation with and extensive reworking of the code that controls the game’s objects.

The constructs and abstractions of standard programming languages are not well suited to object authoring, particularly when it comes to flow of control and state.  For Crash Bandicoot we implemented GOOL (Game Oriented Object LISP), a compiled language designed specifically for object control code that addresses the limitations of traditional languages.

Having a custom language whose primitives and constructs both lend themselves to the general task (object programming), and are customizable to the specific task (a particular object), makes it much easier to write clean descriptive code very quickly.  GOOL makes it possible to prototype a new creature or object in as little as 10 minutes.  New things can be tried and quickly elaborated or discarded.  If the object doesn’t work out it can be pulled from the game in seconds without leaving any hard to find and wasteful traces behind in the source.  In addition, since GOOL is a compiled language produced by an advanced register coloring compiler with reductions, flow analysis, and simple continuations, it is at least as efficient as C, more so in many cases because of its more specific knowledge of the task at hand.  The use of a custom compiler allowed us to escape many of the classic problems of C.
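GOOL source isn’t reproduced in the article, so as a hedged illustration only, here is the kind of hand-written C state machine that a language like GOOL compiles down to, and whose boilerplate it eliminates. Everything below is invented for the example:

```c
/* A hypothetical enemy turtle, written the long way in C. */
typedef enum { TURTLE_WALK, TURTLE_FLIPPED, TURTLE_DEAD } TurtleState;

typedef struct {
    TurtleState state;
    int   timer;        /* frames remaining in the current state */
    float x, vx;
} Turtle;

void turtle_update(Turtle *t)
{
    switch (t->state) {
    case TURTLE_WALK:
        t->x += t->vx;                   /* patrol */
        break;
    case TURTLE_FLIPPED:
        if (--t->timer <= 0) {           /* recover and walk the other way */
            t->state = TURTLE_WALK;
            t->vx = -1.0f;
        }
        break;
    case TURTLE_DEAD:
        break;                           /* object manager removes it */
    }
}

/* The kind of event handler a GOOL-like language expresses in a line
   or two of declarative code. */
void turtle_stomped(Turtle *t)
{
    t->state = TURTLE_FLIPPED;
    t->timer = 90;                       /* 3 seconds at 30 fps */
    t->vx = 0.0f;
}
```

In a GOOL-like language each state and event handler would be a few declarative lines, and deleting the object removes all of it; in C the equivalent logic ends up scattered across enums, structs, and switch statements.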

Crash Bandicoot 2: Cortex Strikes Back  –   Bigger and Badder!

Development: October 1996 – November 1997

Staff: 14 people: 4 programmers, 6 artists, 1 designer, 3 support

Premise: Make a sequel to the best selling Crash Bandicoot that delivered on all the good elements of the first game, as well as correcting many of our mistakes.  We wanted to increase the technical muscle of the game and improve upon the gameplay, all without looking “been there, done that…” and all in one year.

For Crash 2 we rewrote approximately 80% of the game engine and tool code.  We did so module by module in order to allow continuous development of game levels.  Having learned during Crash 1 about what we really needed out of each module we proceeded to rewrite them rapidly so that they offered greater speed and flexibility.

Technical risk: A fancy new tools pipeline designed to deal with a constantly changing game engine?

The workstation based tools pipeline was a crucial part of Crash 1.  However, at the time of its original conception, it was not clear that this was going to be the case.  The new Crash 2 tools pipe was built around a consistent database structure designed to allow the evolution of level databases, automatic I/O for complex data types, data browsing and searching, and a number of other features.  The pipe was modularized and various built-in restrictions were removed.  The new pipe was able to support the easy addition of arbitrary new types of data and information to various objects without outdating old information.

At the beginning of the first game we could never have designed a tool program clean enough to handle the changes and additions of Crash 2 and Warped.  Being aware of what was needed at the start of the rewrite allowed us to design a general infrastructure that could support all of the features we had in mind.  This infrastructure was then flexible enough to support the new features added to both sequels.
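The article doesn’t specify the database format, but the “add new data without outdating old information” property usually comes from something like self-describing, tagged records. A minimal hypothetical sketch:

```c
#include <stdint.h>
#include <stdio.h>

/* Every chunk in the database carries its own type tag and length, so
   a tool that doesn't understand a new chunk type can skip it instead
   of choking. Old data stays valid as new kinds of data are added. */
typedef struct {
    uint32_t type;      /* e.g. 'GEOM', 'CAMR', 'OBJI', ... */
    uint32_t length;    /* bytes of payload that follow */
} ChunkHeader;

void scan_database(FILE *f, void (*handle)(uint32_t type, long offset))
{
    ChunkHeader h;
    while (fread(&h, sizeof h, 1, f) == 1) {
        long payload = ftell(f);
        if (handle)
            handle(h.type, payload);     /* known types get processed */
        fseek(f, payload + (long)h.length, SEEK_SET);  /* others: skip */
    }
}
```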

Technical/process risk: The process of making and refining levels took too long during the first game.  Can we improve it?

The most significant bottleneck in making Crash 1 was the overall time it took to build and tune a level.  So for Crash 2 we took a serious look at this process and attempted to improve it.

For the artists, the task of surfacing polygons (applying texture and color) was very time consuming.  Therefore, we made improvements to our surfacing tools.

For both the artists and designers, the specification of different resources in the level was exceedingly tedious.  So we added a number of modules to the tools pipeline designed to automatically balance and distribute many of these resources, as well as to auto calculate the active ranges of objects and other resources that had to be controlled manually in the first game.  In addition, we moved the specification of camera, camera info, game objects, and game object info into new text based configuration files.  These files allowed programmers and designers to edit and add information more easily, and allowed the programmers to add new kinds of information quickly.

The result of this process was not really that levels took any less time to make, but that the complexity allowed was several times that of the first game.  Crash 2 levels are about twice as large, have integrated bonus levels, multiple branches, “hard paths,” and three or four times as many creatures, each with an order of magnitude more settable parameters.  The overall turnaround time for changing tunable level information was brought down significantly.

Technical/Design risk: can we make a better more flexible camera?

The camera was one of the things in Crash 1 with which we were least satisfied.  So in order to open up the game and make it feel more lifelike, we allowed the camera to look around much more, and supported a much wider set of branching and transition cameras.  In addition, arbitrary parameterized information was added to the camera system so that at any location the camera had more than 100 possible settable options.

If the two games are compared side by side, it can be seen that the overall layouts of Crash 2 levels are much larger and more complicated.  The camera is more natural and fluid, and there are numerous dynamic camera transitions and effects which were not present in the first game.  Even though the Crash 2 camera was written entirely from scratch, the lessons learned during the course of Crash 1 allowed it to be more sophisticated and aggressive, and it executed faster than its predecessor.

Optimization risk: can we put more on screen?

Crash 1 was one of the fastest games of its generation, delivering high detail images at 30 frames per second.  Nevertheless, for Crash 2 we wanted to put twice as much on screen, yet still maintain that frame-rate.  In order to achieve this goal we had one programmer doing nothing but re-coding areas of the engine into better assembly for the entire length of the project.  Dramatically increasing performance does not just mean moving instructions around; it is a complex and involved process.  First we study the performance of all relevant areas of the hardware in a scientific and systematic fashion.  Profiles are made of cache latencies, coprocessor parallel processing constraints, etc.  Game data structures are then carefully rearranged to aid the engine in loading and processing them in the most efficient way.  Complicated compression and caching schemes are explored to both reduce storage size (often linked to performance due to bus bandwidth) and to speed up the code.
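As a hedged, generic example of the “rearrange data structures to aid the engine” step (not Naughty Dog’s actual layouts), here is the classic array-of-structures versus structure-of-arrays trade, where a loop that touches only positions stops paying cache traffic for colors and UVs:

```c
/* Before: "array of structures". A loop that only needs positions
   still drags every vertex's color and UV bytes through the cache. */
typedef struct {
    short x, y, z, pad;
    unsigned char r, g, b, a;
    short u, v;
} VertexAoS;            /* 16 bytes per vertex */

/* After: "structure of arrays". Hot data is packed contiguously;
   cold data is only touched by the code that actually draws. */
typedef struct {
    short *x, *y, *z;
    unsigned char *rgba;
    short *uv;
} VertexSoA;

long sum_x_aos(const VertexAoS *v, int n)
{
    long s = 0;
    for (int i = 0; i < n; i++)
        s += v[i].x;    /* 16 bytes of cache traffic per 2 useful bytes */
    return s;
}

long sum_x_soa(const VertexSoA *v, int n)
{
    long s = 0;
    for (int i = 0; i < n; i++)
        s += v->x[i];   /* every byte fetched is used */
    return s;
}
```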

Simultaneously we modularized the game engine to add more flexibility and features.  Crash 2 has more effects, such as Z-buffer-like water effects, weather, reflections, particles, talking hologram heads, etc.  Many annoying limitations of the Crash 1 drawing pipeline were removed, and most importantly, the overall speed was increased by more than two-fold.

In order to further improve performance and allow more simultaneous creatures on screen, we re-coded the GOOL interpreter into assembly, and also modified the compiler to produce native MIPS assembly for even better performance.

Technical risk: if we can put more on screen, can we fit it in memory?

We firmly believe that all three Crash games make use of the CD in a more aggressive fashion than most Playstation games.  So in order to fit the even larger Crash 2 levels into memory (often up to 12 megabytes a level) we had to increase the efficiency of the virtual memory scheme even more.  To do so we rewrote the AI that lays out the CD, employing several new algorithms.  Since different levels need different solutions we created a system by which the program could automatically try different approaches with different parameters, and then pick the best one.

In addition, since Crash 2 has about 8 times the animation of the first game, we needed to really reduce the size of the data without sacrificing the quality of the animation.  After numerous rewrites the animation was stored as a special bitstream compressed in all 4 dimensions.
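The exact format isn’t public; “compressed in all 4 dimensions” presumably means across the three spatial axes plus time. As a sketch of the general family of techniques, here is a hypothetical delta-plus-variable-width bitstream decoder for one animation channel:

```c
#include <stdint.h>

/* Bit reader over a packed stream, least significant bit first. */
typedef struct {
    const uint8_t *data;
    uint32_t bitpos;
} BitReader;

static uint32_t read_bits(BitReader *br, int bits)
{
    uint32_t v = 0;
    for (int i = 0; i < bits; i++) {
        uint32_t byte = br->bitpos >> 3, bit = br->bitpos & 7;
        v |= (uint32_t)((br->data[byte] >> bit) & 1u) << i;
        br->bitpos++;
    }
    return v;
}

/* Sign-extend an n-bit two's-complement value. */
static int32_t sext(uint32_t v, int bits)
{
    uint32_t m = 1u << (bits - 1);
    return (int32_t)((v ^ m) - m);
}

/* Each channel stores its first key raw, then per-frame deltas
   quantized to `bits` bits: smooth motion means small deltas, and
   small deltas cost very few bits. */
void decode_channel(BitReader *br, int bits, int frames, int16_t *out)
{
    int32_t val = sext(read_bits(br, 16), 16);    /* raw first key */
    out[0] = (int16_t)val;
    for (int f = 1; f < frames; f++) {
        val += sext(read_bits(br, bits), bits);
        out[f] = (int16_t)val;
    }
}
```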

Design risk: can we deliver a gameplay experience that is more than just “additional levels of Crash?”

We believe that game sequels are more than an opportunity to just go “back to the bank.”  For both of the Crash sequels we tried to give the player a new game, that while very much in the same style, was empirically a bigger, better game.  So with the increased capacity of the new Crash 2 engine we attempted to build larger more interesting levels with a greater variety of gameplay, and a more even and carefully constructed level of difficulty progression.  Crash 2 has about twice as many creatures as Crash 1, and their behaviors are significantly more sophisticated.  For example, instead of just putting the original “turtle” back into the game, we added two new and improved turtles, which had all the attributes of the Crash 1 turtle, but also had some additional differences and features.  In this manner we tried to build on the work from the first game.

Crash himself remains the best example.  In the second game Crash retains all of the moves from the first, but gains a number of interesting additional moves: crawling, ducking, sliding, belly flopping, plus dozens of custom coded animated death sequences.  Additionally, Crash has a number of new control specs: ice, surfboard, jet-pack, baby bear riding, underground digging, and hanging.  These mechanics provide entirely new kinds of gameplay that help increase the variety and fun factor of the game.  It would be very difficult to include all of these in a first generation game because so much time is spent refining the basic mechanic.

Technically, these additions and enhancements were aided by the new more flexible information specification of the new tools pipeline, and by additions to the GOOL programming language based on lessons learned from the first game.

Crash Bandicoot: Warped!  –   Every trick in the book!

Development: January 1998 – November 1998

Staff: 15 people: 3 programmers, 7 artists, 3 designers, 2 support

Premise: With only 9 months in which to finish by Christmas, we gave ourselves the challenge of making a third Crash game which would be even cooler and more fun than the previous one.  We chose a new time travel theme and wanted to differentiate the graphic look and really increase the amount and variety of gameplay.  This included power-ups, better bosses, lots of new control mechanics, an open look, and multiple playable characters.

Technical/Process risk: the tight deadline and a smaller programming staff required us to explore options for even greater efficiency.

The Crash Warped production schedule required that we complete a level every week.  This was nearly twice the rate typical of Crash levels.  In addition, many of the new levels for Warped required new engines or sub-engines designed to give them a more free-roaming 3D style.  In order to facilitate this process we wrote an interactive listener which allowed GOOL based game objects to be dynamically examined, debugged, and tuned.  We were then able to set the parameters and features of objects in real-time, greatly improving our ability to tune and debug levels.  Various other visual debugging and diagnostic techniques were introduced as well.

Knowledge from the previous game allowed us to further pipeline various processes.  The Crash series is heavily localized for different territories.  The European version supports five languages, text and speech, including lip sync.  In addition, it was entirely re-timed, and the animation was resampled for 25Hz.  The Japanese version has Pocketstation support, a complete language translation, and a number of additional country specific features.  We were able to build in the features needed to make this happen as we wrote the US version of the game.  The GOOL language was expanded to allow near automatic conversion of character control timing for PAL.

Technical/Art risk: could the trademark look of the Crash series be opened up to offer long distance views and to deliver levels with free-roaming style gameplay?

In order to further differentiate the third Crash game, we modified the engine to support long distance views and Level of Detail (LOD) features.  Crash Warped has a much more open look than the previous games, with views up to ten times as far.  The background polygon resource manager needed some serious reworking in order to handle this kind of increased polygon load, as did the AI memory manager.  We developed the new LOD system to help manage these distance views.  These kinds of system complexities would not have been feasible in a first generation game, since when we started Crash 1, the concept of LOD in games was almost completely undeveloped, and just getting a general engine working was enough of a technical hurdle.
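The article doesn’t detail the LOD mechanism; a minimal, hypothetical sketch of the standard distance-based selection it implies might look like this:

```c
/* Each piece of background geometry is stored at several polygon
   counts; the engine draws (and keeps resident) only the level of
   detail the distance justifies. */
typedef struct {
    float switch_dist[3];   /* switch distances, tuned per object */
    void *mesh[3];          /* high, medium, low polygon versions */
} LodObject;

void *select_lod(const LodObject *obj, float cam_x, float cam_z,
                 float obj_x, float obj_z)
{
    float dx = obj_x - cam_x, dz = obj_z - cam_z;
    float d2 = dx * dx + dz * dz;    /* squared distance: no sqrt needed */
    for (int i = 0; i < 3; i++)
        if (d2 < obj->switch_dist[i] * obj->switch_dist[i])
            return obj->mesh[i];
    return 0;                        /* beyond the far plane: cull */
}
```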

Similarly, the stability of the main engine allowed us to concentrate more programmer time on creating and polishing the new sub-engines:  jet-ski, motorcycle, and biplane.

Gameplay risk: could we make the gameplay in the new Crash significantly different from the previous ones and yet maintain the good elements of the first two games?

The new free-roaming style levels presented a great gameplay challenge.  We felt it necessary to maintain the fast-paced, forward driven Crash style of gameplay even in this new context.  The jet-ski in particular represented a new kind of level that was not present in the first two games.  It is part race game, part vehicle game, and part regular Crash level.  By combining familiar elements like the boxes and creatures with the new mechanics, we could add to the gameplay variety without sacrificing the consistency of the game.

In addition to jet-ski, biplane, and motorcycle levels, we also added a number of other new mechanics (swimming, bazooka, baby T-rex, etc.) and brought back most of Crash 2’s extensive control set.  We tried to give each level one or more special hooks by adding gameplay and effect features.  Warped has nearly twice as many different creatures and gameplay modes as Crash 2.  The third game clocked in at 122,000 lines of GOOL object control code, as compared to 68,000 for the second game and 49,000 for the first!  The stability of the basic system and the proven technical structure allowed the programmers to concentrate on gameplay features, packing more fun into the game.  This was only possible because on a fixed hardware like the Playstation, we were fairly confident that the Warped engine was reasonably optimal for the Crash style of game.  Had we been making the game for a moving target such as the PC, we would have been forced to spend significant time updating to match the new target, and would have not been able to focus on gameplay.

Furthermore, we had time, even with such a tight schedule, to add more game longevity features.  The Japanese version of Warped has Pocketstation support.  We improved the quality of the boss characters significantly, improved the tuning of the game, added power-ups that can be taken back to previously played levels, and added a cool new time trial mode.  Crash games have always had two modes of play for each level: completion (represented by crystals) and box completion (represented by gems).  In Warped we added the time trial mode (represented by relics).  This innovative new gameplay mode allows players to compete against themselves, each other, and preset goals in the area of timed level completion.  Because of this each level has much more replay value and it takes more than twice as long to complete Warped with 100% as it does Crash 2.

Technical risk: more more more!

As usual, we felt the need to add lots more to the new game.  Since most of Crash 2’s animations were still appropriate, we concentrated on adding new ones.  Warped has a unique animated death for nearly every way in which Crash can lose a life.  It has several times the animation of the second game.  In addition, we added new effects like the arbitrary water surface, and large scale water effects.  Every character, including Crash, got a fancy new shadow that mirrors the animated shape of the character.

All these additions forced us to squeeze even harder to get the levels into memory.  Additional code overlays, redundant code mergers, and the sacrifice of thirteen polka dotted goats to the level compression AI were necessary.

Conclusions

In conclusion, the consistency of the console hardware platform over its lifetime gives developers an opportunity to successively improve their code, taking advantage of techniques and knowledge learned both by themselves and by others.  With each additional game the amount of basic infrastructure programming that must be done is reduced, and so more energy can be put into other pursuits, such as graphical and gameplay refinements.

_

Yet more Crash Bandicoot posts can be found here.


Crash Bandicoot – Teaching an Old Dog New Bits – part 2

This is the eleventh of a now lengthy series of posts on the making of Crash Bandicoot. Click here for the PREVIOUS or for the BEGINNING of the whole mess.

The text below is another journal article I wrote on making Crash in 1999. This is the second part, the FIRST can be found here.

 

And finally to the point!

Both the rapid lifecycle of a video game console and the consistency of the hardware promote video game development strategies that are often very different from the strategies used to make PC video games.   A side-effect of these strategies and the console development environment is that video games released later in the life of a console tend to be incrementally more impressive than earlier titles, even though the hardware hasn’t changed.  Theoretically, since the hardware doesn’t change, first generation software could be just as impressive as later generation titles, but in reality this is seldom the case.  It may seem obvious that a developer should try to make a first generation title as impressive as a last generation title, but actually this strategy has been the downfall of many talented developers.  There are many good and valid reasons why software improves over time, and understanding and strategizing about these reasons can greatly improve a developer’s chances of being successful in the marketplace.

Difficulties of Console Video Game Development

There are many difficulties that are encountered when developing a console video game, but the following is a list of several major issues:

  • Learning curve
  • Hardware availability and reliability
  • Bottlenecks
  • Operating System / Libraries
  • Development tools
  • In-house tools
  • Reuse of code
  • Optimization

Learning curve

The learning curve may be the most obvious of all difficulties, and is often one of the most disruptive elements of a video game’s development schedule.  In the past, video games were often developed by small groups of one or more people, had small budgets, ran in a small amount of memory, and had short schedules.  The graphics were almost always 2D, and the mathematics of the game were rarely more than simple algebra.  Today, video games have become much more complicated, and often require extremely sophisticated algorithms and mathematics.  Also, the sheer size of the data within a game has made both the run-time code and the tool pipeline require extremely sophisticated solutions for data management issues.  Furthermore, 3D mathematics and rendering can be very CPU intensive, so new tricks and techniques are constantly being created.   Also, the developer will often have to use complex commercial tools, such as 3D modeling packages, to generate the game’s graphics and data.  Add to this the fact that operating systems, APIs, and hardware components are continually changing, and it should be obvious that just staying current with the latest technology requires an incredible amount of time, and can have a big impact on the schedule of a game.

The console video game developer has the additional burden that, unlike the PC where the hardware evolves more on a component or API level, new console hardware is normally drastically different from and more powerful than the preceding hardware.  The console developer has to learn many new things, such as new CPUs, new operating systems, new libraries, new graphics devices, new audio devices, new peripherals, new storage devices, new DMA techniques, new co-processors, as well as various other hardware components.  Also, the console developer usually has to learn a new development environment, including a new C compiler, a new assembler, a new debugger, and a slew of new support tools.  To complicate matters, new consoles normally have many bugs in such things as the hardware, the operating system, the software libraries, and in the various components of the development environment.

The learning curve of the console hardware is logarithmic: it is very steep at first, but tends to drop off dramatically by the end of the console life-span.  This initial steep learning curve is why first generation software often isn’t as good as later software.

Hardware availability and reliability

Hardware isn’t very useful without software, and software takes a long time to develop, so it is important for hardware developers to encourage software developers to begin software development well in advance of the launch date of the hardware.  It is not uncommon for developers to begin working on a title even before the hardware development kits are available.  To do this, developers will start working on things that don’t depend on the hardware, such as some common tools, and they may also resort to emulating the hardware through software emulation.  Obviously, this technique is not likely to produce software that maximizes the performance of the hardware, but it is done nevertheless because of the time constraints of finishing a product as close as possible to the launch of the console into the market.  The finished first generation game’s performance is not going to be as good as later generations of games, but this compromise is deemed acceptable in order to achieve the desired schedule.

When the hardware does become available for developers, it is usually only available in limited quantity, is normally very expensive, and eventually ends up being replaced by cheaper and more reliable versions of the hardware at some later time.  Early revisions of the hardware may not be fully functional, or may have components that run at a reduced speed, so they are difficult to fully assess, and are quite scarce since the hardware developer doesn’t want to make very many of them.  Even when more dependable hardware development kits become available, they are usually difficult to get, since production of these kits is slow and expensive, so quantities are low, and software developers are in competition to get them.

The development kits, especially the initial hardware, tend to have bugs that have to be worked around or avoided.  Also, the hardware tends to have contact connection problems so that it is susceptible to vibrations, oxidation, and overheating.  These problems generally improve with new revisions of the development hardware.

All of these reasons will contribute to both a significant initial learning curve, and a physical bottleneck of having an insufficient number of development kits.   This will have a negative impact on a game’s schedule, and the quality of first generation software often suffers as a consequence.

Bottlenecks

An extremely important aspect to console game development is the analysis of the console’s bottlenecks, strengths, weaknesses, and overall performance.  This is critical for developing high performance games, since each component of the console has a fixed theoretical maximum performance, and undershooting that performance may cause your game to appear under-powered, while overshooting may cause you to have to do major reworking of the game’s programming and/or design.  Also, overshooting performance may cause the game to run at an undesirable frame rate, which could compromise the look and feel of the game.

The clever developer will try to design the game to exploit the strengths of the machine, and circumvent the weaknesses.  To do this, the developer must be as familiar as possible with the limitations of the machine.  First, the developer will look at the schematic of the hardware to find out the documented sizes, speeds, connections, caches, and transfer rates of the hardware.  Next, the developer should do hands-on analysis of the machine to look for common weaknesses, such as:  slow CPU’s, limited main memory, limited video memory, limited sound memory, slow BUS speeds, slow RAM access, small data caches, small instruction caches, small texture caches, slow storage devices, slow 3D math support, slow interrupt handling, slow game controller reading, slow system routines, and slow polygon rendering speeds.  Some of these things are easy to analyze, such as the size of video memory, but some of these things are much trickier, such as polygon rendering speeds, because the speed will vary based on many factors, such as source size, destination size, texture bit depth, caching, translucency, and z-buffering, to name just a few.  The developer will need to write several pieces of test code to study the performance of the various hardware components, and should not necessarily trust the statistics found in the documentation, since these are often wrong or misleading.
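As a hedged sketch of what such hands-on test code might look like (hw_timer and fill_rect are stand-ins for platform services, not real APIs), the pattern is simply to time the operation across a sweep of parameters and compare against the documented numbers:

```c
#include <stdio.h>

/* hw_timer() stands in for whatever free-running counter the console
   exposes; fill_rect() is the operation under test. Neither is a real
   API. Sweeping the parameters reveals how speed actually varies with
   size, bit depth, translucency, and so on. */
extern unsigned hw_timer(void);
extern void fill_rect(int w, int h, int bpp);

void benchmark_fill(void)
{
    static const int sizes[] = { 16, 32, 64, 128, 256 };
    for (int s = 0; s < 5; s++) {
        unsigned t0 = hw_timer();
        for (int i = 0; i < 1000; i++)    /* repeat to amortize overhead */
            fill_rect(sizes[s], sizes[s], 16);
        unsigned t1 = hw_timer();
        printf("%3dx%3d: %u ticks per fill\n",
               sizes[s], sizes[s], (t1 - t0) / 1000);
    }
}
```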

A developer should use a profiler to analyze where speed losses are occurring in the run-time code.  Most programmers will spend time optimizing code because they suspect that it is slow, but don’t have any empirical proof.  This lack of empirical data means that the programmer will invariably waste a lot of time optimizing things that don’t really need to be optimized, and will not optimize things that would have greatly benefited from optimization. Unfortunately, a decent profiler is almost never included in the development software, so it is usually up to the individual developer to write his own profiling software.
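A roll-your-own profiler of the kind described can be surprisingly small. A minimal hypothetical instrumentation sketch, again assuming some free-running hardware counter:

```c
#include <stdio.h>

extern unsigned hw_timer(void);   /* assumed free-running counter */

#define MAX_ZONES 64

static struct {
    const char *name;
    unsigned total, start;
} zones[MAX_ZONES];
static int nzones;

int prof_zone(const char *name)   /* register a zone once, keep the id */
{
    zones[nzones].name = name;
    return nzones++;
}

void prof_begin(int id) { zones[id].start = hw_timer(); }
void prof_end(int id)   { zones[id].total += hw_timer() - zones[id].start; }

/* Dump at the end of a frame or a level: the table replaces guesswork
   about what is slow with actual numbers. */
void prof_dump(void)
{
    for (int i = 0; i < nzones; i++)
        printf("%-20s %10u ticks\n", zones[i].name, zones[i].total);
}
```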

The testing of performance is an extremely important tool to use in order to maximize performance.  Often the reason why software improves between generations is that the developers slowly learn over time how to fully understand the bottlenecks, how to circumvent the bottlenecks, and how to identify what actually constitutes a bottleneck.

Operating system / Libraries

Although the consoles tend to have very small operating systems and libraries when compared to the operating systems found on the PC, they are still an important factor of console video game development.

Operating systems and support libraries on video game consoles are used to fill many needs.  One such need is that the hardware developer will often attempt to save money on the production of console hardware by switching to cheaper components, or by integrating various components together.  It is up to the operating system to enable these changes, while having the effects of these changes be transparent to both the consumer and the developer.  The more that the operating system abstracts the hardware, the easier it is for the hardware developer to make changes to the hardware.  However, remember that this abstraction of the hardware comes at the price of reduced potential performance.  Also, the operating system and support libraries will commonly provide code for using the various components of the console.  This has the advantage that developers don’t have to know the low-level details of the hardware, and also potentially saves time since different developers won’t have to spend time creating their own versions of these libraries.  The advantage of not having to write this low level code is important in early generation projects, because the learning curve for the hardware is already quite high, and there may not be time in the schedule for doing very much of this kind of low-level optimization.  Clever developers will slowly replace the system libraries over time, especially with the speed critical subroutines, such as 3D vector math and polygonal set-up.  Also, the hardware developer will occasionally improve upon poorly written libraries, so even the less clever developers will eventually benefit from these optimizations. Improvements to the system libraries are a big reason why later generation games can increase dramatically in performance.

Development tools

On the PC, development tools have evolved over the years, and have become quite sophisticated.  Commercial companies have focused years of efforts on making powerful, optimal, polished, and easy to use development tools.  In contrast, the development tools provided for console video game development are generally provided by the hardware manufacturer, and are usually poorly constructed, have many bugs, are difficult to use, and do not produce optimal results.  For example, the C compiler usually doesn’t optimize very well; the debugger is often crude and, ironically, has many bugs; and there usually isn’t a decent software profiler.

Initially developers will rely on these tools, and the first few generations of software will be adversely affected by their poor quality.  Over time, clever programmers will become less reliant on the tools that are provided, or will develop techniques to work around the weaknesses of the tools.

In-house tools

In-house tools are one of the most important aspects of producing high performance console video game software.  Efficient tools have always been important, but as the data content in video games has grown exponentially over the last few years, in-house tools have become increasingly more important to the overall development process.  In the not too distant future, the focus on tool programming techniques may even exceed the focus on run-time programming issues.  It is not unreasonable that the most impressive video games in the future may end up being the ones that have the best support tools.

In-house tools tend to evolve to fill the needs of a desired level of technology.  Since new consoles tend to have dramatic changes in technology over the predecessor consoles, in-house tools often have to be drastically rewritten or completely replaced to support the new level of technology.  For example, a predecessor console may not have had any 3D support, so the tools developed for that console most likely would not have been written to support 3D.  When a new console is released that can draw 100,000 polygons per second, then it is generally inefficient to try to graft support for this new technology onto the existing tools, so the original tools are discarded.  To continue the previous example, let’s say that the new tool needs to be able to handle environments in the game that average about 500,000 polygons, and have a maximum worst case of 1 million polygons.  Most likely the tool will evolve to the point where it runs pretty well for environments of the average case, but will most likely run just fast enough that the slowest case of 1 million polygons is processed in a tolerable, albeit painful, amount of time.  The reasons for this are that tools tend to grow in size and complexity over time, and tools tend to only be optimized to the point that they are not so slow as to be intolerable.  Now let’s say that a newer console is released that can draw 1 million polygons a second, and now our worst case environment is a whopping 1 billion polygons!  Although the previous in-house tool could support a lot of polygons, it will still end up being either extensively rewritten or discarded, since it will not be easily modified to deal efficiently with this much larger number of polygons.

The ability of a tool to function efficiently as the data content processed by the tool increases is referred to as the ability of the tool to “scale”.  In video game programming, tools are seldom written to scale much beyond the needs of the current technology; therefore, when technology changes dramatically, old tools are commonly discarded, and new tools have to be developed.

The in-house tools can consume a large amount of the programming time of a first generation title, since not only are the tools complicated, but they evolve over time as the run-time game code is implemented.  Initial generations of games are created using initial generations of tools.  Likewise, later generations of games are created using later generations of tools.  As the tools become more flexible and powerful, the developer gains the ability to create more impressive games.  This is a big reason why successive generations of console games often make dramatic improvements in performance and quality over their predecessors.

Reuse of code

A problem that stems from the giant gaps in technology between console generations is that it makes it difficult to reuse code that was written for a previous generation of console hardware.  Assembly programming is especially difficult to reuse since the CPU usually changes between consoles, but the C programming language isn’t much of a solution either, since the biggest problem is that the hardware configurations and capabilities are so different.  Any code dealing directly with the hardware or hardware influenced data structures will have to be discarded.  Even code that does something universal in nature, such as mathematical calculations, will most likely need to be rewritten since the new hardware will most likely have some sort of different mathematical model.

Also, just as the in-house tool code becomes outdated, so does game code that is written for less powerful technology.  Animation, modeling, character, environment, and particle code will all need to be discarded.

In practice, very little code can be reused between technological leaps in hardware platforms.  This means that earlier generation games will not have much code reuse, but each new generation of games for a console will be able to reuse code from its predecessors, and therefore games will tend to improve with each new generation.

Optimization

By definition, having optimal code is preferable to having bulky or less efficient code.  It would therefore seem logical to say that to achieve maximum performance from the hardware, all code should be completely optimal.  Unfortunately, this is not an easy or even practical thing to achieve, since the writing of completely optimal code has many nuances, and can be very time-consuming.  The programmer must be intimately familiar with the details of the hardware.  He must fully understand how to implement the code, such as possibly using assembly language since C compilers will often generate inefficient code.  The programmer must make certain to best utilize the CPU caches.  Also, the programmer should understand how the code may affect other pieces of code, such as the effects of the code on the instruction cache, or the amount of resources that are tied up by his code. The programmer has to know how to effectively use co-processors or other devices.  He must develop an algorithm that is maximally efficient when implemented. Also, the programmer will need to measure the code against the theoretical maximum optimal performance to be certain that the code can indeed be considered to be fully optimal.

Writing even highly optimized code for specific hardware is time-consuming, and requires a detailed knowledge of both the hardware and the algorithm to be optimized.  It is therefore commonly impractical to attempt to highly optimize even a majority of the code.  This is especially true when writing a first generation game, since the developer is not familiar enough with the intricacies of the hardware to be very productive at writing optimal code.  Instead, it is more productive to only spend time optimizing the code that most profoundly affects the efficiency of the overall game.  Unfortunately, identifying which code should be optimized can also be a difficult task.  As a general rule, the code to be optimized is often the code that is executed most frequently, but this is not always the case.  Performance analyzing, testing, and profiling can help identify inefficient code, but these are also not perfect solutions, and the experience of the programmer becomes an important factor in making smart decisions concerning what code should be optimized.

As a programmer gets more familiar with the intricacies of the hardware, he will be able to perform a greater amount of optimizations.  Also, when developing later generation games, the programmer will often be able to reuse previously written optimized code.  Plus, there is often more time in the schedule of later generation titles in which to perform optimizations.  This accumulation of optimal code is a big reason why games often improve in performance in successive generations.

Other Considerations

There are many other reasons to explain the improvement in performance of next generation software that are not directly related to programming for a video game console.  For example, developers will often copy or improve upon the accomplishments of other developers.  Likewise, developers will avoid the mistakes made by others.  Also, developers acquire and lose employees fairly frequently, which creates a lot of cross-pollination of ideas and techniques between the various development houses.  These and many other reasons are important, but since they are not specific to console video game development, they have not been specifically discussed.

CLICK HERE to CONTINUE to PART 3.

 


Crash Bandicoot – An Outsider’s Perspective (part 8)

This is part of a now lengthy series of posts on the making of Crash Bandicoot. Click here for the PREVIOUS or for the FIRST POST.

After Naughty Dog, Jason and I joined forces with another game industry veteran, Jason Kay (collectively, Jasons R & K are known as “the Jasons”). He was at Activision at the time of the Crash launch and offers his outside perspective.

Although I would not meet Andy and Jason until after Crash 3 was released, the time around the launch of Crash Bandicoot was a fascinating time in the game business, and I believe that the launch of Crash, which was so far ahead of every other game of its generation in every aspect – technical achievement, production values, sound/music, design and balancing – caused everyone I knew in the business to rethink the games they were working on.

Warhawk: One of the best looking early PS1 games

It seems hard to imagine, given the broad scope of games today — console games costing $50+ million, social games on Facebook with 100 million monthly average users, gesture controlled games, $.99 games on iPhone — how troubled the industry was before the release of Crash, which heralded the rebirth of console games after a dormant period and ushered in the era of the mega-blockbuster game we know today. In the year that Crash Bandicoot was released, only 74 million games were sold across all platforms in the US, of which Crash accounted for nearly 5%. By 2010, more than 200 million games were sold, with the number one title, Call of Duty: Black Ops, selling “only” 12 million copies in the US — about 6% of the total market. In some ways, adjusted for scale, Crash was as big then as Call of Duty is today.

Twisted Metal - Another of the better early PS1 games

After the incredible success of Super Mario World and Sonic the Hedgehog, the game business was really in the doldrums, and the so-called “rebirth of the console” had so far been a boatload of fail. Sega had released a series of “not-quite-next-gen” peripherals for the incumbent Sega Genesis system (including the 32X and the truly awful Sega CD), and made vague promises about “forward compatibility” with their still-secret 32 bit 3D Saturn console. When the Saturn finally shipped, it was referred to by many people as “Two Lies in One,” since it was neither compatible with any previous Sega hardware, nor capable of doing much 3D. Sega further compounded their previous two mistakes by giving the console exclusively to then-dominant retailer Toys “R” Us, pissing off the rest of the retail community and pretty much assuring that console’s demise, and eventually Sega’s exit from the hardware business.

Wipeout - at the time it looked (and sounded) good

The PlayStation had shipped in fall of 1995, but the initial onslaught of games all looked vaguely similar to Wipeout. Since no one believed that it was possible to stream data directly from the PS1’s CD drive, games were laboriously unpacking single levels into the PS1’s paltry 2 MB of RAM (plus 1 MB of VRAM and 0.5 MB of sound RAM), and then playing regular CD (“redbook”) audio in a loop while the level played. So most games (including the games we had in development at Activision and were evaluating from third parties) all looked and played in a somewhat uninspiring fashion.

When Crash first released, I was a producer at then-upstart publisher Activision – now one of the incumbent powerhouses in the game business that everyone loves to hate – but at that time a tiny company that had recently avoided imminent demise thanks to MechWarrior 2, one of the first true-3D simulations for the hardcore PC game market. To put in perspective how small Activision was at that time: full-year revenues were $86.6 million in 1996, versus over $4.45 billion in 2010 – a jump of more than 50x.

MechWarrior 2: 31st Century Combat DOS Front Cover

Jeffrey Zwelling, a friend of a friend who had started in the game business around the same time I did, worked at Crystal Dynamics as a producer on Gex. Jeffrey was the first person I knew to hear about Crash, and he tipped me off that something big was afoot right before E3 in 1996. Jeff was based in Silicon Valley, and a lot of the former Naughty Dogs (and also Mark Cerny) had previously worked at Crystal, so his intel was excellent. He kept warning me ominously that “something big” was coming, and while he didn’t know exactly what it was, people who’d seen it were calling it a “Sonic Killer,” “Sony’s Mario,” and “the next mascot game.”

As soon as people got a glimpse of the game at E3 1996, the conspiracy mongering began and the volume on the Fear, Uncertainty and Doubt meter went to 11. In the pre-Internet absence of meaningful information stood a huge host of wild rumors and speculation. People “in the know” theorized that Naughty Dog had access to secret PlayStation specifications/registers/technical manuals that were printed only in Japanese and resided inside some sort of locked vault at Sony Computer Entertainment Japan. Numerous devs declared the Naughty Dog demo was “faked” in some way, running on a high-powered SGI workstation hidden behind the curtain at Sony’s booth. In hindsight that rumor seems to have been a conflation of two facts: the Nintendo 64 console, code-named “Project Reality,” really was very similar to a Silicon Graphics Indigo workstation, and the Crash team really was writing and designing the game on Silicon Graphics workstations.

Tomb Raider - Crash contemporary, and great game. But the graphics...

Everyone in the business knew how “Sega had done what NintenDONT” – trouncing Nintendo with M-rated games and better titles in the 16-bit era – and most of the bets were that Nintendo was going to come roaring back to the #1 spot with the N64. Fortunately for Nintendo, Sega’s next hardware was underpowered and underwhelming; unfortunately, Nintendo’s own N64 shipped a year later than the PlayStation. With so much attention focused on this looming battle, and with the dismissive claims that what Naughty Dog was showing was “impossible,” most people underestimated both the PlayStation and Naughty Dog’s Crash Bandicoot.

Since no one I knew had actually gotten a chance to play Crash at the show – the crowds were packed around the game – I fully expected that my unboxing of Crash 1 would be highly anti-climactic. I remember that Mitch Lasky (my then boss, later founder of Jamdat and now a partner at Benchmark) and I made our regular lunch ritual of visiting Electronics Boutique [ ANDY NOTE: at Naughty Dog this was affectionately known as Electronic Buttock ] (now GameStop) at the Westside Pavilion and picked up a copy of the game. We took it back to our PS1 in the 7th floor conference room at Activision, pressed start, and the rest was history. As the camera focused on Crash’s shoes, then panned up as he warped in, I literally just about sh*t a brick. Most of the programmers pitching games to us had claimed it was “impossible” to get more than 300-600 polygons on screen and maintain even a decent framerate. Most of the games of that era, a la Quake, used a highly compressed color palette (primarily brown/gray in Quake’s case) to keep total texture memory low. It seemed like every game was going to have blocky, ugly characters and a lot of muted colors, and most of the games released on the PS1 would in fact meet those criteria.

Mario 64 - Bright, pretty, 3D, not so detailed, but the only real contender – though on a different machine

Yet in front of us, Andy and Jason and the rest of the Crash team showed us that when you eliminate the impossible, only the improbable remains. Right before my eyes was a beautiful, colorful world with what seemed like thousands of polys (Andy later told me that Crash 1 did in fact have over 1800 polygons per frame, and Crash 2 cracked 3,100 polys per frame – a far cry from the “faked demo” numerous other PS1 development teams had alleged). The music was playful, curious and fun. The sound effects were luscious, and the overall game experience felt, for the first time ever, like being a character in a classic Warner Brothers cartoon. Although I didn’t understand how the Dynamic Difficulty Adjustment (discussed in part 6) actually worked, I was truly amazed that it was the first game that everyone I knew who played games loved to play. There was none of the frustration of being stuck on one spot for days, no turning the game off never to play it again – everyone who played it seemed to play it from start to finish.

For us, it meant that we immediately raised our standards on things we were looking at. Games that had seemed really well done as prototypes a few weeks before now seemed ungainly, ugly, and crude. Crash made everyone in the game business “up their game.” And game players of the world were better off for it.

 

These posts continue with PART 9 HERE. You also never know when we might add more, so subscribe to the blog (on the right).

Also, if you liked this, vote it up at Reddit or Hacker News, and peek at my novel in progress: The Darkening Dream

or more posts on

GAMES or BOOKS/MOVIES/TV or WRITING or FOOD.

Detailed and Colorful - but most importantly, fun

Certainly varied

Sorry for the lousy screen shots!

Making Crash Bandicoot – part 5

PREVIOUS installment, or the FIRST POST.

[ NOTE, Jason Rubin added his thoughts to all the parts now, so if you missed that, back up and read the second half of each. ]

 

A Bandicoot, his beach, and his crates

But even once the core gameplay worked, these cool levels were missing something. We’d spent so many polygons on our detailed backgrounds and “realistic” cartoon characters that we couldn’t afford many enemies on screen, so everything felt a bit empty.

We’d created the wumpa fruit pickup (carefully rendered in 3D into a series of textures — burning a big chunk of our vram — but allowing us to have lots of them on screen), and they were okay, but not super exciting.
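To make the trick concrete, here’s a minimal sketch of pre-rendering a pickup into texture frames and drawing it as a camera-facing quad – only two triangles per fruit, so dozens fit in the polygon budget. The names and structures are invented for illustration; the real code was PS1-specific and surely looked different.

```c
/* Hypothetical sketch: the fruit is modeled and lit offline, rendered into
   FRUIT_FRAMES small textures (burning VRAM), then drawn in-game as a flat
   camera-facing quad -- 2 triangles instead of a full 3D model. */

#define FRUIT_FRAMES 16              /* pre-rendered rotation frames in VRAM */

typedef struct { float x, y, z; } Vec3;
typedef struct { Vec3 pos; int alive; } Pickup;

extern void draw_billboard(Vec3 pos, int frame);  /* platform stand-in */

/* Advance the apparent spin over time; offset by index so a row of
   fruit doesn't rotate in lockstep. */
int pickup_frame(unsigned frame_counter, int pickup_index)
{
    return (int)((frame_counter + (unsigned)pickup_index * 3u) % FRUIT_FRAMES);
}

void draw_pickups(const Pickup *p, int count, unsigned frame_counter)
{
    for (int i = 0; i < count; i++) {
        if (!p[i].alive) continue;
        draw_billboard(p[i].pos, pickup_frame(frame_counter, i));
    }
}
```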

Enter the crates. One Saturday in January 1996, Jason and I were driving to work (we worked 7 days a week, from approximately 10am to 4am – no one said video game making was easy), talking over the problem. We knew we needed something else, we knew it had to be low polygon, and ideally, multiple types of it could be combined to interesting effect. We’d been thinking about the objects in various puzzle games.

So crates. How much lower poly could you get? Crates could hold stuff. They could explode, they could bounce or drop, they could stack, they could be used as switches to trigger other things. Perfect.

So that Saturday we scrapped whatever else we had planned to do and I coded the crates while Jason modeled a few, an explosion, and drew some quick textures.

About six hours later we had the basic palette of Crash 1 crates going: normal, life crate, random crate, continue crate, bouncy crate, TNT crate, invisible crate, switch crate – plus the stacking logic that let them fall down on each other, or even bounce off each other. They were awesome. And smashing them was so much fun.
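As a hedged illustration of why crates could be built in a day: one shared box model, a type tag, and a single smash handler cover the whole palette. This sketch is purely mine – the actual gameplay code was written in GOOL, Naughty Dog’s Lisp-like language, and looked nothing like this C.

```c
/* Illustrative only: one enum + shared box geometry = a whole palette
   of gameplay objects for a handful of polygons each. */

typedef enum {
    CRATE_NORMAL, CRATE_LIFE, CRATE_RANDOM, CRATE_CONTINUE,
    CRATE_BOUNCY, CRATE_TNT, CRATE_INVISIBLE, CRATE_SWITCH
} CrateType;

typedef struct Crate {
    CrateType     type;
    struct Crate *above;      /* crate stacked on top of this one, if any */
} Crate;

/* Hypothetical game hooks. */
extern void award_wumpa(int n);
extern void award_life(int n);
extern void start_fuse(Crate *c, int frames);
extern void start_falling(Crate *c);
extern void spawn_break_effect(Crate *c);

void smash_crate(Crate *c)
{
    switch (c->type) {
    case CRATE_NORMAL: award_wumpa(1); break;
    case CRATE_LIFE:   award_life(1);  break;
    case CRATE_TNT:    start_fuse(c, 3 * 30); return;  /* 3s at 30fps */
    default:           break;          /* remaining types elided */
    }
    /* Stacking: whatever rested on this crate starts falling, and may
       land on -- or bounce off -- the crates below. */
    if (c->above)
        start_falling(c->above);
    spawn_break_effect(c);
}
```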

Over the next few days we threw crates into the levels with abandon, and formerly dull spots with nothing to do became great fun. Plus, in typical game fashion, tempting crates could be combined with in-game menaces for added gameplay. We even used them as the basis for our bonus levels (HERE in video). We also kept working on the feel and effects of crate smashing and pickup collection. I coded them again and again, going for the feel of a pinball machine ringing up the score. One of the best things about the crates was that you could smash a bunch, slurp up the contents, and 5-10 seconds later the wumpa and one-ups would still be ringing out.
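That ringing out falls naturally out of banking pickups instantly but paying them to the HUD a little at a time. A hypothetical sketch (names invented; in the shipped games, 100 wumpa earned an extra life):

```c
typedef enum { SFX_WUMPA_SLURP, SFX_ONE_UP } Sfx;

extern void play_sound(Sfx s);   /* hypothetical audio hook */
extern void award_life(int n);

typedef struct {
    int displayed;               /* wumpa count currently shown on the HUD */
    int pending;                 /* collected but not yet rung up */
} WumpaCounter;

void collect_wumpa(WumpaCounter *w, int n)
{
    w->pending += n;             /* bank instantly, ring up over time */
}

/* Call once per frame (30 fps).  Paying out at most one pickup per frame
   is what keeps a big crate combo "ringing" for seconds afterward. */
void tick_wumpa(WumpaCounter *w)
{
    if (w->pending <= 0) return;
    w->pending--;
    w->displayed++;
    play_sound(SFX_WUMPA_SLURP);
    if (w->displayed >= 100) {   /* 100 wumpa = one-up */
        w->displayed -= 100;
        award_life(1);
        play_sound(SFX_ONE_UP);
    }
}
```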

This was all sold by the sound effects, executed by Mike Gollom for Crash 1-3. He managed to dig up the zaniest and best sounds. The wumpa slurp and the cha-ching of the one up are priceless. As one of our Crash 2 programmers used to say, “the sounds make the game look better.”

For some reason, years later, when we got around to Jak & Daxter, we dropped the crate concept as “childish,” while our friends and amiable competitors at Insomniac Games carried them over into Ratchet & Clank. They remained a great source of cheap fun, and I scratch my head at our decision to move on.

By winter 95-96 the game was looking very cool, albeit very much a work-in-progress. The combination of our pre-calculation, high resolution, high poly-count, and 30 fps animation gave it a completely unique look on the machine – so much so that many viewers thought it a trick. But we had kept the whole project pretty well under wraps. One of the dirty secrets of the Sony “developer contract” was that, unlike its more common “publisher” cousin, it didn’t require presentation to Sony during development, as they assumed we’d eventually have to get a publisher. Around Thanksgiving 1995, one of our artists, Taylor Kurosaki, who had a TV editing background, and I took footage from the game and spent two days editing it into a two-minute “preview tape.” We deliberately leaked this to a friend at Sony so that the brass would see it.

They liked what they saw.

Management shakeups at Sony slowed the process, but by March of 1996 Sony and Universal had struck a deal for Sony to do the publishing. While Sony never officially declared us their mascot, in all practical senses we became one. Heading into the 1996 E3 (May/June) we at Naughty Dog were working ourselves into oblivion to get the whole game presentable. Rumors going into E3 spoke of Nintendo’s new machine, the misleadingly named N64 (it’s really 32 bit) and Miyamoto’s terrifying competitive shadow, Mario 64.

Crash and his girl make a getaway

For two years we had been carefully studying every 3D character game. Hell, we’d been poring over even the slightest rumor – hotly debated over the 3am deli takeout dinners. Fortunately for us, they’d all sucked. Really sucked. Does anyone remember Floating Runner? But Mario – that wasn’t going to suck. However, before E3 1996 all we saw were a couple of screen shots, and those only a few weeks before. Crash was pretty much done. Well, at least we thought so.

Now, we had seen some juicy magazine articles on Tomb Raider, but we really didn’t worry much about that because it was such a different kind of game: a Raiders of the Lost Ark type adventure game starring a chick with guns. Cool, but different. We’d made a cartoon CAG (character action game) aimed at the huge “everybody including kids” market.

Mario was our competition.

 

Jason says:

The empty space had plagued us for a long time.  We couldn’t have many enemies on screen at the same time.  Even though the skunks or turtles were only 50-100 polygons each, we could show two or three at most; the rest of the budget was spent on Crash and the background.  Two or three skunks was fine for a challenge, but it meant the next challenge either had to be part of the background, like a pit, or far away.  If two skunk challenges came back to back there was a huge amount of boring ground to cover between them.

Enter the crates.  The crates weren’t put into Crash until just before Alpha, the first “fully playable” version of the game.

Andy must have programmed the “Dynamite Crate/Crate/Dynamite Crate” puzzle 1000 times to get it right.  It is just hard enough to spin the middle crate out without blowing up the other two, but not so hard that it isn’t worth trying for a few wumpa fruit.  Getting someone to risk a Life for 1/20th of a Life is a fine balancing act!

Eventually the crates led to Crash’s name.  Within a month of putting them in, everyone realized that they were the heart of the game.  Crash’s crashing through them not only filled up the empty spots – the other challenges ended up filling the time between crate challenges!

This isn’t the place for an in depth retelling of the intrigue behind the Sony/Crash relationship, but two stories must be told.

The first is Sony’s first viewing of Crash in person.  Kelly Flock was the first Sony employee to see Crash live [ Andy NOTE: running, not on videotape ].  He was sent, I think, to see if our videotape was faked!

Kelly is a smart guy, and a good game critic, but he had a lot more to worry about than just gameplay.  For example, whether Crash was physically good for the hardware!

Andy had given Kelly a rough idea of how we were getting so much detail through the system: spooling.  Kelly asked Andy if he understood correctly that any move forward or backward in a level entailed loading in new data, a CD “hit.”  Andy proudly stated that indeed it did.  Kelly asked how many of these CD hits Andy thought a gamer that finished Crash would have.  Andy did some thinking and off the top of his head said “Roughly 120,000.”  Kelly became very silent for a moment and then quietly mumbled “the PlayStation CD drive is ‘rated’ for 70,000.”

Kelly thought some more and said “let’s not mention that to anyone” and went back to get Sony on board with Crash.
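For those wondering, “spooling” here just means paging level chunks between the CD and RAM, keyed to the player’s progress, so that moving forward or backward through the level triggers reads – the “hits” Kelly was counting. A rough, invented sketch of the idea (the real system was far more sophisticated):

```c
#define RESIDENT_SPAN 2        /* chunks kept in RAM on each side of the player */

typedef struct {
    int sector;                /* CD sector where this chunk's data lives */
    int loaded;                /* currently resident in the 2 MB of RAM? */
} LevelChunk;

extern void begin_async_cd_read(int sector);   /* one CD "hit" */

/* Keep a sliding window of chunks resident around the player's current
   chunk; evict everything else.  Backtracking re-reads evicted chunks,
   which is how a full playthrough racks up a six-figure hit count. */
void update_spool(LevelChunk *chunks, int n, int player_chunk)
{
    for (int i = 0; i < n; i++) {
        int dist = i - player_chunk;
        if (dist < 0) dist = -dist;
        int want = (dist <= RESIDENT_SPAN);
        if (want && !chunks[i].loaded) {
            begin_async_cd_read(chunks[i].sector);
            chunks[i].loaded = 1;
        } else if (!want && chunks[i].loaded) {
            chunks[i].loaded = 0;  /* evict; re-read if the player returns */
        }
    }
}
```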

The second story that can’t be glossed over was our first meeting with the Sony executives from Japan.  Up until this point, we had only dealt with Sony America, who got Crash’s “vibe”.  But the Japanese were not so sure.

We had been handed a document that compared Crash with Mario and Nights, or at least what was known of the games at the time.  Though Crash was rated favorably in “graphics” and some other categories, two things stood out as weaknesses.  The first was that Sony Japan didn’t like the character much, and the second was a column titled “heritage” that listed Mario and Sonic as “Japanese” and Crash as “other.”  The two negatives were related.

Let us remember that in 1995 there was Japan, and then there was the rest of the world in video games.  Japan dominated the development of the best games and all the hardware.  It is fair to say that absent any other information, the Japanese game WAS probably the better one.

Mark presided over the meeting with the executives.  He not only spoke Japanese, but also was very well respected for his work on Sonic 2 and for his years at Sega in Japan.  I could see from the look in Mark’s eyes that our renderings of Crash, made specifically for the meeting, did not impress them.

We took a break, during which it was clear that Sony was interested in Crash for the US alone, hardly a “mascot” crowning.  I stared at the images we had done.  Primitive by today’s standards, but back then they were reasonably sexy renderings that had been hand retouched by Charlotte for most of the previous 48 hours.  She was fried.

I walked over to her.  I think she could barely keep her eyes open.  I had spent all of my free time (4am-10am) during the previous month studying Anime and Manga.  I read all the books available at that time in English on the subject.  All three!  I also watched dozens of movies.  I looked at competitive characters in the video game space.  I obsessed, but I obsessed from America.  I had never been to Japan.

I asked Charlotte if she could close Crash’s huge smiling mouth, making him seem less aggressive.  I asked her to change Crash’s eyes from green to two small black “pac-man” shapes.  And I asked her to make Crash’s spike smaller.  Then I told her she had less than 15 minutes.  With what must have been her last energy she banged it out.

I held up the resulting printout 15 minutes later.

Sony Japan bought off on Crash for the international market.

I don’t want to make the decision on their part seem arbitrary.  Naughty Dog would do a huge amount of work on the game for Japan after this, and even then we would always release a Japan-specific build.  Whether it was giving Aku Aku pop-up text instructions, or replacing a Crash-smashing “death” that reminded them of the severed head and shoes left by a serial killer loose in Japan during Crash 2’s release, we focused on Japan and fought hard for acceptance and success.

We relied on our Japanese producers, including Shuhei Yoshida, who was assigned shortly after this meeting, to help us overcome our understandable ignorance of what would work in Japan.  And Sony Japan’s marketing department basically built their own Crash from the ground up for the marketing push.

Maybe Charlotte’s changes showed Sony that there was a glimmer of hope for Crash in Japan.  Maybe they just saw how desperate we were to please and couldn’t say no.  Maybe Universal put something in the coffee they had during the break.

Who knows, but Crash was now a big part of the international PlayStation push.  So there were more important things for us to worry about than Sony and the deal:

The fear of Miyamoto was thick at Naughty Dog during the entire Crash development period.  We knew eventually he would come out with another Mario, but we were hoping, praying even, that it would be a year after we launched.

Unfortunately that was not to be.  We started seeing leaks of video of the game.

It was immediately obvious that it was a different type of game: truly open.  That scared us.  But when we saw the graphics we couldn’t believe it.  I know there will be some who take this as heresy, but when we saw the blocky, simple, open world we breathed a sigh of relief.  I think I called it “I, Robot Mario,” evoking the first polygonal 3D game.

Of course we hadn’t played it, so we knew we couldn’t pass judgment until we did.  That would happen at E3.


CONTINUED in PART 6 or

more on GAMES or BOOKS/MOVIES/TV or WRITING or FOOD.

The Big Fight!

Making Crash Bandicoot – part 4

PREVIOUS installment, or the FIRST POST.

[ NOTE, Jason Rubin added his thoughts to all the parts now, so if you missed that, back up and read the second half of each. ]

 

But this brings us to the gameplay. We were forging new ground here, which caused a lot of growing pains. I started programming the control of the main character fairly early. This is the single most important thing in a CAG, and while intellectually I knew this from Way of the Warrior, it was really Mark who drove the message home. I did all the programming, but Mark helped a lot with the complaining. For example, “he doesn’t stop fast enough,” or “he needs to be able to jump for a frame or two AFTER he’s run off a cliff or it will be frustrating.” Jason’s also really good at flaw detection. Which is a good thing. Internal criticism is essential, and as a programmer who wrote dozens of world-class control schemes in the years between 1994 and 2004, I rewrote every one at least five or six times. Iteration is king.
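That “jump for a frame or two after the cliff” note describes what designers now call “coyote time,” and it costs almost nothing to implement. A generic sketch (frame counts and names are illustrative, not Naughty Dog’s code):

```c
#define COYOTE_FRAMES 3          /* grace window after running off a ledge */

typedef struct {
    int   on_ground;             /* set by collision detection this frame */
    int   frames_since_ground;   /* 0 while grounded, counts up in the air */
    float vel_y;
} Player;

void update_jump(Player *p, int jump_pressed)
{
    if (p->on_ground)
        p->frames_since_ground = 0;
    else
        p->frames_since_ground++;

    /* Allow the jump if grounded OR only just off the edge. */
    if (jump_pressed && p->frames_since_ground <= COYOTE_FRAMES) {
        p->vel_y = 8.0f;         /* illustrative jump impulse */
        p->frames_since_ground = COYOTE_FRAMES + 1;  /* consume the grace */
    }
}
```

The payoff is that near-miss jumps at ledge edges register the way the player intended, which reads as “good control” even though few players could articulate why.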

Even after the control was decent, we still had no idea how to build good 3D gameplay with it. Our first two test levels “the jungle, level1” and “lava cave, level2” were abysmal, and neither shipped in the final game. First of all, they were too open with way too many polygons. Level1 had over 10 million, whereas a shipping level tended to have around a million (a lot back then). Level2 was better, but not much.

So during the summer of 1995 we retrenched and tried to figure out how to make a level that was actually fun. The F word is the most important concept in making games. Too many forget this.

But Mark – who served the practical function of producer – never let us forget it.

By this time most of the art design for the game was complete, including the vast layout of possible looks and levels, but we skipped ahead to about 2/3 of the way through and used Cortex’s factory levels to really focus on fun. Our first successful level was essentially 2D (“Heavy Machinery”). It was all rendered in 3D, but the camera watched from the side like a traditional platformer. Here we combined some classic devices like steam vents, drop platforms, bouncy pads, hot pipes, and monsters that tracked back and forth in simple patterns. This was in essence a retreat to success, as it employed the basic kind of techniques that Donkey Kong Country had used so successfully. This palette of objects would be arranged in increasingly difficult combinations.

It worked. Thank God.

Simultaneously, we were working on a more ambitious level where the camera sat above and “Willie” walked both into/out of the screen and side to side (“Generator Room”). This factory level included drop platforms, moving platforms, dangerous pipes, and various robots. By using a more mechanical setting, and briefly forgoing the complex organic forest designs, we were able to distill this two-axis gameplay and make it fun. In both areas we had to refine “Willie’s” jumping, spinning, and bonking mechanics.

We then got our third type of level working (“Cortex Power”). This involved having the camera behind the character, over his shoulder, in the original “Sonic’s ass” POV that had fared miserably with level1 and level2. By taking some of the new creatures and mechanics, and combining them with hot pipes and slime pits, we were able to make it work in this more factory-like setting.

Having learned these lessons, we turned back to the jungle design with a new jungle level, known as “levelc” (“Jungle Rollers”). This used some of the pieces from the failed level1, but arranged as a corridor between the trees, much like the over-the-shoulder factory level. Here we utilized pits, skunks on paths, stationary plants, and rollers to create the palette of obstacles. With this level the into-the-screen gameplay really came into its own, and it remains one of my favorite levels. Each element served its purpose.

Rollers (big stone wheels that could crush the player, and rolled from side to side) provided timing gates. They could be doubled or tripled up for more challenge.

Skunks traveled down the path tracking back and forth toward the player, requiring him to attack them or jump over them.

Fallen logs, tikis, and pits needed to be jumped over.

Stationary plants could strike at the player, requiring the player to tease them into a strike, then jump on their heads.

Once we had these three level types going, things really began to roll. For each level art design, like jungle, we would typically do 2-3 levels: the first with the introductory set of challenges, and the later ones adding a few new twists combined at much harder difficulty. For example, in the sequel to the jungle level we added drop platforms and moving platforms. The elements combined with the character’s mechanics to form the fun.

It’s also worth noting that we stumbled onto a few of our weirder (and most popular) level designs as variants of the over-the-shoulder. First came “Boulders,” aping that moment from Raiders of the Lost Ark when the giant stone ball starts rolling toward Indy. For this we reversed the action and had the character run into the screen. This proved so successful that we riffed on it again in Crash 2 and 3. Same with “Hog Wild,” in which the character jumps on the back of a wild “hog ride” and is dragged at high speed through a frenetic series of obstacles.

Jason says:

Making games is no game.  So many aspiring designers think that all you do is come up with a great idea and then sit around and play.  That may be true if you are aping something that exists, like making just another first-person shooter (this time in ancient Sumeria and with Demon Aliens!), or making something small and easy to iterate on, but it is certainly NOT true when you are trying something new in the AAA space.

And to make matters worse, the LAST person who can attest to a good game design is the game designer.  Not only do they know what to do when they test it, but they are also predisposed to like it.

Oh no, the proper test is to hand it to a complete noob – in Crash’s case the ever-rotating list of secretaries and clerical staff who worked at Universal.   For many of them it was their first time touching a controller, and they immediately succeeded in failing, miserably, to get past a single challenge.  As they smiled and tried to be positive, they were saying “this sucks” with their hands.  Thus a good designer has to both dread and seek out other people’s advice, especially from those most likely to hate the work he has done.  And the designer has to accept the third-party opinion over his own.  Every time.  Only when the noobs start completing challenges and smile WHILE PLAYING do you know you are getting somewhere.

I don’t know why, but I have always had an innate ability to see the flaws in my own projects, even after they are “final” in everyone else’s eyes.   Naughty Dog graphic engine coder Greg Omi, who joined for Crash 2, once said I could spot a single pixel flicker on his monitor at 30 yards while holding a conversation with someone else and facing the opposite direction.  Whatever it is, I get a weird frustrated sweat when I see something wrong.  Mark Cerny has the same “talent.”

The two of us were always unhappy with the gameplay.  I don’t mean just the early gameplay; I mean always unhappy with the gameplay, period.  I know in retrospect that I was too hard on the team quite often because of this, and that perhaps more often than not I was too pointed when voicing my frustration (letting myself off easy here!), but I think a certain amount of frustration and pain is inherent in making successful gameplay.

Stripping the game down to familiar 2D, and then building from there to levels that contained only platforms floating in space was the crutch we used to get to the jungle levels that made Crash such a success.  In the end, these levels aren’t that different in gameplay design.  But starting with the Jungle was too big a leap.  We needed simple.  Upon simple we built complex.

Andy has done a good job of compressing a year of design hell into a blog-sized chunk.  With all our technical and art successes, the game could not have succeeded without good gameplay.  This was by far the hardest part of making Crash Bandicoot.

Dave and Andy’s code, Justin’s IT and coloring, Charlotte Francis’s textures, and Bob’s, Taylor’s, and my backgrounds and characters would have been worth nothing if Crash hadn’t played well.

Jason, Andy, Dave, Bob, Taylor, Justin, Charlotte

CONTINUED in PART 5 or

more on GAMES or BOOKS/MOVIES/TV or WRITING or FOOD