Crash Bandicoot – Teaching an Old Dog New Bits – part 1

This is loosely part of a now lengthy series of posts on the making of Crash Bandicoot. Click here for the PREVIOUS or for the FIRST POST.

Below is another journal article I wrote on making Crash in 1999. This was co-written with Naughty Dog uber-programmer Stephen White, who was my co-lead on Crash 2, Crash 3, Jak & Daxter, and Jak 2. It’s long, so I’m breaking it into three parts.

 

Teaching an Old Dog New Bits

How Console Developers are Able to Improve Performance When the Hardware Hasn’t Changed

by

Andrew S. Gavin

and

Stephen White

Copyright © 1994-99 Andrew Gavin, Stephen White, and Naughty Dog, Inc. All rights reserved.

 

Console vs. Computer

Personal computers and video game consoles have both made tremendous strides in graphics and audio performance; however, despite these similarities, there is real benefit in understanding some important differences between the two platforms.

Evolution is a good thing, right?

The ability to evolve is the cornerstone of the IBM PC’s long-term success.  Tremendous effort has been invested in the PC so that individual hardware components can be replaced as they become inefficient or obsolete, while still maintaining compatibility with existing software.  This modularity of the various PC components allows the user to custom build a PC to fit specific needs.  While this is a big advantage in general, the same flexibility can be a tremendous disadvantage for developing video games.  It is the lack of evolution, the virtual immutability of the console hardware, that is the greatest advantage when developing high quality, easy to use video game software.

You can choose any flavor, as long as it’s vanilla

The price of the PC’s evolutionary ability is dealing with incompatibility issues through customized drivers and standardization.  In the past, it was up to the video game developer to try to write custom code to support as many of the PC configurations as possible.  This was a time consuming and expensive process, and regardless of how thorough the developer tried to be, there were always some PC configurations that still had compatibility problems.  With the popularity of Microsoft’s Windows-based operating systems, video game developers have been given the more palatable option of allowing other companies to develop the drivers and deal with the bulk of the incompatibility issues; however, this is hardly a panacea, since it necessitates a reliance on “unknown” and difficult to benchmark code, as well as APIs that are designed more for compatibility than optimal performance.  The inherent cost of compatibility is compromise.  The API code must compromise to support the largest number of hardware configurations, and likewise, hardware manufacturers make compromises in their hardware design in order to adapt well to the current standards of the API.  Also, both the API and the hardware manufacturers have to compromise because of the physical limitations of the PC’s hardware itself, such as bus speed issues.

Who’s in charge here?

The operating system of a PC is quite large and complicated, and is designed to be a powerful and extensively featured multi-tasking environment.  In order to support a wide variety of software applications over a wide range of computer configurations, the operating system is designed as a series of layers that distance the software application from the hardware.  These layers of abstraction allow a software application to function without concerning itself with the specifics of the hardware.  This is an exceptionally useful way of maintaining compatibility between hardware and software, but is unfortunately not very efficient with respect to performance.  The hardware of a computer is simply a set of interconnected electronic devices.  To theoretically maximize the performance of a computer’s hardware, the software application should write directly to the computer’s hardware, and should not share the resources of the hardware, including the CPU, with any other applications.  This would maximize the performance of a video game, but would be in direct conflict with the implementations of today’s modern PC operating systems.  Even if the operating system could be circumvented, it would then fall upon the video game to support the enormous variety of hardware devices and possible configurations, which would be impractical.

It looked much better on my friend’s PC

Another problem with having a large variety of hardware is that the video game developer cannot reliably predict a user’s personal set-up.  This lack of information means that a game cannot be easily tailored to exploit the strengths and circumvent the weaknesses of a particular system.  For example, if all PCs had equally fast hard drives, then a game could be created that relied on having a fast hard drive.  Similarly, if all PCs had equally slow hard drives but a lot of memory, then a game could compensate for the lack of hard drive speed through various techniques, such as caching or pre-loading data into RAM.  Likewise, if all PCs had fast hard drives and not much memory, then the hard drive could compensate for the lack of memory by keeping most of the game on disk and only spooling in data as needed.

Another good example is the difference in polygon rendering capabilities.  There is enormous variation in both performance and features among hardware-assisted polygon renderers, such that both the look of rendered polygons and the number of polygons that can be rendered in a given amount of time vary greatly between machines.  The look of polygons could be made consistent by rendering them purely in software; however, software rendering is very CPU intensive, so it may be impractical: fewer polygons can be drawn, and the CPU has less bandwidth left for other functions, such as game logic and collision detection.

Other bottlenecks include CD drives, CPU speeds, co-processors, memory access speeds, CPU caches, sound effect capabilities, music capabilities, game controllers, and modem speeds, to name a few.

Although many PC video game programmers have made valiant attempts to make their games adapt at run time to the computers they run on, it is difficult for a developer to offer much more than simple cosmetic enhancements, audio additions, or speed improvements.  Even if the developer had the game perform various benchmark tests before entering the actual game code, it would be very difficult, not to mention limiting to the design of a game, to write code that could efficiently restructure itself around the results of the benchmark.

Which button fires?

A subtle, yet important problem is the large variety of video game controllers that have to be supported by the PC.  Having a wide variety of game controllers to choose from may seem at first to be a positive feature, since more seems like it should be better than less, yet this variety actually has several negative and pervasive repercussions on game design.  One problem is that the game designer cannot be certain that the user will have a controller with more than a couple of buttons.  Keys on the keyboard can be used as additional “buttons”, but this can be impractical or awkward for the user, and may also require that the user configure which operations are mapped to which buttons and keys.  Another problem is that the placement of the buttons with respect to each other is not known, so the designer doesn’t know what button arrangement is going to give the user the best gameplay experience.  This problem can be somewhat circumvented by allowing the user to remap the actions of the buttons, but this isn’t a perfect solution, since the user doesn’t start out with an inherent knowledge of the best way to configure the buttons, and so may settle on, and stick with, an awkward configuration.  Also, similar to the button layout, the designer doesn’t know the shape of the controller, so can’t be certain what types of button or controller actions might be uncomfortable to the user.

An additional problem associated with game controllers on the PC is that most PCs are not sold bundled with a game controller.  The lack of a standard, bundled controller means that a video game on the PC should either be designed to be controlled exclusively by the keyboard, or at the very least should allow the user to optionally use a keyboard rather than a game controller.  Not allowing the use of the keyboard reduces the base of users who may be interested in buying your game, but allowing the game to be played fully with the keyboard will potentially limit the game’s controls, and therefore limit the game’s overall design.

Of course, even if every PC did come bundled with a standard game controller, there would still be users who would want to use their own non-standard game controllers.  The difference, however, is that the non-standard game controllers would either be specific types of controllers, such as a steering wheel controller, or would be variations of the standard game controller, and would therefore include all of the functionality of the original controller.  The decision to use the non-standard controller over the standard controller would be a conscious decision made by the user, rather than an arbitrary decision made because there is no standard.

Chasing a moving target

Another problem associated with the PC’s evolutionary ability is that it is difficult to predict the performance of the final target platform.  The development of video games has become an expensive and time consuming endeavor, with budgets in the millions and multi-year schedules that are often unpredictable.  The PC video game developer has to predict the performance of the target machine far in advance of the release of the game, which is difficult indeed considering the volatility of schedules and the rapid advancement of technology.  Underestimating the target can cause the game to seem dated or under-powered, while overestimating the target could limit the installed base of potential consumers.  Both could be costly mistakes.

Extinction vs. evolution

While PC’s have become more powerful through continual evolution, video game consoles advance suddenly with the appearance of an entirely new console onto the market.  As new consoles flourish, older consoles eventually lose popularity and fade away.  The life cycle of a console has a clearly defined beginning:  the launch of the console into the market.  The predicted date of the launch is normally announced well in advance of the launch, and video game development is begun early enough before the launch so that at least a handful of video game titles will be available when the console reaches the market.  The end of a console’s life cycle is far less clearly defined, and is sometimes defined to be the time when the hardware developer of the console announces that there will no longer be any internal support for that console.  A more practical definition is that the end of a console’s life cycle is when the public quits buying much software for that console.  Of course, the hardware developer would want to extend the life cycle of a console for as long as possible, but stiff competition in the market has caused hardware developers to often follow up the launch of a console by immediately working on the design of the next console.

Each and every one is exactly the same

Unlike PCs, which can vary wildly from computer to computer, consoles of a particular model are designed to be exactly the same.  Okay, so not exactly the same, but close enough that different hardware revisions generally vary only in ways that are minor from the perspective of the video game developer, and are normally transparent to the user.  Also, the console comes with at least one standard game controller, and has standardized peripheral connections.

The general premise is that game software can be written with an understanding that the base hardware will remain consistent throughout the life-span of the console; therefore, a game can be tailored to both exploit the strengths of the hardware, and to circumvent the weaknesses.

The consistency of the hardware components allows a console to have a very small, low level operating system, and the video game developer is often given the ability to either talk to the hardware components directly, or through an extremely thin hardware abstraction layer.

The performance of the components of the hardware is virtually identical for all consoles of a given model, such that the game will look the same and play the same on any console.  This allows the video game developer to design, implement, and test a video game on a small number of consoles, and be assured that the game will play virtually the same for all consoles.

CLICK HERE FOR PART 2



Making Crash Bandicoot – GOOL – part 9

This is part of a now lengthy series of posts on the making of Crash Bandicoot. Click here for the PREVIOUS or for the FIRST POST. I also have a newer post on LISP here.

I’m always being asked for more information on the LISP based languages I designed for the Crash and Jak games. So to that effect, I’m posting here a journal article I wrote on the subject in 1996. This is about GOOL, the LISP language used in Crash 1, Crash 2, and Crash 3. GOOL was my second custom language; Way of the Warrior had a much simpler version of this. Jak 1, 2, 3 & Jak X used GOAL, which was a totally new, vastly superior (and vastly more work to create) implementation that included a full compiler. GOOL (the subject of this article) was mostly interpreted, although by Crash 2 basic expressions were compiled into machine code. But I’ll save details on GOAL for another time.

[ Also I want to thank my reader “Art” for helping clean up an ancient copy of this article — stuck in some mid 90s Word format that can no longer be read. ]


Making the Solution Fit the Problem:

AI and Character Control in Crash Bandicoot

Andrew S. Gavin

Copyright (c) 1996 Andrew Gavin and Naughty Dog, Inc.

All rights reserved.

Abstract

Object control code, which the gaming world euphemistically calls AI, typically runs only a couple of times per frame. For this kind of code, speed of implementation, flexibility, and ease of later modification are the most important requirements. This is because games are all about gameplay, and good gameplay only comes from constant experimentation with and extensive reworking of the code that controls the game’s objects. The constructs and abstractions of standard programming languages are not well suited to object authoring, particularly when it comes to flow of control and state. GOOL (Game Oriented Object LISP) is a compiled language designed specifically for object control code that addresses these limitations.

Video games are the type of program which most consistently pushes the machine and programmer to the limit. The code is required to run at blinding speeds, fit in tiny memory footprints, have no serious bugs, and be completed under short schedules. For the ultra high performance 5% of functions which run most frequently there is no substitute for careful hand coding in assembly. However, the rest of the program requires much more rapid implementation. It is this kind of code which is the subject of this article.

Object control code, which is euphemistically called AI in games, typically runs only a couple of times per frame. With this kind of code, speed of implementation, flexibility, and ease of later modification are often more important than raw execution speed. This is because games are all about gameplay, and achieving good gameplay is about writing and rewriting object code time and time again. Programming languages are not immutable truths handed down from on high, but tools created by people to solve particular tasks. Like any tool, a programming language must be right for the job. One would not attempt to turn a hexagonal nut with a pentagonal wrench; neither is it easy to write a program in a language not well suited to the problem. Sadly, most programmers have only been exposed to a small set of similar and inflexible languages. They have therefore only learned a small number of ways to customize these languages to the task at hand. Let us stop for a second and take a look at the abstractions given to us by each of the common choices, and at what price. But first a word about assemblers and compilers in general.

Assemblers and compilers are programs designed to transform one type of data (the source language) into another (the target language). There is nothing particularly mysterious about them, as these transforms are usually just a bunch of tabled relationships. We call one of these programs a compiler when it performs some kind of automatic allocation of CPU resources. Since most commonly found languages are fairly static, the transform rules are usually built into the compiler and cannot be changed. However, most compilers offer some kind of macro facility to allow customizations.  In its most general form a macro is merely a named function which has a rule for when it is activated (i.e. the name of the macro). When it is used, this function is given the old piece of program, and can do whatever normal programming functions it wishes to calculate a new expression, which it returns to be substituted for the old. Unfortunately, most languages do not use this kind of macro, but instead create a new tiny language which defines all the functions that are allowed to run during macro expansion (typically template matching of some sort). With general purpose macros, any transform is possible, and they can even be used to write an entire compiler.

Almost all programmers have had some exposure to assembly language. An assembler basically serves the purpose of converting symbolic names into binary values. For example, “add” becomes 20. In addition, most allow simple renaming of registers, assignment of constants to symbols, evaluation of constant expressions, and some kind of macro language.  Usually these macro languages consist of template substitutions, and the ability to run simple loops at expansion time. Assembly directly describes the instructions interpreted by the processor, and as such is the closest to the chip that a software engineer can get. This makes it very tedious and difficult to port to a different machine. Additionally, since it consists primarily of moving binary words between registers and memory, and performing simple operations on them, most people find it laborious to use for large programs. In assembly, all details must be tracked by hand. Since knowledgeable humans are smarter than compilers, albeit much slower, they are capable of doing a substantially better job of creating smaller, more efficient code. This remains true despite the claims of the modern OS community: compilers are still only about half as good as a talented assembly programmer. They just save a lot of time.

Many programmers learned to program with Basic. This language has an incredibly simple syntax, and typically comes with a friendly interactive environment. These features make it easy to learn and use. It has, however, no support for abstractions of any sort, possessing only global variables and no macro system. Globals are great for beginners because the whole abstract arena of scope becomes a non-issue. However, the absence of lexical scoping makes the isolation of code (and its associated bugs) nearly impossible. There is nonetheless an important lesson in Basic which has been lost on the programming community: interactive is good. Basic typically has an interpreted listener, and this enables one to experiment very quickly with expressions to see how they work, and to debug functions before they are put into production.

The standard programming language of the last few years is C. First and foremost, C provides a series of convenient macros for flow control, arithmetic operations, memory reference, function calling, and structure access. The compiler writer makes expansions for these that work in the target machine language. C also provides expression parsing and simple register allocation for assembler grade data objects (i.e. words). C code is reasonably portable between machines of a similar generation (i.e. word size). As an afterthought, a preprocessor provides rudimentary textual macro expansion and conditional compilation. The choice not to include any of the hallmarks of higher level languages, like memory management (aka garbage collection), run time typing, run time linking, and support for more complex data types (lists, true arrays, trees, hash tables, etc.) is a reasonable one for many tasks where efficiency is crucial. However, C is crippled by an inconsistent syntax, a weak text based macro system, and an insurmountable barrier between run time and compile time name spaces. C can only be customized via the #define operator and by functions. Unfortunately, this makes many interesting and easy things impossible; most of C’s fundamental areas (structures, setting, getting, expressions, flow of control, and scope) are completely off limits for customization. Since functions always have a new scope, they are not useful for creating flow of control constructs, and #define is so weak that it can’t even handle the vagaries of the structure syntax. For those who know C very well it is often a convenient language, since it is good at expressions and basic flow of control. However, whenever complicated data structures are involved the effort needed is obscene, and C is unable to transfer this effort from one data type to another similar one.

Modern operating systems and fancy programs are filled with scripting languages.  MS-DOS batch language, the various Unix shell languages, Perl, Tcl, etc. are all very common.  These are toy languages. They often have inconsistent and extremely annoying syntaxes, no scoping, and no macros. They were invented basically as macro languages for operating system shells, and as such make it fairly easy to concatenate together new shell commands (a task that is very tedious in assembly or C). However, their ambiguous and inconsistent syntaxes, their slow interpreted execution speeds, and the proliferation of too many alternatives have made them annoying to invest time in learning.  Recently a new abomination has become quite popular, and its name is C++. This monstrosity of a language attempts to extend C in a direction it was never intended to go, by making structures able to contain functions.  The problem is that the structure syntax is not very flexible, so the language is only customizable in this one direction. Hence one is forced to attempt to build all abstractions around the idea of the structure as class. This leads to odd classes which do not represent data structures, but instead represent abstract ways of doing. One of the nice things about C is that the difference between pointer and object is fairly clear, but in C++ this has become incomprehensibly vague, with all sorts of implicit ways to pass by reference. C++ programs also tend to be many times larger and slower than their C counterparts, compile much more slowly, and because C++ compilers are written in C, which cannot handle flexible data structures well, the slightest change to the source code results in a recompile of the entire source tree. I am convinced that this last problem alone makes the language a severe productivity minus. But I forgot: since C++ must determine nearly everything at compile time, you still have to write all the same code over and over again for each new data type.

The advent of the new class metaphor has brought to the fore C and C++’s weakness at memory management. Programmers are forced to create and destroy these new objects in a variety of bizarre fashions. The heap is managed by the wretched malloc model, which wastes memory on cookies, produces mysterious crashes on overwrites, and causes endless fragmentation.

None of these problems are present in Lisp, which is hands down the most flexible language in common use.  Lisp is an old language (having its origins in the 50s) and has grown up over the last 30 years with the evolution of programming. Today’s modern Common Lisp is a far cry from the tiny mainframe Lisp of 30 years ago. Aided by a consistent syntax which is trivial to parse, and the only full power macro system in a commonly used language, Lisp is extremely easy to update, customize, and expand, all without fighting the basic structures of the language. Over the years, as lexical scoping, optimized compilation, and object oriented programming each came into vogue, Lisp was able to gracefully adopt them without losing its unique character. In Lisp, programs are built out of one of the language’s built-in data structures, the list.  The basic Lisp expression is the form, which is either an atom (symbol or number) or a list of other forms. Surrounded by parentheses, a Lisp list always has its function at the head; for example, the C expression 2+2 is written as (+ 2 2). This may seem backwards at first, but with this simple rule much of the ambiguity of the syntax is removed from the language. Since computers have a very hard time with ambiguity, programs that write programs are much easier in Lisp.
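
To make the prefix rule concrete, here are a couple of ordinary Lisp expressions alongside their C equivalents (purely illustrative):

(* (+ 1 2) 3)         ; (1 + 2) * 3
(if (> x 0) x (- x))  ; x > 0 ? x : -x  (absolute value)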

Let me illustrate beginning with a simple macro.

(defmacro 1+ (value)
	"Simple macro to expand (1+ value) into (+ 1 value).
	Note that backquote is used.  Backquote is syntax
	sugar which says to return the 'quoted' list; all
	forms following a comma, however, are evaluated
	before being placed in the list. This allows the
	insertion of fields into a template.
	1+ is the operator which adds 1 to its operand
	(C uses ++ for this)."
  `(+ 1 ,value))

The above form defines a function which takes as its argument the expression beginning with 1+, and returns a new expanded expression (i.e. (1+ 2) => (+ 1 2)). This is a very simple macro because it merely fills in a template. However, if our compiler did not perform constant reduction we could add it to this macro like this:

(defmacro 1+ (value)
	"Smarter macro to expand 1+.  If value is a number,
	then increment on the spot and return the new
	number as the expansion."
  (if (numberp value)
      (+ 1 value)
    `(+ 1 ,value)))

The form numberp tests if something is a number. If value is, we do the add in the context of the expansion, returning the new number as the result of the macro phase. If value is not a number (i.e. it is a variable or expression), we return the expanded expression to be incremented at run time.
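
A quick way to watch this happen is Common Lisp’s standard macroexpand-1. (A caveat for anyone following along at home: a real Common Lisp won’t let you redefine the built-in 1+, so you’d define the macro above under a different name; the expansions themselves look like this.)

(macroexpand-1 '(1+ 2))    ; => 3          folded at expansion time
(macroexpand-1 '(1+ foo))  ; => (+ 1 FOO)  deferred to run time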

These full power macros allow the programmer to seamlessly expand the language in new ways. For example, the Lisp form cond can be implemented from ifs with a macro. Cond is a special form which is like a C “switch” statement, except that each case has an arbitrary expression. For example:

(cond
  ((= temp 2)
   (print 'two))
  ((numberp temp)
   (print 'other-number))
  (t
   (print 'other-type)))

This will print “two” if temp is 2, “other-number” if it is a number (other than 2), and “other-type” otherwise. A simple implementation of cond would be as follows:

(defmacro cond (&rest clauses)
	"Implement the standard cond macro out of nested
	'if's and 'when's. t must be used to specify the
	default case, and it must be used last. This macro
	uses backquote's ,@ syntax which splices a list
	into the list below it. Note also the use of progn.
	progn is a form which groups multiple forms and has
	as its value the value of the last form. cond
	clauses contain what is called an implicit progn;
	they are grouped together and the value of the
	last one is returned as the value of the cond."
  (if (= (length clauses) 1)
      (if (eq (caar clauses) t)
          `(progn ,@(cdar clauses))
        `(when ,(caar clauses)
            ,@(cdar clauses)))
    `(if ,(caar clauses)
         (progn ,@(cdar clauses))
       (cond ,@(cdr clauses)))))

This expands the above cond into:

  (if (= temp 2)
      (progn (print 'two))
    (cond
      ((numberp temp)
       (print 'other-number))
      (t
       (print 'other-type))))

This is the result after a single pass of macro expansion. The macro peels the head off of the cond one clause at a time, converting it into nested ifs. There is no way to use C’s #define to create a new flow of control construct like this, yet in a compiled language these compile-time transforms are invaluable for bridging the gap between efficient and readable code.
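
Carrying the expansion through, each pass peels off one more clause until only nested ifs remain. The fully expanded form of the example is:

(if (= temp 2)
    (progn (print 'two))
  (if (numberp temp)
      (progn (print 'other-number))
    (progn (print 'other-type))))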

GOOL (Game Oriented Object LISP) is my answer to the difficulties of using C and assembly for object programming. It is a compiled Lisp dialect designed specifically for the programming of interactive game objects. As a language it has the following features: consistent syntax, full power macros, symbolic names, orthogonal setting/getting, layered computation, multiple ultra light threads, grouping of computations into states, externally introduced flow of control changes (events), small execution size, a retargetable backend, and dynamic linking. The GOOL compiler is embedded in Allegro Common Lisp (an ANSI Common Lisp from Franz Inc.), which I run on a Silicon Graphics workstation under IRIX. Common Lisp provides an ideal environment for writing compilers because you start off with parsing, garbage collection, lists, trees, hash tables, and macros from the get go. As a language GOOL borrows its syntax and basic forms from Common Lisp.  It has all of Lisp’s basic expression, arithmetic, bookkeeping, and flow of control operators.  These vary in many small ways for reasons of speed or simplicity, but GOOL code can basically be read by the few of us lucky enough to have been exposed to Lisp. GOOL is also equipped with 56 primitives and 420 macros which support its advanced flow of control and game object specific operations. Additional ones can be trivially defined globally or locally within objects, and are indistinguishable from more primitive operations.

The GOOL compiler is a modern optimizing compiler with all sorts of smarts built into various macros and primitives. It is a fully forward referenced single pass compiler. Unlike some other programming languages with single letter names, GOOL does not require you to define something textually before you use it, and you never need tertiary declarations (like prototypes). Computers are good at remembering things, and a compiler is certainly able to remember that you called a function so that it can check the arguments when it gets to the declaration of that function. GOOL is fully relocatable and dynamically linked, so it is not necessary to include code for objects which are not nearby in memory. C is so static, and overlays so difficult and incompatible, that almost no effort is made to do dynamic binding of code, resulting in much wasted memory.

The programming tasks involved in creating game object behaviors are very inconvenient under the standard functional flow of control implied by most programming languages. In the programming of game objects it is necessary for each object to have a local state. This state consists of all sorts of information: type, position, appearance, current execution state (program counter), and all sorts of other state specific to the type of object. From the point of view of a particular object’s code all this can be viewed as an object specific global space and a small stack. This state must be unique to a specific object because it is often necessary to execute the same code on many different copies of the state. In either C or assembly it is typical to make some kind of structure to hold the state, and then write various routines or code fragments that operate on the structure. This can be accomplished either with function syntax macros or structure references. GOOL, on the other hand, allows this state to be automatically and conveniently bound to variable names for the most straightforward syntax. For example, the C:

object->transx = object->transx + immediate_meters(4);

becomes in GOOL the similar expression:

(setf transx (+ transx (meters 4)))

However, if in C one wished to add some new named state to each instance of a particular object, one would have to create new structure records, accessors, initializers, memory management, etc. GOOL, on the other hand, is able to easily allocate these on the object’s local stack with just one line of code, preserving the data there from frame to frame as well. A standard programming language like C has only one thread of control. While this is appropriate for the general case, it is inappropriate for objects, which are better expressed as state machines. In addition, it is extremely useful to be able to layer ultra-lightweight threads of execution, and to offer externally introduced transfers of control (events). While threads typically complicate most application programs with few benefits, they are essential to the convenient programming of game objects, which often have to do several things at once. For example, an object might want to rotate 180 degrees, scale up toward 50%, and move toward the player character all at once. These actions do not necessarily take the same amount of time, and it is often useful to dynamically exchange and control them. In traditional code this is very awkward.

The basic unit of code in GOOL is a code block (or thread). These often do simple things as above. An arbitrary number of these may be combined into a state, they may be borrowed from other states, and activated and deactivated on the fly. For example:

(defgstate turn scale and move toward
	:trans	(defgcode (:label turn 180)
		; set the y rotation 10 degrees closer to 180 degrees
			(setf roty (degseek roty (deg 180) (deg 10))))
	:trans	(defgcode (:label scale to 150 percent)
		; set the x,y, and z scales 10% closer to 150% scale
			(with vec scale
				(seekf scale (scale 1.5) (scale .1))))
	:trans	(defgcode (:label move toward target)
		; set the x,y, and z position closer to the target's
		; (another object) position at a rate of 5 meters per second
			(with vec trans
				(seekf trans (target trans) (velocity (meters per sec 5)))))
	:code	(defgcode (:label play animation)
		; play the animation until this object is colliding with
		; another, then change states
			(until (status colliding)
				(play frame group animation))
			(goto collided)))

A :trans block is one which runs continuously (once per frame), and a :code block is one which has a normal program counter, running until suspended by a special primitive (frame), as in “frame is over.” These code blocks can be run as threads (as above), called as procedures, converted to lambdas and passed to something (function pointers), or assigned to be run under special conditions (events or state exit). This example also illustrates the kind of simple symbolic notation used in GOOL to make object programming easier. Vectors like rotation, translation, and scale are bound to simple symbolic names (e.g. roty is the y component of the rotation vector). Many new arithmetic operations have been defined for common operations, for example seek, which moves a number toward another number by some increment, and seekf, its destructive counterpart.
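
For the curious, the semantics of seek and seekf are simple enough to sketch in ordinary Lisp. This is just an illustration of the behavior described above, not the actual GOOL implementation:

(defun seek (current target step)
  "Move CURRENT toward TARGET by at most STEP, without overshooting."
  (cond ((< current target) (min (+ current step) target))
        ((> current target) (max (- current step) target))
        (t current)))

(defmacro seekf (place target step)
  "Destructive counterpart: store the seeked value back into PLACE."
  `(setf ,place (seek ,place ,target ,step)))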

GOOL also has a sophisticated event system.  It is possible to send an event (with parameters) to another object or objects. The object may then choose to do what it wishes with that event, run some code, change state, ignore it, etc., and report something back to the caller. These event handlers can be bound and unbound dynamically, allowing the object to change its behavior to basic events very flexibly.  For example:

:event	(defgcode (:params (event params))
		(reject-event-and-return
			((and (event is hit on the head)
				(< (interrupter transy) transy)))))

This says to ignore the hit on the head event when the interrupter (the sender) is below the receiver in the y dimension.

Another feature illustrated here is the indirect addressing mode, (interrupter transy), in which a variable of another object (whose pointer is in the variable interrupter) is accessed. Operations can locate and return object pointers, which can be used as parameters. For example:

(send event hit on the head (find the nearest object))

which sends hit on the head to the nearest object or:

(let ((target (find the nearest object)))
	(when (and target (type target turtle))
		(send event hit on the head)))

which sends hit on the head to the nearest object only if it is a turtle.

It is the GOOL compiler’s responsibility to turn this state into code that executes the abstraction (the above state becomes about 25 words of R3000 assembly code).  GOOL code is typically much smaller than traditional code for similar tasks because the compiler does all the bookkeeping for this interleaving, and it is all implicit in the runtime product.  In addition, it has a degree of code reuse which is practically unachievable in a normal language without extremely illegible source.

GOOL has full power macros which allow the language to be brought up to the level of the problem.  For example, many game programming tasks are more descriptive than a language like C is designed for. The following code causes a paragraph of text to appear on the bottom of the screen and scroll off the top.

(credit list (1 14)
	("AFTER THE")
	("DISAPPEARANCE")
	("OF HIS MENTOR,")
	("DR. NITRUS BRIO")
	("REDISCOVERED HIS")
	("FIRST LOVE:")
	(blank 1)
	("TENDING")
	("BAR"))

It does this by expanding into a program which creates a bunch of scrolling text objects as follows:

(defgopm credit list (params &rest body)
	"This macro iterates through the clauses in its
	body and transforms them into spawn credit line
	statements which create new credit line objects.
	It bookkeeps the y position downward by height
	each time."
  (let ((list)
        (y 0)
        (font (first params))
        (height (second params)))
    (dolist (i body)
      (cond
        ((listp i)
         (case (car i)
           (blank (incf y (second i)))
           (t (push `(spawn credit line ,(first i)
                       :y ,y :font ,font :h ,height)
                    list)
              (incf y 1))))))
    `(progn ,@(reverse list))))

(defgopm spawn credit line (line &key (y 0) (font 0) (h 18))
  "This macro is purely syntactic sugar, making the above macro somewhat easier."
  `(spawn 1 credit line
     (frame num ,line)
     (unit ,(* y h)) ,font))

The following state is the code for the actual credit line.  When one of these credit line objects is spawned it creates a line of text. It then proceeds to use its trans to crawl upward from the starting y position until it is off the screen, at which point it kills itself.

(defgstate credit line (stance)
  :handles (spawn credit line)
  :trans (defgcode ()
           (unless first frame
             (setf transy (+ (parent transy) transvy))
             (when (> transy (unit 140))
               (goto die fast))))
  :code  (defgcode (:params (text frame y font))
           (stomp action screen relative)
           (set frame group text)
           (setf transvy y)
           (setf transy (+ (parent transy) transvy))
           (sleep text frame)))

As a conglomerate, the above code creates a scrolling paragraph of arbitrary length from a descriptive block of code.  It does this by using the macro to transform the description into a program that creates a cluster of new line objects.  These line objects take their simple behavior and amplify it into a more substantial effect when created in concert. In a conventional language it would be typical to create some kind of data structure to describe different actions, and then interpret that. C in particular is a very poor language for description. Because C’s only complex data type, the structure, cannot even be declared inline (e.g. “struct foo bar={1,0}” is not legal except as a global), it is extremely awkward to describe complex things. It must be done with code, and the poor textual macro expander is not up to the task. Witness the wretchedness of descriptive APIs like that of X Windows. The contortions necessary to describe widget creation are unbelievable. Is it any wonder that people prefer to do interface work with resource files or Tcl/Tk, which are both more descriptive in nature?

Overall, having a custom language whose primitives and constructs both lend themselves to the general task (object programming), and are customizable to the specific task (a particular object), makes it much easier to write clean descriptive code very quickly. GOOL makes it possible to prototype a new creature or object in as little as 10 minutes. New things can be tried and quickly elaborated or discarded. If the object doesn’t work out it can be pulled from the game in seconds without leaving any hard to find and wasteful traces behind in the source. In addition, since GOOL is a compiled language produced by an advanced register coloring compiler with reductions, flow analysis, and simple continuations, it is at least as efficient as C, and in many cases more so because of its more specific knowledge of the task at hand.  The use of a custom compiler allows us to escape many of the classic problems of C.


A new 10th Crash post can be found HERE.


Crash Bandicoot – An Outsider’s Perspective (part 8)

This is part of a now lengthy series of posts on the making of Crash Bandicoot. Click here for the PREVIOUS or for the FIRST POST.

After Naughty Dog, Jason and I joined forces with another game industry veteran, Jason Kay (collectively, Jasons R & K are known as “the Jasons”). He was at Activision at the time of the Crash launch and offers his outsider’s perspective.

Although I would not meet Andy and Jason until after Crash 3 was released, the time around the launch of Crash Bandicoot was a fascinating time in the game business, and I believe that the launch of Crash, which was so far ahead of every other game of its generation in every aspect – technical achievement, production values, sound/music, design and balancing – caused everyone I knew in the business to rethink the games they were working on.

Warhawk: One of the best looking early PS1 games

It seems hard to imagine given the broad scope of games today – console games costing $50+ million, social games on Facebook with 100 million monthly average users, gesture controlled games, $.99 games on iPhone – how troubled the industry was before the release of Crash, which heralded the rebirth of console games after a dormant period and ushered in the era of the mega-blockbuster game we know today. In the year that Crash Bandicoot released, only 74 million games were sold across all platforms in the US, of which Crash accounted for nearly 5%. By 2010, more than 200 million games were sold, with the number one title, Call of Duty: Black Ops, selling “only” 12 million copies in the US – about 6% of the total market. In some ways, adjusted for scale, Crash was as big then as Call of Duty is today.

Twisted Metal - Another of the better early PS1 games

After the incredible success of Super Mario World and Sonic the Hedgehog, the game business was really in the doldrums, and it had been a boatload of fail for the so-called “rebirth of the console”. Sega had released a series of “not-quite-next-gen” peripherals for the incumbent Sega Genesis system (including the 32X and the truly awful Sega CD), and made vague promises about “forward compatibility” with their still-secret 32-bit 3D Saturn console. When the Saturn finally shipped, it was referred to by many people as “Two lies in One”, since it was neither compatible with any previous Sega hardware nor capable of doing much 3D. Sega further compounded their previous two mistakes by giving the console exclusively to then-dominant retailer Toys “R” Us, pissing off the rest of the retail community and pretty much assuring that console’s, and eventually Sega’s, demise in the hardware business.

Wipeout - at the time it looked (and sounded) good

The PlayStation had shipped in fall of 1995, but the initial onslaught of games all looked vaguely similar to Wipeout – since no one believed it was possible to stream data directly from the PS1’s CD drive, games laboriously unpacked single levels into the PS1’s paltry 2 MB of RAM (plus 1 MB of VRAM and 0.5 MB of sound RAM), then played regular CD (“redbook”) audio in a loop while the level played. So most games (including the games we had in development at Activision and were evaluating from third parties) all looked and played in a somewhat uninspiring fashion.

When Crash first released, I was a producer at then-upstart publisher Activision – now one of the incumbent powerhouses in the game business that everyone loves to hate – but at that time, Activision was a tiny company that had recently avoided imminent demise with MechWarrior 2, which was enjoying success as one of the first true 3D-based simulations for the hardcore PC game market. To put in perspective how small Activision was at that time: full year revenues were $86.6 million in 1996, versus over $4.45 billion in 2010, a jump of nearly 50x.

MechWarrior 2: 31st Century Combat DOS Front Cover

Jeffrey Zwelling, a friend of a friend who had started in the game business around the same time I did, worked at Crystal Dynamics as a producer on Gex. Jeffrey was the first person I knew to hear about Crash, and he tipped me off that something big was afoot right before E3 in 1996. Jeff was based in Silicon Valley, and a lot of the Naughty Dogs (and also Mark Cerny) had formerly worked at Crystal, so his intel was excellent. He kept warning me ominously that “something big” was coming; while he didn’t know exactly what it was, it was being referred to by people who’d seen it as a “Sonic Killer”, “Sony’s Mario”, and “the next mascot game”.

As soon as people got a glimpse of the game at E3 1996, the conspiracy mongering began and the volume on the Fear, Uncertainty and Doubt meter went to 11. In the pre-Internet absence of meaningful information stood a huge host of wild rumors and speculation. People “in the know” theorized that Naughty Dog had access to secret PlayStation specifications/registers/technical manuals that were only printed in Japanese and resided inside some sort of locked vault at Sony Computer Entertainment Japan. Numerous devs declared the Naughty Dog demo was “faked” in some way, running on a high-powered SGI workstation hidden behind the curtain at Sony’s booth. That rumor seems in hindsight to have been a conflation of two facts: the Nintendo 64 console, code-named “Project Reality”, was indeed very similar to a Silicon Graphics Indigo workstation, and the Crash team was indeed writing and designing the game on Silicon Graphics workstations.

Tomb Raider - Crash contemporary, and great game. But the graphics...

Everyone in the business knew how Sega had “done what Nintendon’t”, trouncing Nintendo with M-rated games and better titles in the 16-bit era, and most of the bets were that Nintendo was going to come roaring back to the #1 spot with the N64. Fortunately for Sony, Sega’s hardware was underpowered and underwhelming, and Nintendo’s N64 shipped a year later than the PlayStation 1. With so much of people’s attention focused on this looming battle, and the dismissive claims that what Naughty Dog was showing was “impossible”, most people underestimated both the PlayStation and Naughty Dog’s Crash Bandicoot.

Since no one that I knew had actually gotten a chance to play Crash at the show – the crowds were packed around the game – I fully expected that my unboxing of Crash 1 would be highly anti-climactic. I remember that Mitch Lasky (my then boss, later founder of Jamdat and now a partner at Benchmark) and I made our regular lunch ritual of visiting Electronics Boutique [ ANDY NOTE: at Naughty Dog this was affectionately known as Electronic Buttock ] (now GameStop) at the Westside Pavilion and picked up a copy of the game. We took the game back to our PS1 in the 7th Floor Conference Room at Activision, pressed start, and the rest was history. As the camera focused on Crash’s shoes and panned up as he warped in, I literally just about sh*t a brick. Most of the programmers we had talked to who were pitching games to us claimed that it was “impossible” to get more than 300-600 polygons on screen and maintain even a decent framerate. Most of the games of that era, a la Quake, had used a highly compressed color palette (primarily brown/gray in the case of Quake) to keep the total texture memory low. It seemed like every game was going to have blocky, ugly characters and a lot of muted colors, and most of the games released on the PS1 would in fact meet those criteria.

Mario 64 - Bright, pretty, 3D, not so detailed, but the only real contender -- but on a different machine

Yet in front of us, Andy and Jason and the rest of the Crash team showed us that when you eliminate the impossible, only the improbable remains. Right before my eyes was a beautiful, colorful world with what seemed like thousands of polys (Andy later told me that Crash 1 did in fact have over 1800 polygons per frame, and Crash 2 cracked 3,100 polys per frame – a far cry from what we had been told was “a faked demo” by numerous other PS1 development teams). The music was playful, curious and fun. The sound effects were luscious and the overall game experience felt, for the first time ever, like being a character in a classic Warner Brothers cartoon. Although I didn’t understand how the Dynamic Difficulty Adjustment (discussed in part 6) actually worked, I was truly amazed: it was the first game that everyone I knew who played games loved to play. There was none of the frustration of being stuck on one spot for days, no simply turning the game off never to play it again – everyone who played it seemed to play it from start to finish.

For us, it meant that we immediately raised our standards on things we were looking at. Games that had seemed really well done as prototypes a few weeks before now seemed ungainly, ugly, and crude. Crash made everyone in the game business “up their game.” And game players of the world were better off for it.

 

These posts continue with PART 9 HERE.

Detailed and Colorful - but most important fun

Certainly varied

Sorry for the lousy screen shots!

Crash Bandicoot as a Startup (part 7)

This is part of a now lengthy series of posts on the making of Crash Bandicoot. Click here for the PREVIOUS or for the FIRST POST.

Dave Baggett, Naughty Dog employee #1 (after Jason and me), throws his own thoughts on Crash Bandicoot into the ring:

This is a great telling of the Crash story, and brings back a lot of memories. Andy and Jason only touch on what is to me the most interesting aspect of this story, which was their own relationship. When I met them, they had been making games together — and selling them — literally since middle school. I remember meeting Andy for the first time in April 1992, at an MIT AI Lab orientation. He knew as much as I did about games and programming, was as passionate about it as I was, and was equally commercially-minded. I just assumed meeting someone like this was a consequence of the selectivity of MIT generally and the AI Lab in particular, which accepts about 25 students each year from a zillion applicants.

In the long run I found that assumption was wrong: Andy and Jason were ultimately unique in my experience. None of us on the Crash 1 team realized it, but as a team we were very much outliers. At 23, Andy and Jason had commercial, strategic-thinking, and negotiating skills that far exceeded those of most senior executives with decades of experience. These, combined with their own prodigious technical talents and skillful but at times happenstance hiring, produced a team that not only could compete with Miyamoto, but in some ways outdo him. (More on this in a moment.)

I still remember the moment I decided to bail on my Ph.D. and work for Andy and Jason as “employee #1”. I don’t think they saw themselves this way, but my archetype for them was John and Paul. (The Beatles, not the saints!) They were this crazy six-sigma-outlier yin/yang pair that had been grinding it out for literally years — even though they were still barely in their 20s. I knew these guys would change the world, and I wanted to be the George Harrison. One problem with this idea, however, was that they had been gigging together for so long that the idea of involving someone else in a really deep way — not just as an employee, but as a partner — was extremely challenging for them emotionally, and, I think, hard for them to conceptualize rationally from a business standpoint. This ultimately led to my leaving after Crash 2 — very sadly, but mostly for dispassionate “opportunity cost” reasons — though I continued to work with Josh Mancell on the music for Crash 3 and Crash Team Racing, and remained close friends with all the ‘Dogs.

Andy and Jason had evolved a peculiar working relationship that the rest of the team found highly amusing. Jason would stomp around raging about this or that being terrible and Andy would play the role of Star Trek’s Scotty — everything was totally impossible and Jason couldn’t possibly appreciate the immense challenges imposed by what he was really asking for. (As a programmer myself, I generally took Andy’s side in these debates, though I usually hid in my office when the yelling got above a certain decibel level.) Eventually, when matters were settled, Andy usually pounded out the result in 1/10th of the advertised time (also like Scotty). The rest of us couldn’t help but laugh at these confrontations — at times, Andy and Jason behaved like an old married couple. The very long work hours — literally 100-hour weeks — and the stress level definitely amplified everyone’s emotions, especially theirs.

Andy and Japanese Crash in the NDI offices

On the subject of Mario 64, I agree more with Andy than with Jason, and think that Jason’s view highlights something very interesting and powerful about his personality. At the time I thought — and in retrospect, I still think — that Mario 64 was clumsy and ugly. It was the work of a great genius very much making a transition into a new medium — like a painter’s first work in clay. Going from 2D to 3D made all the technical challenges of games harder — for both conceptual and algorithmic reasons — and Miyamoto had just as hard a time as us adapting traditional gameplay to this new framework. The difference was that Miyamoto was an artist, and refused to compromise. He was willing and able to make a game that was less “fun” but more aggressively novel. As a result, he gave gamers their first taste of glorious 3D open vistas — and that was intoxicating. But the truth is that Mario 64 just wasn’t that fun; Miyamoto’s 2D efforts at the time — Donkey Kong Country and Yoshi’s Island — were far more fun (and, in fact, some of my personal favorite games of all time, though I never would have admitted that out loud at the time). As Andy said, the camera algorithms were awful; we had an incredibly hard time with camera control in our more constrained rails environment, and the problem wasn’t really technically solved for open environments like Mario 64’s until many years later. Mario 64’s collision detection algorithms were crap as well — collision detection suffers from a “curse of dimensionality” that makes it much harder in 3D than in 2D, as we also found. At Naughty Dog, we combined my ridiculously ambitious octree approach — essentially, dividing the entire world up into variable-sized cubes — with Mark’s godlike assembly coding to produce something *barely* fast enough to work — and it took 9 months. This was the one area on Crash where I thought we might actually just fail — and without Mark and me turning it into a back-and-forth coding throw-down, we probably would have. (As an aside, some coders have a savant-like ability to map algorithms onto the weird opportunities and constraints imposed by a CPU; only Greg Omi — who worked with us on Crash 2 — was in the same league as Mark when it came to this, of the hundreds of programmers I’ve worked with.)
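
As an aside for the technically curious, the octree idea Dave describes (recursively dividing space into eight child cubes, so a collision query only has to visit the handful of leaf cubes it overlaps) can be sketched in a few lines of Lisp. This is purely an illustration of the data structure, not Naughty Dog’s actual code:

(defstruct onode
  center     ; (x y z) center of this cube
  half       ; half the length of the cube's edge
  polys      ; polygons stored here, if this is a leaf
  children)  ; list of 8 child onodes ordered by octant, or nil for a leaf

(defun locate-leaf (node point)
  "Descend to the leaf cube containing POINT by picking the octant
of POINT relative to each node's center."
  (if (null (onode-children node))
      node
    (let* ((c (onode-center node))
           (octant (+ (if (> (first point)  (first c))  1 0)
                      (if (> (second point) (second c)) 2 0)
                      (if (> (third point)  (third c))  4 0))))
      (locate-leaf (nth octant (onode-children node)) point))))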

But Jason was tormented by Mario 64, and by the towering figure of Miyamoto generally. Like Andy Grove, Jason was constantly paranoid and worked up about the competition. He consistently underrated his — and our — own efforts, and almost neurotically overrated those of his competitors. I saw this trait later in several other great business people I worked with, and it is one that, while maddening, I’ve found correlates with success.

Fifteen years later, I’m now on my third startup; ITA Software followed Naughty Dog, and now I’m doing a raw startup again. The Naughty Dog model set the mold for all my future thinking about startups, and so far each one has followed a similar pattern: you must have a very cohesive, hard-working, creative team early on. This team of 6-12 sets the pattern for the company’s entire future — whether it grows to 50, 500, or — I can only assume — 5000 employees. The Crash 1 team was one of those improbable assemblages of talent that can never quite be reproduced. And unlike our contemporaries, our team got lucky: as Andy said, we were able to “slot in” to a very low-probability opportunity. Yes, Andy and Jason, with Mark, had identified the slot, and that was prescient. But many things had to go our way for the slot to still be genuinely available. The Crash team was an improbably talented team that exploited an improbable opportunity. As a life-long entrepreneur, I’ve lived to participate in — and, now, try to create — teams like that. There’s nothing more gratifying in business.

 

Part 8 CONTINUES here with another guest post. Also, subscribe to the blog (on the right).


Also, if you liked this, vote it up at Reddit or Hacker News, and peek at my novel in progress: The Darkening Dream

or more posts on

GAMES or BOOKS/MOVIES/TV or WRITING or FOOD.

Making Crash Bandicoot – part 6

PREVIOUS installment, or the FIRST POST.

[ NOTE, Jason Rubin added his thoughts to all the parts now, so if you missed that, back up and read the second half of each. ]

 

Not only did we need to finish our E3 demo, but we needed a real name for the game — Willie the Wombat wasn’t going to cut it. Now, in the Naughty Dog office proper we knew he was a Bandicoot. In fact, we liked the idea of using an action name for him, like Crash, Dash, Smash, and Bash — fallout from the visceral reaction to smashing so many boxes.

Dr N. Cortex goes medieval on Universal Marketing

But the Universal Marketing department (of one) thought differently. They had hired one of those useless old-school toy marketing people, a frumpy fortyish woman about as divorced from our target audience – and the playing of video games – as possible. This seems to be a frequent problem with bigger companies: the mistaken idea that you can market an entertainment product if you aren’t also an enthusiastic customer of said product. On the other hand, everyone making the game played constantly. We had regular Bomberman tournaments, we could all debate the merits of control in Sonic vs Mario, and Dave was even a former Q*Bert world champion.

In any case, this obstacle (the marketing woman) wanted to call the game “Wuzzle the Wombat,” or “Ozzie the Otzel.” Fortunately, after much yelling we prevailed and Crash Bandicoot became… Crash Bandicoot.

Crash's hot girlfriend, Tawna

It’s also worth mentioning that she objected to Crash’s rather busty girlfriend (or Bandicoot-friend) on basic sexist principles. Now, Tawna wasn’t the most inspired of our character designs, more or less being Jessica Rabbit as a Bandicoot, and without the cool personality. But remember who generally played games like Crash. The same kind of guys we had been 5-10 years earlier.

The music also had to be cobbled together before E3 – and in classic video game development fashion had been left to the last minute. This task had been assigned to our nominal producer at Universal, a gentleman who mostly sat in his office and played Sexy Parodius. While of dubious benefit to the project, at least he loved video games. However, he proposed that instead of conventional music we create something called “the urban chaotic symphony,” in which the programmer (me) would cause sound effects such as bird chirps, car honks, grunts, and farting noises (actually listed and underlined) to be randomly selected and combined. When we rejected this innovative proposal, we were introduced to Mark Mothersbaugh of Devo and, more recently, Mutato Muzika. He and (mostly) Josh Mancell composed all the music for the games, produced by music aficionado and Naughty Dog programmer Dave Baggett. It helped that Dave actually knew the game inside and out.

Finally we arrive at E3, and the debut of the N64 and Mario 64. Gulp.

Jason (right) and I (left) at E3 1996

Mario was a bit of a double-edged sword for us. First of all, the attention it garnered helped force us into the limelight. Sega was engaged in the slow process of killing themselves with bad decisions and bad products, and so Sony and Nintendo found themselves head to head. This put Crash and Mario into the ring together, quite literally: the matchup was depicted on the cover of at least one game magazine (along with Sonic, who declined to enter the ring).

In any case, since Crash released about a month after Mario, the press often assumed that we had copied various elements, which always bugged us no end, as both games were developed with no real knowledge of each other. Crash was nearly beta by the time we saw Mario at E3, and gold mastered by the time the N64 shipped and we could play it. The two games took very different approaches to the then-unproven 3D CAG genre.

With Crash we decided to emphasize detailed cartoon visuals and classic Donkey Kong-style gameplay, so we used a camera on rails (albeit branching rails).

Given the N64’s VERY limited texture system and poly count, but also its smoothing and z-buffer, the Mario team chose a very loosely defined polygonal free-roaming world and a much more playground style of gameplay.

Mark & I watch Miyamoto play Crash

Personally when I first got my hands on Mario I was like WTF? How is anyone going to know what to do here? And although there was a pretty real sense of marvel in this funny new world, I never found it very fun. The early camera AI was brutally frustrating. And the Mario voiceover. I still cringe, “It’sa me, Mario!” Still the game was brilliantly innovative, although I remain convinced that if anyone but Miyamoto had made the game it would have flopped.

Really, the future lay in the hybrid of the two.

Critics loved Mario. Perhaps because many of them were Nintendo fanboys, perhaps because it was more innovative (and it was). But players loved both; they sure bought a LOT of Crash Bandicoots too — approximately 35-40 million units across our four PS1 games.

In a lot of ways Crash was the last of the great video game mascot characters, despite the fact that Sony never really wanted a mascot. We set out to fill this void, and made a game to do it, but we never really expected – only hoped – that it would happen. By the era of the PS2 and Xbox, the youthful generation of video game players had grown up, and the platforms began to appeal to a much wider age range. With this, and increased graphics horsepower that made possible more realistic games, came a shift to more mature subjects. The era of GTA, of Modern Warfare and Halo. Sophisticated and dark games mirroring R-rated action movies.

A part of me misses the simple but highly crafted comic fun that Crash represented.

 

Jason says:

There were so many great stories from Crash Development.  I’m sad that this is the last of 6 blog posts.  There is so much that has been missed.

One of my favorite memories relates to the collision detection.  Crash had more detailed environments than most games had attempted at that point, and there was no known solution for such complex collision detection in games.  Even after Crash came out, most developers just let their characters wade through most objects, and stuck to simple flat surfaces, but we wanted the character and the world to interact in a much more detailed fashion.

Andy and Dave called one of their friends at the Media Lab at MIT.  Basically, the Media Lab worked on state-of-the-art visual and computing problems.  They were, and still are, some of the most advanced in the world.  Andy and Dave asked their friend what high-detail collision detection solutions were kicking around at that time.

The next day the friend called back and said he had the perfect solution.  Unfortunately, it demanded a Cray supercomputer and hundreds (thousands?) of PlayStations’ worth of memory to work in real-time.

Andy and Dave hung up and started to come up with something on their own.

Naming Crash was one of the hardest things I have ever had to take part in.  It became so confused, so frustrating, so combative, and so tiring that I remember starting to think that Willie Wombat sounded good!

Credit goes to Taylor and Dave for combining Crash and Bandicoot for the first time.

If Andy and I deserve credit for anything name related, it is for viciously defending our character from the ravages of the Universal Marketing Death Squad.  I remember the name mooted by Universal to be Wez or Wezzy Wombat, but as I said things were very confused, and frankly it doesn’t matter what the alternate name was.

When Universal stated that, as producer, they were going to pick the name, Andy and I walked the entire team (all 7 of us!) into the head of Universal Interactive’s office and said, “either we go with ‘Crash Bandicoot’, or you can name the game whatever you want and finish the development yourself.”

I think the result is obvious.

This was not the only time this tactic had to be used with Universal.  With all the “everyone grab your stuff and head to the office at the other end of the hall” moments, I don’t know how we even finished the game.

But we didn’t win every battle.  Crash’s girlfriend Tawna ended up on the chopping block after Crash 1.  We tried to choose our battles wisely.  Unlike the name “Crash Bandicoot”, Tawna wasn’t worth fighting for.

There was so much negativity and dispute with Universal Interactive that it is a miracle it didn’t scupper the game.

For example, Naughty Dog was told that it wasn’t “allowed” to go to the first E3.  This was part of a continuing attempt by Universal Interactive to take credit for the product.  It might have worked if Universal were the parents and Naughty Dog their six-year-old child, but we were an independent company working under contract.  Nobody was going to tell us what we could or couldn’t do.

There were also some leaked copies of the temporary box cover and press materials for E3, upon which Naughty Dog’s logo had “mysteriously,” and in direct conflict with the letter and spirit of our contract, been forgotten.

My response to both was to draft and print 1000 copies of a glitzy document entitled “Naughty Dog, creator and developer of Crash Bandicoot,” ostensibly to hand out in front of the Crash display at E3.  As a “courtesy,” I passed these flyers out “for review” to Universal Interactive beforehand.

The head of Universal Interactive came as close to literally flipping his lid as a person can come.  He stormed into Andy’s office, made some extremely threatening comments, and then promptly went off to a shooting range in order to produce a bullet-riddled target to hang on his office door.

Things did get heated from time to time.

And just for the record, kudos to Mark for surviving all the hassle.  He was an employee of Universal Interactive yet completely uninvolved in any chicanery.  And as I’ve said before, he was always the Nth Dog, so times like these were harder on him than anyone else.

But all this is keeping us from discussing E3!

Ah the big show…

Sony booted one of its own internal products to give Crash the prime spot on the floor.  Walking in and seeing dozens of monitors playing the game was a moment I will never forget.

But I don’t think Andy and I had spent more than a moment looking at our triumph before we went off to fight the hordes at the Mario 64 consoles over at Nintendo’s booth for a chance on the controller.  As amazing as it was seeing our wall of monitors, seeing the lines for Mario made my heart drop.  Could it be that good?

Unlike Andy, I actually think Mario 64 WAS that good.

Mario 64 was a better game than the first Crash Bandicoot.

Miyamoto-san was at the top of his game and we were just getting started.  Crash was our first platformer, remember, and thus it lacked many of the gameplay nuances that Mario had.

Mario 64’s controls and balance were just better.

And then there was our annoying way of making players earn continues.  This was a major mistake.  It makes the players who need lives fail, while boring the players who don’t.  It is the opposite of good game balance.

We were already learning.  We had realized that if a novice player died a lot of times, we could give them an Aku Aku at the start of a round and they had a better chance to progress.  And we figured out that if you died a lot when running from the boulder, we could just slow the boulder down a little each time.  If you died too much a fruit crate would suddenly become a continue point.  Eventually everyone succeeded at Crash.

Our mantra became: help weaker players without changing the game for the better players.

We called all this DDA, Dynamic Difficulty Adjustment, and at the time the extent to which we did it was pretty novel.  It would lead later Crash games to be the inclusive, perfectly balanced games they became.   Good player, bad player, everyone loved Crash games.  They never realized it was because they were all playing a slightly different game, balanced for their specific needs.
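
To make that concrete, here is a minimal sketch of the kind of bookkeeping involved. Every name and threshold below is invented for illustration; the real tuning was per-level, data-driven, and far more subtle:

```cpp
#include <algorithm>

// Hypothetical sketch of Crash-style DDA -- all names and thresholds
// here are invented; the actual system was tuned per level.
struct Player { bool hasAkuAku = false; };
struct Level  { bool extraCheckpoint = false; };

struct DifficultyState {
    int   deaths       = 0;     // deaths on the current stretch
    float boulderSpeed = 1.0f;  // multiplier on the chase speed
};

void onPlayerDeath(DifficultyState& s) {
    s.deaths++;
    // Each death in the boulder chase shaves a little off its speed.
    s.boulderSpeed = std::max(0.8f, s.boulderSpeed - 0.02f);
}

void onRespawn(DifficultyState& s, Player& p, Level& lvl) {
    // Struggling players start with an Aku Aku mask (one free hit)...
    if (s.deaths >= 3) p.hasAkuAku = true;
    // ...and if they keep dying, a fruit crate quietly becomes a
    // continue point, so the next death restarts them closer.
    if (s.deaths >= 6) lvl.extraCheckpoint = true;
}
```

Note the property that made the mantra work: a player who never dies never triggers any of it, so for the better players the game is literally unchanged.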

But for all of our triumphant balancing attempts, we still made many mistakes in the first title.

Miyamoto-san didn’t make these mistakes.  3D gameplay choice and art aside, Mario 64 was a better game.

And that isn’t to say that we didn’t have some serious advantages of our own.

For example, Crash looked better.  I am sure there will be disagreement with this statement.  But when 100 people were lined up and asked which looked more “next generation” (a term like ‘tomorrow’ that is always just over the horizon), most people pointed to Crash.

If I had to guess what Miyamoto-san was thinking when he was playing Crash in the photo above it was probably “damn this game looks good.”

Of course he had consciously made the decision to forgo the complex worlds Crash contained.  The N64 had prettier polygons, but fewer of them to offer.  Crash Bandicoot could not be made on the N64.  Of course Mario 64 couldn’t be done on the PlayStation either.  The PlayStation sucked at big polygons, specifically scissoring them without warping textures.  Mario 64 relied on big polygons.
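
Some background on that warping, for those who want it: the PlayStation GPU interpolated texture coordinates linearly in screen space (affine mapping), ignoring depth, whereas perspective-correct mapping interpolates u/z, v/z, and 1/z and divides per pixel. A small illustrative sketch of the difference (not any console's actual rasterizer code):

```cpp
// Why big polygons warped on the PS1: its GPU interpolated texture
// coordinates linearly in screen space (affine), ignoring depth.
struct Vert { float sx, sy; float z; float u, v; };  // screen pos, depth, UV

// Affine: what the PS1 did. Cheap, but wrong whenever the two ends
// differ in depth, and the error grows with on-screen polygon size.
void affineUV(const Vert& a, const Vert& b, float t, float& u, float& v) {
    u = a.u + t * (b.u - a.u);
    v = a.v + t * (b.v - a.v);
}

// Perspective-correct: interpolate u/z, v/z, and 1/z linearly in
// screen space, then divide per pixel (what later hardware did).
void correctUV(const Vert& a, const Vert& b, float t, float& u, float& v) {
    float invZ = (1.0f / a.z) + t * ((1.0f / b.z) - (1.0f / a.z));
    float uoz  = (a.u / a.z) + t * ((b.u / b.z) - (a.u / a.z));
    float voz  = (a.v / a.z) + t * ((b.v / b.z) - (a.v / a.z));
    u = uoz / invZ;
    v = voz / invZ;
}
```

The affine error grows with a polygon's on-screen size and depth range, so the standard PS1 workaround was to subdivide big polygons into small ones — exactly the weakness Jason describes.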

But more fundamentally, the open world he chose would tax ANY system out at that time.   Mario 64 couldn’t be open and any more detailed than it was.  Miyamoto-san had chosen open and that meant simple.

Spyro later split the difference with walled open worlds, but at E3 1996 there was only the choice between the complex visuals of Crash and the crayon-simple expansiveness of Mario 64.

Yes, Crash was a throwback to old games and on “rails”.  But Mario 64 just didn’t look (as much) like a Pixar movie.  That created space for an argument, and thus one of the great wars between games, and by proxy consoles, could be fought.

I believe, right or wrong, that Crash won that comparison when it got to the shelves.

And this was just the beginning.

Unlike Miyamoto-san, Naughty Dog was willing to forgo the light of day to bring out a sequel to Crash Bandicoot one year later, in September 1997.  By comparison, there wouldn’t be another Mario platformer until “Super Mario Sunshine” in 2002.

We took what we had learned from Crash 1, and from Mario 64 for that matter, and went back to the drawing board.   Crash 2 was re-built from the ground up.  Everything was improved.  But most importantly we focused on the gameplay.

Crash Bandicoot had taxed us to our limits.  Much of that time had been spent figuring out what the game would be, and then getting it working.

The second game could be built on the platform and successes of the first, but also on its mistakes.  The same would eventually be true of Jak and Daxter, and, though I had no hand in the games, is probably true of Uncharted.  While Andy and I led Naughty Dog it had, and from the outside seems to still have, a relentless pursuit of improvement.  Historically, that has meant that the second game in a series tends to be a better game.

Crash 2 would be a MUCH better game than Crash 1.  I would even argue that Crash 2 would end up being as good, if not better, than Mario 64.

But that is a story for another day.

 

This (sort of) continues with a virtual part 7 by Dave Baggett with his thoughts on Crash.

Also, subscribe to the blog (on the right).


Also, if you liked this, vote it up at Reddit or Hacker News, and peek at my novel in progress: The Darkening Dream

or more posts on

GAMES or BOOKS/MOVIES/TV or WRITING or FOOD.

The Limited Edition Launch Poster

Making Crash Bandicoot – part 5

PREVIOUS installment, or the FIRST POST.

[ NOTE, Jason Rubin added his thoughts to all the parts now, so if you missed that, back up and read the second half of each. ]

 

A Bandicoot, his beach, and his crates

But even once the core gameplay worked, these cool levels were missing something. We’d spent so many polygons on our detailed backgrounds and “realistic” cartoon characters that the enemies weren’t that dense, so everything felt a bit empty.

We’d created the wumpa fruit pickup (carefully rendered in 3D into a series of textures — burning a big chunk of our vram — but allowing us to have lots of them on screen), and they were okay, but not super exciting.

Enter the crates. One Saturday in January 1996, Jason and I were driving to work (we worked 7 days a week, from approximately 10am to 4am – no one said video game making was easy). We knew we needed something else, we knew it had to be low polygon, and ideally, multiple types of it could be combined to interesting effect. We’d been thinking about the objects in various puzzle games.

So crates. How much lower poly could you get? Crates could hold stuff. They could explode, they could bounce or drop, they could stack, they could be used as switches to trigger other things. Perfect.

So that Saturday we scrapped whatever else we had planned to do and I coded the crates while Jason modeled a few, an explosion, and drew some quick textures.

About six hours later we had the basic palette of Crash 1 crates going: normal, life crate, random crate, continue crate, bouncy crate, TNT crate, invisible crate, switch crate. We had the stacking logic that let them fall down on each other, or even bounce on each other. They were awesome. And smashing them was so much fun.
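
Part of why the crates were such a cheap win shows up even in a toy sketch: one low-poly mesh, one smash handler, and a small per-type behavior table. The C++ below is hypothetical flavor, not the actual Crash code:

```cpp
// Hypothetical sketch of the crate palette -- one shared mesh, one
// smash handler, and a per-type behavior switch. Not real Crash code.
enum class CrateType {
    Normal, Life, Random, Continue, Bouncy, TNT, Invisible, Switch
};

struct Crate {
    CrateType type;
    bool      smashed = false;
};

void onSmash(Crate& c /*, World& world, Player& player */) {
    if (c.smashed) return;
    c.smashed = true;
    switch (c.type) {
        case CrateType::Normal:   /* spill wumpa fruit */          break;
        case CrateType::Life:     /* grant an extra life */        break;
        case CrateType::Continue: /* set the respawn point here */ break;
        case CrateType::TNT:      /* start the 3..2..1 fuse */     break;
        case CrateType::Switch:   /* trigger the linked crates */  break;
        default:                  /* bouncy, invisible, random */  break;
    }
}
```

Since every variant shares the same mesh, physics, and smash logic, each new crate type cost almost nothing to add, which is how a whole palette of them appeared in a single Saturday.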

Over the next few days we threw crates into the levels with abandon, and formerly dull spots with nothing to do became great fun. Plus, in typical game fashion, tempting crates could be combined with in-game menaces for added gameplay advantage. We even used them as the basis for our bonus levels (HERE in video). We also kept working on the feel and effects of crate smashing and pickup collection. I coded them again and again, going for a pinball-machine-like ringing up of the score. One of the best things about the crates is that you could smash a bunch, slurp up the contents, and 5-10 seconds later the wumpa and one-ups would still be ringing out.

This was all sold by the sound effects, executed by Mike Gollom for Crash 1-3. He managed to dig up the zaniest and best sounds. The wumpa slurp and the cha-ching of the one up are priceless. As one of our Crash 2 programmers used to say, “the sounds make the game look better.”

For some reason, years later, when we got around to Jak & Daxter we dropped the crate concept as “childish,” while our friends and amiable competitors at Insomniac Games carried them over into Ratchet & Clank. They remained a great source of cheap fun, and I scratch my head at the decision to move on.

Now, by winter 95-96, the game was looking very cool, albeit very much a work-in-progress. The combination of our pre-calculation, high resolution, high poly-count, and 30 fps animation gave it a completely unique look on the machine. So much so that many viewers thought it a trick. But we had kept the whole project pretty much under wraps. One of the dirty secrets of the Sony “developer contract” was that, unlike its more common “publisher” cousin, it didn’t require presentation to Sony during development, as they assumed we’d eventually have to get a publisher. Around Thanksgiving 1995, one of our artists, Taylor Kurosaki, who had a TV editing background, and I took footage from the game and spent two days editing it into a 2-minute “preview tape.” We deliberately leaked this to a friend at Sony so that the brass would see it.

They liked what they saw.

Management shakeups at Sony slowed the process, but by March of 1996 Sony and Universal had struck a deal for Sony to do the publishing. While Sony never officially declared us their mascot, in all practical senses we became one. Heading into the 1996 E3 (May/June) we at Naughty Dog were working ourselves into oblivion to get the whole game presentable. Rumors going into E3 spoke of Nintendo’s new machine, the misleadingly named N64 (it’s really 32 bit) and Miyamoto’s terrifying competitive shadow, Mario 64.

Crash and his girl make a getaway

For two years we had been carefully studying every 3D character game. Hell, we’d been poring over even the slightest rumor – hotly debated at the 3am deli takeout dinners. Fortunately for us, they’d all sucked. Really sucked. Does anyone remember Floating Runner? But Mario, that wasn’t going to suck. However, before E3 1996 all we saw were a couple of screen shots – and that only a few weeks before. Crash was pretty much done. Well, at least we thought so.

Now, we had seen some juicy magazine articles on Tomb Raider, but we really didn’t worry much about that because it was such a different kind of game: a Raiders of the Lost Ark type adventure game starring a chick with guns. Cool, but different. We’d made a cartoon action CAG aimed at the huge “everybody including kids” market.

Mario was our competition.

 

Jason says:

The empty space had plagued us for a long time.  We couldn’t have too many enemies on screen at the same time.  Even though the skunks or turtles were only 50-100 polygons each, we could show two or three at most.  The rest was spent on Crash and the Background.  Two or three skunks was fine for a challenge, but it meant the next challenge either had to be part of the background, like a pit, or far away.  If two skunk challenges came back to back there was a huge amount of boring ground to cover between them.

Enter the crates.   The crates weren’t put into Crash until just before Alpha, or the first “fully playable” version of the game.

Andy must have programmed the “Dynamite Crate/Crate/Dynamite Crate” puzzle 1000 times to get it right.  It is just hard enough to spin the middle crate out without blowing up the other two, but not so hard that it isn’t worth trying for a few wumpa fruit.  Getting someone to risk a Life for 1/20th of a Life (100 wumpa fruit earn an extra life, so a handful of fruit is worth about that) is a fine balancing act!

Eventually the crates led to Crash’s name.  Less than a month after we put them in, everyone realized that they were the heart of the game.  Crash’s crashing through them not only filled up the empty spots; the other challenges ended up filling the time between crate challenges!

This isn’t the place for an in depth retelling of the intrigue behind the Sony/Crash relationship, but two stories must be told.

The first is Sony’s first viewing of Crash in person.  Kelly Flock was the first Sony employee to see Crash live [ Andy NOTE: running, not on videotape ].  He was sent, I think, to see if our videotape was faked!

Kelly is a smart guy, and a good game critic, but he had a lot more to worry about than just gameplay.  For example, whether Crash was physically good for the hardware!

Andy had given Kelly a rough idea of how we were getting so much detail through the system: spooling.  Kelly asked Andy if he understood correctly that any move forward or backward in a level entailed loading in new data, a CD “hit.”  Andy proudly stated that indeed it did.  Kelly asked how many of these CD hits Andy thought a gamer that finished Crash would have.  Andy did some thinking and off the top of his head said “Roughly 120,000.”  Kelly became very silent for a moment and then quietly mumbled “the PlayStation CD drive is ‘rated’ for 70,000.”

Kelly thought some more and said “let’s not mention that to anyone” and went back to get Sony on board with Crash.
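
For context on what those “hits” were mechanically: with spooling, a level lives on disc in chunks, and RAM only ever holds a small window of chunks around the player, so nearly any forward or backward progress forces a CD seek and read. A hypothetical sketch, with the names and the (much simplified) policy invented:

```cpp
// Hypothetical sketch of level spooling -- names and policy invented;
// the real paging system, keyed to Crash's position on the rail, was
// far subtler. RAM holds only a sliding window of level chunks.
struct Spooler {
    int  windowFirst = 0;   // first chunk currently resident in RAM
    int  windowSize  = 3;   // how many chunks fit in memory at once
    long cdHits      = 0;   // each load = one CD seek + read ("hit")

    void update(int playerChunk) {
        // Slide the resident window so the player stays inside it;
        // every slide evicts one chunk and reads a new one off the CD.
        while (playerChunk >= windowFirst + windowSize) { windowFirst++; cdHits++; }
        while (playerChunk <  windowFirst)              { windowFirst--; cdHits++; }
    }
};
```

Multiply those loads by every death, backtrack, and replay across a full playthrough and Andy's off-the-cuff 120,000 stops sounding far-fetched.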

The second story that can’t be glossed over was our first meeting with the Sony executives from Japan.  Up until this point, we had only dealt with Sony America, who got Crash’s “vibe”.  But the Japanese were not so sure.

We had been handed a document that compared Crash with Mario and Nights, or at least what was known of the games at the time.  Though Crash was rated favorably in “graphics” and some other categories, two things stood out as weaknesses.  The first was that Sony Japan didn’t like the character much, and the second was a column titled “heritage” that listed Mario and Sonic as “Japanese” and Crash as “other.”  The two negatives were related.

Let us remember that in 1995 there was Japan, and then there was the rest of the world in video games.  Japan dominated the development of the best games and all the hardware.  It is fair to say that absent any other information, the Japanese game WAS probably the better one.

Mark presided over the meeting with the executives.  He not only spoke Japanese, but also was very well respected for his work on Sonic 2 and for his years at Sega in Japan.  I could see from the look in Mark’s eyes that our renderings of Crash, made specifically for the meeting, did not impress them.

We took a break, during which it was clear that Sony was interested in Crash for the US alone, hardly a “mascot” crowning.  I stared at the images we had done.  Primitive by today’s standards, but back then they were reasonably sexy renderings that had been hand retouched by Charlotte for most of the previous 48 hours.  She was fried.

I walked over to her.  I think she could barely hold her eyes open.  I had spent the previous month spending all of my free time (4am-10am) studying Anime and Manga.  I read all the books available at that time in English on the subject.  All three!  I also watched dozens of movies.  I looked at competitive characters in the video game space.  I obsessed, but I obsessed from America.  I had never been to Japan.

I asked Charlotte if she could close Crash’s huge smiling mouth, making him seem less aggressive.   I asked her to change Crash’s eyes from green to two small black “pac-man” shapes.  And I asked her to make Crash’s spike smaller.  And I told her she had less than 15 minutes.  With what must have been her last energy she banged it out.

I held up the resulting printout 15 minutes later.

Sony Japan bought off on Crash for the international market.

I don’t want to make the decision on their part seem arbitrary.  Naughty Dog would do a huge amount of work after this on the game for Japan, and even then we would always release a Japan-specific build.  Whether it was giving Aku Aku pop-up text instructions, or replacing a Crash-smashing “death” that reminded players of the severed head and shoes left by a serial killer loose in Japan during Crash 2’s release, we focused on Japan and fought hard for acceptance and success.

We relied on our Japanese producers, including Shuhei Yoshida, who was assigned shortly after this meeting, to help us overcome our understandable ignorance of what would work in Japan.  And Sony Japan’s marketing department basically built their own Crash from the ground up for the marketing push.

Maybe Charlotte’s changes showed Sony that there was a glimmer of hope for Crash in Japan.  Maybe they just saw how desperate we were to please and couldn’t say no.  Maybe Universal put something in the coffee they had during the break.

Who knows, but Crash was now a big part of the international PlayStation push.  So there were more important things for us to worry about than Sony and the deal:

The fear of Miyamoto was thick at Naughty Dog during the entire Crash development period.  We knew eventually he would come out with another Mario, but we were hoping, praying even, that it would be a year after we launched.

Unfortunately that was not to be.  We started seeing leaks of video of the game.

It was immediately obvious that it was a different type of game: truly open.  That scared us.  But when we saw the graphics we couldn’t believe it.  I know there will be some that take this as heresy, but when we saw the blocky, simple, open world we breathed a sigh of relief.  I think I called it “I, Robot Mario,” evoking the first 3D polygonal game.

Of course we hadn’t played it, so we knew we couldn’t pass judgment until we did.  That would happen at E3.


CONTINUED in PART 6 or

more on GAMES or BOOKS/MOVIES/TV or WRITING or FOOD.

The Big Fight!

Making Crash Bandicoot – part 1

Crash Bandicoot cover

In the summer of 1994 Naughty Dog, Inc. was still a two-man company, myself and my longtime partner Jason Rubin. Over the preceding eight years, we had published six games as a lean and mean duo, but the time had come to expand.

In 1993 and 1994 we invested our own money to develop the 3DO fighting game, Way of the Warrior. In the summer of 1994 we finished it and sold the rights to Universal Studios. At the same time we agreed to a “housekeeping” deal with Universal, which meant moving to LA, and for me bailing out on my M.I.T. PhD halfway. It certainly didn’t turn out to be a bad decision.

Jason and I had been debating our next game for months, but the three-day drive from Boston to LA provided ample opportunity. Having studied arcade games intensely (yeah, in 1994 they were still relevant) we couldn’t help but notice that 2 or 3 of the leading genres had really begun making the transition into full 3D rendering.

Racing had, with Ridge Racer and Virtua Racing. Fighting had, with Virtua Fighter. And gun games had, with Virtua Cop. Racing was clearly 100% better in 3D, and while Virtua Fighter wasn’t as playable as Street Fighter, the writing was on the wall.

Sensing opportunity, we turned to our own favorite genre, the character platform action game (CAG for short). In the 80s and early 90s the best sellers on home systems were dominated by CAGs and their cousins (like “walk to the right and punch” or “walk to the right and shoot”). Top examples were Mario, Sonic, and our personal recent favorite, Donkey Kong Country.

So on the second day of the drive, passing Chicago and traveling through America’s long flat heartland, fed on McDonalds, and accompanied by a gassy Labrador/Ridgeback mix (also fed on McDonalds), the idea came to us.

We called it the “Sonic’s Ass” game. And it was born from the question: what would a 3D CAG be like? Well, we thought, you’d spend a lot of time looking at “Sonic’s Ass.” Aside from the difficulties of identifying with a character only viewed in posterior, it seemed cool. But we worried about the camera, dizziness, and the player’s ability to judge depth – more on that later.

Jason, Andy & Morgan on arriving at Universal

Before leaving Boston we’d hired our first employee (who didn’t start full time until January 1995), a brilliant programmer and M.I.T. buddy of mine named Dave Baggett. We were also excited to work closely with Universal VP Mark Cerny, who had made the original Marble Madness and Sonic 2. In California, in 1994, this foursome of me, Jason, Dave, and Mark were the main creative contributors to the game that would become Crash Bandicoot.

We all agreed that the “Sonic’s Ass,” game was an awesome idea. As far as we knew, no one had even begun work on bringing the best-selling-but-notoriously-difficult CAG to 3D. Shigeru Miyamoto, the creator of Mario, was said to be working on Yoshi’s Island, his massive ode to 2D action.

But an important initial question was “which system?”

The 3DO was DOA, but we also got our hands on specs for the upcoming Sega Saturn, the Sega 32X, and the mysterious Sony PlayStation. The decision really didn’t take very long.  3DO: poor 3D power, and no sales. 32X: an unholy Frankenstein’s monster – and no sales. Saturn: also a crazy hybrid design, and really clunky dev units. Then there was the Sony. Their track record in video games was null, but it was a sexy company and a sexy machine – by far the best of the lot. I won’t even bring up the Jaguar.

So we signed the mega-harsh Sony “developer agreement” (pretty much the only non-publisher to ever do so) and forked out like $35,000 for a dev unit.  Gulp.  But the real thing that cinched the deal in Sony’s favor wasn’t the machine, but…

Before we continue to part 2 below, my partner and friend Jason Rubin offers the following thoughts on this section:

Andy and I always liked trying to find opportunities that others had missed.  Fill holes, in a sense.  We had done Way of the Warrior in large part because the most popular games of the time were fighting games and the new 3DO system didn’t have one.  Our decision to do a character action game on the PlayStation was based not only on bringing the most popular genre on consoles into 3D, but also on the fact that Sega already had Sonic and Nintendo already had Mario.  Instead of running headlong into either of these creative geniuses’ backyards, we decided to take our ball to a field with no competition.

Filling a hole had worked to an extent with Way of the Warrior.  The press immediately used Way as a yardstick to make a comparison point against other systems and their fighting games.  This gave it a presence that the game itself might never have had.  And as a result, ardent fans of the system would leap to defend the title even when perfectly fair points were made against it.  The diagonal moves were hard to pull off because the joypad on the 3DO sucked?  No problem, said the fans, Way of the Warrior plays fantastically if you just loosen the screws on the back of the joypad.

Why couldn’t the same effect work with a character action game on PlayStation?

And remember, at the time these games were the top of the pile.  It is hard to look at the video game shelves today and think that only 15 years ago childish characters dominated it.  There were first person shooters on the PC, of course, but sales of even the biggest of them couldn’t compare to Mario and Sonic.  Even second tier character games often outsold big “adult” games.

It’s also easy to forget how many possible alternatives there were along the way.  Most of Nebraska was filled with talk of a game called “Alosaurus and Dinestein,” which was to have a Back to the Future-like plot with dinosaurs in a 2D side-scrolling character action game.  I still like the name.

The “Sonic’s ass” nomenclature was more than a casual reference to the blue mascot turned 90 degrees into the screen.  It defined the key problem in moving a 2D game into the third dimension:  you would always be looking at the character’s ass.  This might play well (it had never been tried), but it certainly would not be the best way to present a character.

Our solution, which evolved over the next 2 years, was multi-fold.  First, the character would start the game facing the screen (more on this later).  Second, there would be 2D levels that guaranteed quality of gameplay and a chance to see the character in a familiar pose, allowing comparison against old 2D games.  And third, we would attempt the reverse of a Sonic’s ass level – the run INTO the screen – which became the legendary boulder levels. [ NOTE from Andy, more on that in part 4 ]

It may have been this very Sonic’s ass problem that caused Naka-san to “cop out” of making a true 3D game with Nights for the Saturn.  I also believe, but have no proof, that he felt so unsure of the move to 3D that Sega didn’t want to risk Sonic on that first experimental title.  Instead they created a new character.  This lost Sega the goodwill that Sonic would have brought to the three-way game comparison that eventually ensued.  That ended up working in our favor.

Of course Miyamoto-san did not have this problem.  He created a truly new type of character action game with Mario 64.  The controls and open world allowed you to see the character from all sides.  Eventually this proved to be the future of 3D character games.  But at the time it had disadvantages.  More on that later.

The concept of making a mascot game for the PlayStation was easy.  The odds of succeeding were next to nil.  Remember, we were two 24-year-olds whose biggest title to date had not reached 100,000 units sold!  But if there was something we never lacked, it was confidence.

NEXT PART [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11] PART 11 is brand new 08/13/11.

The index of all Crash posts is here.

And peek at my novel in progress: The Darkening Dream

or more on GAMES or BOOKS/MOVIES/TV or WRITING or FOOD

The Crash Bandicoot in-game model. His only texture was the spots on his back, but every vertex was lovingly placed by Jason

How do I get a job designing video games?

If I had a penny for every time I’ve been asked this question…

Game developers have only a few broad types of employees. Excluding administrative ones like office management, HR, and IT, broadly the team has Programmers, Artists, Sound Engineers, Game Designers, and Testers (some also have Producers, but at Naughty Dog we didn’t believe in them, so we distributed their work among the team leads). Of these jobs, only “Game Designer” is “purely creative” per se. Truth is, on a good team all game jobs are creative, but designers are alone in that they don’t have a craftsman’s trade.

Except they do, because game design requires a lot of craftsmanship. The trick is, it’s not something you can have learned anywhere else but by making games.

Programmers can write some other kind of application and demonstrate their coding skills. Artists can show off awesome models, animation, textures, lighting, sketches, etc. Externally, at home or school, an artist can learn to use art tools to build good-looking art. It can be seen. He can say, “I modeled all of that in 2 weeks, although my friend did the textures.”

Game designers have to learn on the job. While all good game designers LOVE video games, not all lovers of video games make good game designers. There are different sub-types of designer, and all of them require many specific skills and personality traits. Creativity, organization, obscene work effort, organization, creativity, organization, organization, cleverness, willingness to take a beating, willingness to stand up for and demand what you believe is good, grace to admit when your idea sucked ass.

So how do you learn this stuff? How do you demonstrate it to a prospective employer? Tough.

Some of it you learn by playing insane amounts of games. Better yet, you make games. But… unlike a programmer or artist, it’s kinda hard for a designer alone to make anything. So you need to hook up with a great artist friend and a great programmer friend and make something cool. There are school programs now for this too, but the projects don’t have the sustained scope, scale, brutality, hideous cruelty, pain, and near-death quality that real game development has. No. Not even close, not even a teensy bit.

An old method was to become a game tester and hope that the brass would notice your organizational skills, creativity, etc., and promote you to a junior designer position. This will probably still work sometimes. It requires a lot of stamina and a high tolerance for day-old hot wings, dirty bare boy-feet, and stale Krispy Kremes. But then again, if you can’t stomach that stuff you don’t belong in games.

You could also try to grab some kind of coveted internship and prove yourself. That also requires extremely high self-motivation. Then again, if you don’t have that, then forget trying to be a game designer anyway.

Maybe the bigger companies take junior designers with no experience. At Naughty Dog we never did.

But it’s still possible with an artist friend and a programmer friend to make a cool iPhone / Flash / etc. game. Do it. Do it again. Do it again. Do it again. Do it again. When a couple of them are good, you’ll find a job.

NOTE: I originally posted this on Quora, and if you want to see the whole thread CLICK HERE.

Also, if you want to read more of my posts on Writing/Creating, CLICK HERE.

TV Review: Buffy the Vampire Slayer – part 1

Title: Buffy the Vampire Slayer

Creator: Joss Whedon

Genre: Comedic Teen Contemporary Fantasy

Watched: Winter 2004-05, Summer 2009, Winter 2010-11

Summary: Best TV show of all time.

 

As a diehard vampire fan I saw the movie version of Buffy when it came out. I hated it so much I used to mock it as my pre-Twilight example of lame vampires. I have this requirement that vampires need to be menacing, even if comic (Fright Night) or romantic (Interview with the Vampire). The Buffy movie undead were just flaccid.

When the TV show debuted, I was in the midst of the busiest year of my life, the year of Crash Bandicoot 2, when I was in the office every single day (7 days a week) between New Year’s and September 8th. Besides, the movie had been dumb. So the show even became a punching bag of mine (although I hadn’t seen it at the time), used to illustrate Hollywood’s creative drought: hey, they’d made a show based on a terrible movie that hadn’t even made much money.

Oh, how wrong I was.

Finally, in November of 2004, after having “retired” from Naughty Dog, my wife having insisted for years that the show was good, I succumbed and ordered the first season on DVD. Thus began an obsessive binge during which I watched all seven seasons, plus five seasons of Angel, back to back over the next three months. Generally I consumed three or more episodes a day, including watching 18 episodes of season 3 in one continuous sitting (home Sunday with a cold). My only breaks were the week back east for Thanksgiving and three weeks we spent in Sicily (yum!). Four and a half years later I re-watched all seven Buffy seasons during the summer of 2009. It was almost as good the second time, and I appreciated it more.

Despite a significant cheese factor, and a first season that suffers from being overly episodic, the show is absolutely brilliant. If you aren’t a fan you probably think, “Buffy has these weird obsessive fans, but that kind of thing isn’t for me.”

It is.

I’ve never met anyone who’s sat down and started watching from the beginning who doesn’t absolutely love the show. But that’s just it, you have to start from the beginning. Fundamentally the show blends fantastic writing, really funny dialog, off-beat but likable characters, zany and intricate mythology, a creativity with the TV medium, and quirky humor with a kind of hidden dark realism found in only the best dramas. By disguising drama with humor and the supernatural the writers are able to get at real human issues without freaking out the network, and because they’ve created characters we care about, it all works.

The casting too is inspired. Sarah Michelle Gellar is perfect as Buffy. She may be cute, blonde, and perky, but she isn’t a typical airhead. She combines practical cleverness, toughness, and hidden vulnerability, with a strong sense of duty. Fundamentally the show is about the weight that rests on her narrow shoulders, and what it takes to bear it. The rest of the core team is great too. Alyson Hannigan‘s Willow is every geek’s fantasy, the shy computer nerd who learns to kick ass, Nicholas Brendon‘s Xander provides the token maleness with more humor than testosterone, and Anthony Stewart Head‘s Giles is pitch perfect as the stuffy older advisor with a dark past.

But it’s not just the premise that makes this show rock, but what the writers do with it. I’ll explain when I CONTINUE IN PART 2…

The whole post series [1, 2, 3, 4, 5, 6]