So you want to be a video game programmer? – part 5 – The Method

…CONTINUED from PART 4. Or start at Part 1.

This post presents an algorithm of sorts for learning to program. It applies not only to the fundamentals, but to all aspects, including the acquisition of small component skills. Thirty years after I started, I still follow the same basic procedure. To tell the truth, suitably modified, it works for learning most things.

Step 1: Goal. Invent some manageable goal that excites you (later, in a professional context, “excites” is often replaced or supplemented by “need”). My first program was a text-based dungeon master (see here). If you want to be a video game programmer, there’s nothing better than a game. If it’s one of your first programs, make it damn simple. Copy some REALLY REALLY old and simple game, anything from before 1981 (Pong, Breakout, etc.). Truth be told, using text only for a couple of weeks/months might not be a bad idea. Graphics just complicate matters. They’re awesome, and you’ll need them soon enough, but first the fundamentals: variables, flow of control, scope, etc. Any individual task should take no more than a few days. If your goal is bigger than that, subdivide.

Step 2: Environment. All programming is done in the context of some environment, and you must learn about it. You need to start with a simple one. In my case it was mostly AppleSoft BASIC. For learning, an interpreted language is good. Some decent starter environments today are Python, Ruby, Flash, and Lua. DO NOT START WITH A LANGUAGE LIKE C. I will elaborate on the environment question in a separate full post, as it’s a large topic and a highly religious one for programmers.

Step 3: Research. This means reading. If you don’t like to read, either learn to or find yourself a new career. I’m serious. Reading separates the Neo Cortexes from the gibbering marsupials. Be a Neo Cortex. Your love of reading must be so extreme that you can stomach slogging through 900-page Library Reference Manuals (maybe not at first). Programming is full of details.

Step 4: Theory. Get out a pad of paper, a text file, Evernote, or whatever. Design what you are going to do. Later you might or might not skip this step (and do it in your head), but it’s useful for the beginning programmer. You don’t need to write out the entire program, but you should design your data-structures and modules or functions. If it’s one of your first programs, you’ll hardly HAVE data-structures. You might instead write down the modes and a loose flow chart between them, as in the sketch below.
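For instance, a paper-napkin “design” for a tiny text game might be nothing more than a few named pieces. Here is a minimal sketch in Python (the names and structure are purely illustrative, not a prescription):

```python
# Hypothetical design notes for a tiny text RPG, written as stubs.
# At this stage the point is to name the pieces, not to implement them.

player = {"hp": 10, "gold": 0}   # the "character": a handful of values

def main_loop():                 # mode: wandering around between encounters
    ...

def encounter_orc():             # mode: one self-contained encounter
    ...

def game_over():                 # mode: print the score, offer a restart
    ...
```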

Step 5: Code. Actually try coding your program. This is best done in an iterative way. My advice is generally to start by creating your core data-structures, and then the functions or methods that support them. Test each of these individually. Interpreted languages with a listener are the best because you don’t have to write test suites; you can just test the components as you go at the listener, as in the hypothetical session below. Time spent debugging individual functions and groupings (say, all the methods that belong to a data-structure) pays for itself 100-fold. I still do this. The less code you are testing, the easier it is to spot and find bugs. If you know that your functions are reliable (or semi-reliable) they provide robust building blocks to construct with.
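For example, with Python’s listener (the interactive prompt) you can exercise a freshly written function immediately. A hypothetical session, testing a single “take a hit” helper before anything else exists:

```python
>>> player = {"hp": 10, "gold": 0}
>>> def take_hit(damage):
...     """Subtract damage from HP; return True if the player is still alive."""
...     player["hp"] -= damage
...     return player["hp"] > 0
...
>>> take_hit(3)        # expect True, and hp should drop to 7
True
>>> player["hp"]
7
>>> take_hit(9)        # expect False: hp is now below zero
False
```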

Step 6: Debug. See above in “code,” because the two are heavily intertwined. Coding and debugging happen together in small loops. Again: the less NEW code you have to debug, the better. Debugging is hard for novices. Do not write an entire big program and debug it all at once. If you are using a language that syntax checks, check each function after you have written it. Fix the syntax errors (typos) and then test and debug the single function (or component of the program). Baby steps. Baby steps.

Step 7: Iterate and improve. Just keep adding things to your program to get it to where you want. Add a new feature. Improve an old one. Rip out some system and replace it. Add graphics. Upgrade them. Try to keep each of these changes as small as possible and test after each change. The longer it has been since it ran, the harder it will be to make it run.

_

I cannot emphasize enough how important baby steps are. They are the key to avoiding fatal frustration. I have a law that helps define the size of subtasks: DO NOT EVER LEAVE THE COMPUTER IF YOUR PROGRAM DOES NOT RUN. You can take a piss or stretch. That’s it. I lived by this rule my entire programming career. You can’t always follow it, but try. Get your ass back in that chair. Mom wants you for dinner. Shrug. Your co-workers call you for a meeting. Snarl. I always think of a program like a car engine. You can sometimes merely tune it up, but a lot of times you have to take apart the engine to fix or add something new. That time when the engine is apart (the program does not RUN!) is very important, and should not be very long. If it is, you are not subdividing your tasks enough. I write all sorts of custom code to allow the engine to run again (even if in a half-assed way) while big changes are going on. These intermediate constructs are intended as throw-aways. But they save time. Having your program broken while you pile up more than a couple hours of new, untested code is a recipe for disaster. You could easily reach the point where you have no idea where the problem is. If you test in small bits as you go, debugging is MUCH easier. Bugs are perhaps 80% likely to be in the most recently written stuff. That’s the smoking gun you goto (haha) first.
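To make the “keep the engine running” idea concrete, here is a hedged illustration (not from any real project): suppose you are ripping out a text renderer to replace it with a graphical one. A throwaway stub keeps the rest of the program runnable and testable in the meantime:

```python
# Throw-away scaffolding: the old renderer is gone and the new graphical one
# isn't written yet, but the game still has to RUN so everything else can be
# exercised and debugged.
def draw_room(room):
    # TODO: replace with real graphics later.
    print("[placeholder] You are in:", room["name"])
    for item in room.get("items", []):
        print("  You see a", item)
```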

You can do a lot with ASCII graphics!

A starter example of this whole process: my first game was a text-based D&D-type RPG. I wanted to include a number of “cool” (to a 10-year-old) encounters. So I structured it as follows. There was the “character.” This was just a number of global variables (this was long before object-oriented programming) like G (gold), HP (hit points), etc. I wrote a couple of “methods” (functions, though they didn’t have names in BASIC, just line numbers) like “take a hit.” This subtracts from HP, and if HP <= 0 it branches to the “you are dead” part of the code (not really a function in those days). Then I wrote a number of “encounters.” These were the main flow of control in that program. It popped from encounter to encounter. One might go like this: You have met an orc. Draw the orc on screen with text graphics (aka print statements). Present options: “attack,” “run,” “use magic,” etc. Wait for input and apply logic. If the player is still alive, send them back to the main navigation loop (the place that doesn’t have a particular encounter).
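In modern terms (and in Python rather than line-numbered BASIC) that structure might look something like the sketch below. This is not the original program, just a rough translation of the idea:

```python
import random

player = {"hp": 10, "gold": 0}        # the "character": a few global values

def take_hit(damage):
    """Subtract from HP; at 0 or below, branch to the 'you are dead' code."""
    player["hp"] -= damage
    if player["hp"] <= 0:
        print("You are dead.")
        raise SystemExit

def encounter_orc():
    print("You have met an orc.")     # "graphics" are just print statements
    choice = input("attack, run, or use magic? ")
    if choice == "attack":
        if random.random() < 0.5:
            print("You slay the orc and loot 5 gold!")
            player["gold"] += 5
        else:
            print("The orc hits you.")
            take_hit(3)
    elif choice == "run":
        print("You flee back the way you came.")
    else:
        print("A flash of light! The orc vanishes.")

def main_loop():
    """The place between encounters: wander until something happens."""
    while True:
        input("You walk down a dark corridor... (press enter) ")
        encounter_orc()               # a real game would pick among many encounters

main_loop()
```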

That’s it. I expanded the program by doing things like: adding more encounters; adding resurrection as a pay option when you died; adding an actual map to the main loop; moving the “combat” logic from individual encounters into a function; then adding character attributes like strength and dexterity, which influenced combat; beefing up character creation. Etc., etc. These are all tasks that can individually be accomplished in a few hours. This is key. It keeps your program running most of the time. It provides good feedback on what you are doing.
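As one concrete illustration of the “move combat into a function” step (again just a sketch building on the one above, not the original code): once every encounter calls a single shared routine, new attributes like strength only have to be wired in one place.

```python
def fight(monster, monster_hp, monster_damage):
    """Shared combat logic: trade blows until the monster (or the player) drops."""
    while monster_hp > 0:
        monster_hp -= 1 + player.get("strength", 0)   # player's blow
        if monster_hp > 0:
            take_hit(monster_damage)                   # monster's blow
    print("The", monster, "is defeated!")

def encounter_orc():
    print("You have met an orc.")
    fight("orc", monster_hp=5, monster_damage=2)

def encounter_troll():
    print("A troll blocks the bridge.")
    fight("troll", monster_hp=12, monster_damage=4)
```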

The entire above “goal” -> “debug” loop can be repeated endlessly. Example: “add a save game.” You now have to save and restore the state of your player (various global variables). But to where? Disk, presumably, in those days. So you crack the BASIC manual and read about file I/O. First you go simple: there is one save game, and it’s always named “adv.sav”. You write a function to open the file and write the vars into it. You examine the file to make sure it put them there the way you wanted. You write another function to read the file. You add options to the game menu to call these functions. Then test.
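A minimal sketch of that first version in Python terms (one hard-wired filename, the handful of globals from the example above; the file format is just whatever is easiest to eyeball in a text editor):

```python
SAVE_FILE = "adv.sav"                 # step one: exactly one save slot

def save_game():
    with open(SAVE_FILE, "w") as f:
        f.write(f"{player['hp']} {player['gold']}\n")

def load_game():
    with open(SAVE_FILE) as f:
        hp, gold = f.read().split()
        player["hp"], player["gold"] = int(hp), int(gold)
```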

Next baby step: allow multiple save games. You add “filename” (or save slot or whatever) to the load/save functions. You hardwire it to something and test again. Then you add an interface to the game’s main menu to specify which slot. You test that.
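The same two functions after that baby step, sketched the same way (the slot-to-filename scheme is made up for illustration):

```python
def save_game(slot=1):
    with open(f"adv{slot}.sav", "w") as f:
        f.write(f"{player['hp']} {player['gold']}\n")

def load_game(slot=1):
    with open(f"adv{slot}.sav") as f:
        hp, gold = f.read().split()
        player["hp"], player["gold"] = int(hp), int(gold)

# First hardwire the slot and test (save_game(1); load_game(1)),
# then add the menu interface that asks the player which slot to use.
```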

Iteration is king! Good luck.

_

Parts of this series are: [Why, The Specs, Getting Started, School, Method]

Subscribe to the blog (on the right).

Or more posts on video gaming here.

And what I’m up to now here.

So you want to be a video game programmer? – part 4 – School

…CONTINUED from PART 3. Or start at Part 1.

There are two basic approaches: home training and school. Personally I’d recommend both.

Let’s talk school. In my day (the 1980s) pre-collegiate computer classes barely existed, and where they did they were mostly about Pascal programming and data-structures. They often used UCSD p-System Pascal, which compiled to p-code run on a virtual machine: an old-school predecessor to Java!

College Computer Science programs followed (and I imagine they still do) a traditional regimen of stuff like Algorithms, Data-structures, Architecture, Compilers, AI, Theory of Computation etc. They rarely taught or emphasized programming itself. Personally, while I have this training myself (several years at the M.I.T. AI Lab working toward my PhD) I got it long after teaching myself to program and after having 5 published video games on the market.

What I was taught at M.I.T. (1992-94) was way too theoretical to make a good starting place for a young programmer. Don’t get me wrong, I learned a tremendous amount there and it really upped my game. But it was best digested in light of several years practical experience. So I don’t personally think that traditional CS is the way to start. But if you are really serious about computers it is a very solid choice for your higher education. You just need to be ready for it.

And here is the dirty secret about the University Education system: it’s made up of classes. Yep. Your four (or more) year educational experience will just be the summation of eight semesters’ worth of classes, usually 4-5 per semester. The exact order of these, which topics, and how they are taught will be at the whim of all sorts of varied factors: scheduling, major and general requirements, teacher sabbaticals, friends, personal choice, etc. The school itself will have broad requirements (like you must have 3 science and 2 history classes). Your major/department will have more specific ones (like 14 classes in the major, including 7 of the 10 “core” classes as defined by the department). So everyone’s education is different. That can be a good thing, but it’s less coherent.

And even within a particular class type, like say: Computer Architecture, the classes vary wildly and are rarely designed to work with each other or be taken in a particular order. The school and department might have determined that it should have a Computer Architecture class, but each teacher is free (somewhat) to determine the specific content and style of his or her class. Teachers vary wildly in teaching ability. I mean WILDLY! Even at the best schools. In fact, the teaching quality at M.I.T. was considerably lower than at my undergraduate school, Haverford College. It’s not that the M.I.T. professors weren’t as smart — they were plenty brilliant — but they leaned more toward being famous researchers while Haverford selected people who excelled first and foremost at undergraduate education.

In any case, even within a particular major, say Computer Science, the slate of courses you take might not form a coherent picture. There isn’t much effort made to ensure this. It’s more like, “we need a Compilers course, who wants to teach it?” and then that professor goes off and builds their plan. I’m sure there are constraints and feedback, but being part of a single coherent program doesn’t seem to be one of them. And teacher style heavily influences the experience. Now, don’t get me wrong, many of these courses are really good. But they require that you, the student, do a lot of the work of integrating the bigger picture. Which really, for first-rate minds trying to absorb advanced modalities of thought, is totally fine. It’s just not the same as learning a complex practical field like programming.

But let me speak briefly about the classic topics:

Theory of Computation – Is the cool (but highly esoteric) field of math that endeavors to prove things about what can and cannot be computed. It includes a lot of discussion of theoretical computers like the Turing Machine and of which computational problems are equivalently complex. This is actually very useful, but only if you have already encountered practical programming tasks. Otherwise it will probably just confuse the bejesus out of you.

Compilers – Is about writing compilers, and how computational semantics are transformed. This is bordering on totally useless for the novice programmer. I myself found it fascinating, but by then I had written several compilers. Again, you want to study this several years into your career.

Algorithms – Is the formal study of different methods of problem solving. This is where stuff like the difference between a bubble sort and an insertion sort goes. Every programmer should know the basic algorithms, but you can read a beginner’s book fairly early in the learning process and pick up the basics. The college version is much more rigorous. But in the early stages you can lean on libraries which encapsulate these solutions, as in the sketch below.
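For a flavor of what the college version formalizes, versus what the library hands you for free, here is a small Python illustration (not from any particular textbook):

```python
def insertion_sort(items):
    """Classic O(n^2) sort; simple, and fast on nearly-sorted data."""
    for i in range(1, len(items)):
        value = items[i]
        j = i - 1
        while j >= 0 and items[j] > value:
            items[j + 1] = items[j]    # shift larger elements right
            j -= 1
        items[j + 1] = value
    return items

# Early on you can simply lean on the library, which encapsulates a good
# general-purpose O(n log n) algorithm:
print(insertion_sort([31, 4, 17, 8]))   # [4, 8, 17, 31]
print(sorted([31, 4, 17, 8]))           # [4, 8, 17, 31]
```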

Data-structures – These relate closely to Algorithms, but are methods for actually storing data in computers. Different data-structures lend themselves better to different algorithms. The mistake a purely academic approach makes is thinking they make much sense without some practical knowledge of the kinds of things you do in normal computing. Still, Algorithms and Data-structures are essential at all levels of programming beyond the totally trivial, and they are the most practical of the classic topics.
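A tiny illustration of how the data-structure choice and the algorithm go hand in hand (Python, with made-up data): finding a player by name in a list means scanning every entry, while a dictionary (hash table) looks it up directly.

```python
players = [("alice", 10), ("bob", 7), ("carol", 12)]

# List: find bob by scanning -- O(n) per lookup.
bob_hp = next(hp for name, hp in players if name == "bob")

# Dict (hash table): direct lookup -- O(1) on average.
players_by_name = dict(players)
bob_hp = players_by_name["bob"]
print(bob_hp)
```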

AI (Artificial Intelligence) – Can be extremely useful to the game programmer. Games, after all, need enemies that appear intelligent, and in addition have to solve all sorts of big computational problems which use AI techniques (like moving the camera around etc.). But as taught in school it’s pretty theoretical and you need at least a couple years of practical skills first.

Architecture – Is the study of computer hardware, usually micro-processors. A lot of people hate this topic, not being hardware guys. And although you can learn this anytime, you really should learn it at some point. It’s impossible to be a truly great programmer without knowing something about the hardware that makes it all happen. If you are into compilers, this is even more true. I personally loved these classes.

I also want to mention the subject of Programming Languages. Most schools view the choice of specific programming language as a fairly “academic” question (rightly or not). In the above classes, advanced CS students learn that all normal computer languages are “Turing complete” and therefore equivalent to each other: any program in one could be converted to a program in another by automated means (this is what compilers do). Languages all have the same basic features, and if one is missing a feature you can build it within the language itself. So who cares which one you use?

This makes a certain academic sense, but in practice the choice of programming language is vital. And the budding programmer should be introduced to a wide variety of them at a steady yet not overwhelming pace, so that they learn the fundamentals common to all and do not become one of those lame-ass programmers who are afraid to learn a new programming language. I can be programming in any new language in one day, proficient in a week, expert in a month, master in six. It’s just not that hard.

Schools often have a particular language that they favor. These days it might be Java. In my era it was either Pascal or C. For many schools it’s still probably C/C++. At M.I.T. it was Scheme/Common Lisp! But often professors are also free to just teach a class in a different language. I had an undergraduate AI class all in Prolog. For the gifted student this is a good thing, having a whole class in a new language, as it’s a decent enough immersion to actually learn a new mode of thought (the Prolog class substantially improved my programming even though I’ve not used Prolog since). But some professors will try the new-language-for-every-assignment approach, which is a terrible idea, as there isn’t enough time or depth to master anything, and so the whole assignment becomes about learning the minimum needed to get it done. The net-net is that there is rarely a coherent plan to get you programming and then have you learn a wide range of practical languages. In such a plan, you might start with an easier interpreted language like Python, then be taught to master four or five others that are both practical and varied (say C/C++, Java, JavaScript, at least one assembly, and a “fancy” one or two like Ruby/LISP/Scheme/Prolog/Smalltalk). That doesn’t usually happen. You might get lots of Java and a smattering of 10 others.

College professors also don’t usually think that classes that directly and specifically teach programming languages and practical programming are very cool. There is no research or terribly theoretical aspect to them; i.e., the subject isn’t very academic. Professors are rarely themselves very good programmers (if they were, they’d be off working for Google or whatnot 🙂 but seriously, the personality types for “programmer” and “professor” are different, albeit both bookish). This leads to professors rarely adding this kind of class to the curriculum unless someone makes them.

_

Having heard about all these more practical Gaming majors that colleges now have, but which I know nothing about (they didn’t exist 20 years ago), I asked a friend of mine who just finished her CS degree yesterday! Lauren is a fellow blogger, programmer, WOW fan, and budding game designer-programmer. Big congratulations! Her comments are in blue:

Having just completed my degree yesterday, I can confirm that not as much has changed in Computer Science education as one might expect, especially given the exponential growth of the field. Aside from the specific languages taught, which for me was mainly Java instead of Pascal, the curriculum is much the same. The breadth of languages taught is still very much dependent on what you choose to seek out yourself; were it not for honors opportunities or research, I never would have become as familiar as I am with functional programming or the MVC architecture.

After the first two years, programming takes a back seat to theory; upper division classes, while useful and offering a degree of specialization, can be light on actual coding. There are still opportunities to improve your skills, though. Project classes, at least at my school, offer a chance to really show your programming chops, so to speak; with the exception of one I personally considered, all required the completion of extensive coding projects in ten weeks or less to the exclusion of lecture material.

The biggest factor that affected the quality and extent of the education I received was the professors. Sometimes, you will get a truly horrible lecturer, someone who isn’t fair or just doesn’t care. For me, this happened more often than not. The best advice I can give is: Be able to teach yourself. To be honest, I didn’t bother attending classes where the professor was incapable of teaching — I don’t want to waste my time. I went home and read the textbook, or taught myself using tutorials or information online.

“Bad” classes will happen, and the most important thing I learned in college, or even before, is that you need to take active control over your education. Even if the teacher sucks, you can’t blame a failing grade on him; you have the power to learn the material and should do so to the best of your ability. This isn’t to say that poor professor performance doesn’t raise my hackles (it does, a lot), just that self-directed learning is a necessity for succeeding as a student and a programmer, especially since the number of future employers that will accept “The teacher sucked!” as an excuse for a failing grade must be pretty small.

Even if you’re taking the so-called “structured” or “formal” education path, no one will hold your hand. You need to look out for yourself, and find opportunities to broaden your knowledge. I learned firsthand that often these opportunities will not be supplied to you, or even pointed out. You need to be responsible for your own education, especially at large universities. Self-directed study and college are not mutually exclusive.

In that spirit, in addition to my CS degree, I also took a Concentration in Game Culture and Design. This was an interdisciplinary program in conjunction with the art school, which did add a nice game “focus” to my studies. I think these types of programs can be helpful, though to say this improved my coding skills would be more than a stretch. Mostly, it gave me a bit more insight into the game pipeline, and the scale of the work that goes into making a game. I’ve gained some skills which I otherwise wouldn’t have been exposed to; for instance, I’m now comfortable finding my way around game design docs and I’ve had practice giving pitches.

Creativity is not a value traditionally espoused as part of a CS education, but some gaming or art courses can help with it. I can’t speak for the more technical games programs out there, but I think there is merit in learning a bit about the industry even before leaving school.

This fresher opinion confirmed my belief that no school can be as rigorous as GOOD self-training like I gave myself, and that under no circumstances should you wait until you’re 18 to start (unless you already are!).

The basic message: start as early as you can, preferably at age 8-12.

Given that college is roughly ages 18-22, and adds a lot of value to an education begun at home, it can actually dovetail perfectly with said self-education. This will be the topic of a later post in this series.

CONTINUED HERE with The Method!

_

Parts of this series are: [Why, The Specs, Getting Started, School, Method]

Subscribe to the blog (on the right).

Read more from guest blogger Lauren here.

Or more posts on video gaming here.

And what I’m up to now here.