Website Upgrades Coming

Since I’m waiting for both my line edits on Untimed and my proofreading on The Darkening Dream, I’m researching website construction. I have to morph or upgrade this blog into a genuine author website (or supplement it with one), and I can’t bring myself to hire someone to do it, given that I’ve written far bigger and more complex websites and apps. It would just annoy me to no end not to be able to control it all myself. But at the same time, old-school coding the whole thing manually (I’d probably use Ruby on Rails for that) is overkill and probably too much work.

So I read this Professional WordPress book yesterday and today to see if it would be reasonable to just extend WordPress. I think it is. A few plugins and some custom theme programming will probably do the trick. The problem is that I host on wordpress.com, and they don’t allow you to install extra plugins (they have some installed by default) or add any PHP code. So I’ll have to migrate to a self-hosted server. Maybe Media Temple VPS? Rackspace? Research. Research. Research. Anyone have any suggestions/experience with good hosting platforms?

And I have to teach myself PHP, so I grabbed the bird book. PHP is one of those popular but slightly icky languages, like Perl and Java, that I’ve never been very partial to. It’s like Ruby, but 100x uglier and more primitive. I am a LISP (and Ruby) guy, after all. Oh well. This is easy-peasy programming, so I’ll just suck it up. Using something like WordPress will make maintaining the site much easier, as it’s chock-full of content management features. If I program it myself I have to code everything manually, which really isn’t very efficient.

I even wonder if one of these newer WordPress themes/frameworks like PageLines Platform isn’t a good idea. Anyone use one?

More to come as I get into it.

Lispings à la John McCarthy

Yesterday, John McCarthy, the inventor of the LISP programming language, passed away. It’s been a bad month for computer guys. First Steve Jobs, then Dennis Ritchie, creator of C, and now McCarthy.

LISP was one of my early great loves among programming languages. I learned it first in college (prepping for an AI course). Perhaps it was the intensely well-written “LISP Bible,” Guy Steele’s Common LISP: The Language, or Paul Graham’s mind-opening On Lisp. Then at MIT my advisors were Patrick Winston (author of LISP) and Rod Brooks. No surprise I took to it. From Paul Graham I learned that programming languages didn’t need to be static, but could become fluid languages shaped to the specific domain of your task. From Rod I learned to roll my own.

For nearly two decades I was a diehard LISP advocate. I even forced all my programmers to code three Crash Bandicoot and four Jak & Daxter games in custom LISP dialects that I wrote the compilers for. This started with our fighting game Way of the Warrior (Rod Brooks voices a character in the game), where I used MacCL to build a compiler from LISP-syntax state machines to p-code. Then for Crash Bandicoot came a much more elaborate language for coding all the gameplay objects, called GOOL. For Jak & Daxter I went full-on crazy and wrote a native compiler for a full-featured, object-oriented Scheme dialect called GOAL. We wrote 98% of four Jak & Daxter games in it, including the vector unit assembly.

One of the interesting things about LISP is that it’s actually a pretty easy language to parse, interpret, and compile. This isn’t an accident: the S-expression syntax was initially chosen for its machine regularity (in those early days of underpowered mainframes). Newer languages are syntactically much more complicated. Ironically, most normal programmers, being human, seem to find the more complicated syntax easier and the “simple” S-expression syntax confusing (it runs backward much of the time relative to normal human convention). I always found it unambiguous, but go figure. It’s also precisely this regularity that makes the awesome macrology of LISP possible and has allowed the language to remain relevant despite its advanced age.
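To make the “easy to parse” point concrete, here’s a toy sketch (in Ruby rather than LISP, to keep the code examples on this blog in one language; every name is invented for illustration). A complete tokenizer, parser, and evaluator for arithmetic S-expressions fits in a couple dozen lines, precisely because the syntax is nothing but nested lists:

```ruby
# Toy S-expression evaluator: a sketch of why LISP's regular syntax
# is so easy to parse and interpret. Illustrative names only.
def tokenize(src)
  src.gsub('(', ' ( ').gsub(')', ' ) ').split
end

# Parse tokens into nested Ruby arrays: code as data.
def parse(tokens)
  token = tokens.shift
  if token == '('
    list = []
    list << parse(tokens) until tokens.first == ')'
    tokens.shift # drop the ')'
    list
  else
    token =~ /\A-?\d+\z/ ? token.to_i : token.to_sym
  end
end

# The head of each list is the operator; the rest are arguments.
def evaluate(expr)
  return expr if expr.is_a?(Integer)
  op, *args = expr
  args.map { |a| evaluate(a) }.reduce(op)
end

puts evaluate(parse(tokenize('(+ 1 2 (* 3 4))'))) # => 15
```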

But by the mid-2000s I had started doing the kind of programming I used to do in LISP in Ruby. It’s not that Ruby is a better language (although it is a good one); mostly it was the momentum factor and the availability of modern libraries for interfacing with the vast array of services out there. Using the crappy, unreliable, or outdated LISP libraries — if they worked at all — was tedious. Plus, the LISP implementations were so outmoded. It was very hard to get other programmers (except a couple of enthusiasts) to work that way. And ugh, those old CMUCL and ACL garbage collection algorithms were (at least when I last used them in 2006) so awful. In ACL I’d get these Lisp Machine-like multi-hour GCs.

Ruby had a great book (I put big stock in that) and struck a decent compromise. Its type system and object model are better (or at least more modern) than Common LISP’s anyway. The syntax is more inconsistent, and the macro model nowhere near as good. In Ruby you can manually build up strings and feed them into the interpreter, which is equivalent to simple backquote. But you can’t easily do the kind of cool nested constructions that are trivial in LISP.
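A minimal sketch of that difference, in hypothetical Ruby (class and method names invented):

```ruby
# Building code as a string and feeding it to the interpreter:
# roughly the moral equivalent of simple LISP backquote-plus-eval.
class Point
  [:x, :y].each do |field|
    class_eval "def #{field}; @#{field}; end" # string splicing
  end
end
p Point.instance_methods(false) # => [:x, :y]

# The LISP analog manipulates nested *structure* rather than text,
# e.g. something like:
#   `(defmethod ,field ((p point)) (slot-value p ',field))
# and such templates nest arbitrarily, which is exactly the part
# that string eval makes painful.
```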

But it turns out libraries and implementation matter a lot. Momentum too. Ruby has momentum, and people supporting it who aren’t older than me (and I’m not a young programmer anymore; I started in 1980!). Still, you can feel lots and lots of LISP influence in all the new runtime-typed languages (Ruby, Python, etc.). And 30 years later, listeners still rule! Using a language without a listener is like walking without legs. I pity the C/C++/Java-only programmer.

For more info on my video game career, click here.

For what I’m up to now, click here.


Foodie Photography 101

As is fairly obvious from my umpteen food reviews, I take a lot of pictures of food. For such a static target, the plate isn’t so easy to photograph, for a number of reasons. Mostly it comes down to light and distance. Restaurants are often dark, and food is fairly small and right in front of you. The distance factor throws it into the realm of macro photography (subjects at very near distances).

I use three different cameras. I’ll go over them all here, in ascending order of size, weight, and quality. As a general rule of thumb, the bigger and more expensive a camera is, the better the pictures. It’s also worth noting that all the food photos below were processed in Adobe Lightroom and are not “as shot.” I’ll discuss this at the bottom of the post.

The cellphone camera is ubiquitous these days, but for me it’s only an option of last resort.


This sushi pic is about as decent a shot as a good cellphone camera (an iPhone 4) will take, and even with post-processing, that isn’t very good.


And in a dark restaurant, you’re stuck with these hideous flash shots. The flash on these tiny cameras has a range of about a foot and an ugly falloff. If you have to use the cellphone, try to hold it very steady, and hope it’s lunchtime and the window is behind you (keep the light behind the lens).


Next up, and pretty acceptable, is a snapshot camera that is good at macro photography. I use a Canon S90. This model is older and has been replaced by the S95 and S100. Any of the three is good; the newer ones are better. They are among the only small cameras that shoot in RAW mode and focus well at short distances. The S90 is small enough to pocket, and I use it for casual meals.


A typical flash shot from the S90. It’s not bad. The camera has a small aperture and hence a very large depth of field which makes for easy focusing (but a flat look). It’s very useful to zoom the camera in and pull physically back so the flash doesn’t get too close to the food and easily blow out the image (overexpose).


My third camera is my “real” camera, the amazing Canon EOS 5D Mark II. But any Canon or Nikon DSLR should do fairly well. While the DSLR is much larger and heavier, it takes a MUCH better picture. Not only is the resolution higher, but it handles low light far better. Still, shooting food with an SLR isn’t easy.


This is a typical bad result. A normal lens can’t focus on something less than about two feet away, so you have to step way back. Without a flash (and a normal flash doesn’t work well on food) you can easily end up with a soft image like the one above.


The solution to this distance problem is a macro lens. I usually use the Canon EF 50mm f/2.5 Compact Macro. This is a very sharp prime lens (on a full-frame camera 50mm is good for food; it might be even better on the more common crop-sensor cameras, though you might have to pull back a bit). This lens is even cheap for a macro at $284, as many of Canon’s macro lenses are two to four times that. Its only problem is the non-USM focusing, which is slow as a dog. Food, fortunately, doesn’t move.


But in a very dark restaurant (and despite the appearance of this color- and exposure-corrected photo, Pizzeria Mozza is very dark) one ends up at f/2.5 and a high ISO. Combine that with the very short distance to the plate and you get an incredibly small depth of field. Hence: crust in focus, pizza blurry.


This and the light problem are nicely solved by the bulky Canon MR-14EX Macro Ring Lite. Ideally you’d want a light box (a big soft glowing box), but that’s not practical in restaurants :-). The white light from the flash ring is less directed than a regular flash (which will also do in a pinch).


It makes for a honking big rig, but with the macro lens and TTL flash exposure adjustment it takes great close-up pictures in a pitch-black room (the flash can be used as a focus light too).


Hence this lovely photo, with just enough depth of field to give the dish some character and depth. Still, you have to watch the distance and f-stop, even with the flash, but you don’t have to pump the ISO up as high as without it.

Which finally brings me to Lightroom. A significant discussion of post-processing is outside the scope of this post. Photoshop and many other products let you clean up your images, but none do it as easily and quickly as Lightroom. Going through a 60-photo meal can be tedious, but with Lightroom you can do a decent job in five minutes, then quickly batch upload via a vast array of plugins.


To give you an idea of how important this is, check out this image straight out of the camera, taken with the 5D and the macro lens but no flash.


With three light clicks I fix the white balance and the exposure and correct for lens aberrations. I see so many food bloggers uploading dim orange photos. There’s just no need.

Hopefully this quick little tutorial helps you get the most out of your food photos. Even if you don’t have a big fancy camera, the trick of pulling back and zooming in with a snapshot flash helps both exposure and dealing with the “too close to focus” problem.

Find all of my food reviews here.

Steve Jobs is Dead :-(

The creator of so many of our toys and tools has passed away. He may have been controversial, kooky, and perhaps a pain to work for, but he was undeniably brilliant and touched all our lives.

Probably his fundamental product genius was the ability to always look at what the product could do better for the user, regardless of how previous products did it, internal momentum, or how difficult it might be to achieve.

My Apple based “origin story” as a programmer and computer guy is here.

More newsy info here.

No one wants to die. Even people who want to go to heaven don’t want to die to get there. And yet death is the destination we all share. No one has ever escaped it. And that is as it should be, because Death is very likely the single best invention of Life. It is Life’s change agent. It clears out the old to make way for the new. Right now the new is you, but someday not too long from now, you will gradually become the old and be cleared away. Sorry to be so dramatic, but it is quite true.

Your time is limited, so don’t waste it living someone else’s life. Don’t be trapped by dogma — which is living with the results of other people’s thinking. Don’t let the noise of others’ opinions drown out your own inner voice. And most important, have the courage to follow your heart and intuition. They somehow already know what you truly want to become. Everything else is secondary.

Steve Jobs, Stanford commencement speech, June 2005

/sigh

So you want to be a video game programmer? – part 5 – The Method

…CONTINUED from PART 4. Or start at Part 1.

This post presents an algorithm of sorts for learning to program. It applies not only to the fundamentals but to all aspects of the craft, including the acquisition of small component skills. Thirty years after learning, I still follow the same basic procedure. To tell the truth, suitably modified, it works for learning most things.

Step 1: Goal. Invent some manageable goal that excites you (later, in a professional context, “excites” is often replaced or supplemented by “need”). My first program was a text-based dungeon master (see here). If you want to be a video game programmer, there’s nothing better than a game. If it’s one of your first programs, make it damn simple. Copy some REALLY REALLY old and simple game, anything from before 1981 (Pong, Breakout, etc.). Truth be told, using text only for a couple of weeks/months might not be a bad idea. Graphics just complicate matters. They’re awesome — and you’ll need them soon enough — but first the fundamentals, like variables, flow of control, scope, etc. Any individual task should take no more than a few days. If your goal is bigger than that, subdivide.

Step 2: Environment. All programming is done in the context of some environment, and you must learn about it. You need to start with a simple one. In my case it was mostly AppleSoft BASIC. For learning, interpreted languages are good. Some decent starter environments today are Python, Ruby, Flash, and Lua. DO NOT START WITH A LANGUAGE LIKE C. I will elaborate on this environment question in a separate full post, as it’s a large topic and highly religious for programmers.

Step 3: Research. This means reading. If you don’t like to read, either learn to or find yourself a new career. I’m serious. Reading separates the Neo Cortexes from the gibbering marsupials. Be a Neo Cortex. Your love of reading must be so extreme that you can stomach slogging through 900-page library reference manuals (maybe not at first). Programming is full of details.

Step 4: Theory. Get out a pad of paper, a text file, Evernote, or whatever. Design what you are going to do. Later, you might or might not skip this step (and do it in your head), but it’s useful for the beginning programmer. You don’t need to write out the entire program, but you should design your data-structures and modules or functions. If it’s one of your first programs, you should hardly HAVE data-structures. You might instead write down the modes and loose flow chart between them.

Step 5: Code. Actually try coding your program. This is best done in an iterative way. My advice is generally to start by creating your core data-structures, and then the functions or methods that support them. Test each of these individually; there’s a small sketch of this workflow just after this list. Interpreted languages with a listener are the best because you don’t have to write test suites; you can just test the components as you go at the listener. Time spent debugging individual functions and groupings (say, all the methods that belong to a data-structure) pays for itself 100-fold. I still do this. The less code you are testing, the easier it is to spot and find bugs. If you know that your functions are reliable (or semi-reliable) they provide robust building blocks to construct with.

Step 6: Debug. See Step 5 above, because the two are heavily intertwined. Coding and debugging happen together in small loops. Again: the less NEW code you have to debug, the better. Debugging is hard for novices. Do not write an entire big program and debug it all at once. If you are using a language that syntax checks, check each function after you have written it. Fix the syntax errors (typos) and then test and debug the single function (or component of a program). Baby steps. Baby steps.

Step 7: Iterate and improve. Just keep adding things to your program to get it to where you want. Add a new feature. Improve an old one. Rip out some system and replace it. Add graphics. Upgrade them. Try to keep each of these changes as small as possible and test after each change. The longer it has been since it ran, the harder it will be to make it run.
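A tiny illustration of the Step 5 workflow, as a hedged Ruby sketch (the example itself is invented): write one small building block, then exercise it at the listener before anything depends on it.

```ruby
# rpg.rb -- build one small piece at a time
Item = Struct.new(:name, :weight)

# The kind of tiny, individually testable function Step 5 describes.
def total_weight(items)
  items.sum(&:weight)
end

# Then poke at it immediately in the listener (irb):
#
#   >> require_relative 'rpg'
#   >> total_weight([Item.new('sword', 6), Item.new('rope', 2)])
#   => 8
```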

_

I cannot emphasize enough how important baby steps are. They are the key to avoiding fatal frustration. I have a law that helps define the size of subtasks: DO NOT EVER LEAVE THE COMPUTER IF YOUR PROGRAM DOES NOT RUN. You can take a piss or stretch. That’s it. I lived by this rule my entire programming career. You can’t always follow it, but try. Get your ass back in that chair. Mom wants you for dinner. Shrug. Your co-workers call you for a meeting. Snarl. I always think of a program like a car engine. You can sometimes merely tune it up, but a lot of times you have to take apart the engine to fix or add something new. That time when the engine is apart (the program does not RUN!) is very important, and should not be very long. If it is, you are not subdividing your tasks enough. I write all sorts of custom code to allow the engine to run again (even if in a half-assed way) while big changes are going on. These intermediate constructs are intended as throwaways, but they save time. Having your program broken while you write more than a couple of hours of untested new code is a recipe for disaster. You can easily reach the point where you have no idea where the problem is. If you test in small bits as you go, debugging is MUCH easier. Bugs are perhaps 80% likely to be in the most recently written stuff; that’s the smoking gun you goto (haha) first.

You can do a lot with ASCII graphics!

A starter example of this whole process: My first game was a text-based D&D-style RPG. I wanted to include a number of “cool” (to a 10-year-old) encounters. So I structured it as follows. There was the “character.” This was just a number of global variables (this was long before object-oriented programming) like G (gold), HP (hit points), etc. I wrote a couple of “methods” (functions – but they didn’t have names in BASIC, just line numbers) like “takes a hit.” This subtracts from HP, and if HP <= 0 branches to the “you are dead” part of the code (not really a function in those days). Then I wrote a number of “encounters.” These were the main flow of control in that program. It popped from encounter to encounter. One might go like this: You have met an orc. Draw the orc on screen with text graphics (aka print statements). Present options: “attack,” “run,” “use magic,” etc. Wait for input and apply the logic. If you are still alive, send the player back to the main navigation loop (the place that doesn’t have a particular encounter).
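The BASIC original is long gone, but the structure described above maps onto something like this loose modern sketch (Ruby; names and numbers are invented for flavor):

```ruby
# The "character": just globals (this was long before OOP).
$gold = 0
$hp   = 10

# "Takes a hit": subtract from HP; branch to "you are dead" if <= 0.
def take_hit(damage)
  $hp -= damage
  if $hp <= 0
    puts 'You are dead.'
    exit
  end
end

# One encounter: draw with print statements, present options, apply logic.
def orc_encounter
  puts 'You have met an orc!  (1) attack  (2) run'
  if gets.to_i == 1
    if rand < 0.5
      puts 'You slay the orc and loot 5 gold.'
      $gold += 5
    else
      take_hit(3)
    end
  else
    puts 'You flee back to the corridor.'
  end
end

# The main flow of control just pops from encounter to encounter.
loop { orc_encounter }
```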

That’s it. I expanded the program by doing things like: adding more encounters. Adding resurrection as a pay option when you died. Adding an actual map to the main loop. Moving the “combat” logic from individual encounters into a function. Then adding character attributes like strength and dexterity, which influenced combat. Beefing up character creation. Etc., etc. These are all tasks that can individually be accomplished in a few hours. This is key. It keeps your program running most of the time, and it provides good feedback on what you are doing.

The entire “goal” -> “debug” loop above can be repeated endlessly. Example: “add a save game.” You now have to save and restore the state of your player (various global variables). But to where? To disk, presumably, in those days. So you crack the BASIC manual and read about file I/O. First you go simple. There is one save game. It’s always named “adv.sav.” You write a function to open the file and write the vars into it. You examine the file to make sure it put them there the way you wanted. You write another function to read the file. You add options to the game menu to call these functions. Then test.

Next baby step: allow multiple save games. You add “filename” (or save slot or whatever) to the load/save functions. You hardwire it to something and test again. Then you add an interface to the game’s main menu to specify which slot. You test that.
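In a modern language those two baby steps might look like this (a hypothetical sketch; the original would have been BASIC file I/O, not JSON):

```ruby
require 'json'

# Baby step one: a single hardwired save file ("adv.sav").
# Baby step two: the same two functions grow a slot parameter.
def save_game(slot = 'adv')
  state = { gold: $gold, hp: $hp } # the player's global variables
  File.write("#{slot}.sav", JSON.dump(state))
end

def load_game(slot = 'adv')
  state = JSON.parse(File.read("#{slot}.sav"))
  $gold = state['gold']
  $hp   = state['hp']
end

# Hardwire it, test it, wire it into the menu, then test again.
save_game('slot1')
load_game('slot1')
```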

Iteration is king! Good luck.

_

Parts of this series are: [Why, The Specs, Getting Started, School, Method]

Subscribe to the blog (on the right), or follow me at:

Andy:  or blog

Or more posts on video gaming here.

And what I’m up to now here.

So you want to be a video game programmer? – part 4 – School

…CONTINUED from PART 3. Or start at Part 1.

There are two basic approaches: home training and school. Personally I’d recommend both.

Let’s talk school. In my day (the 1980s) pre-collegiate computer classes barely existed, and where they did they were mostly about Pascal programming and data-structures. They often used p-System Pascal, an old-school predecessor of Java’s bytecode approach!

College Computer Science programs followed (and I imagine they still do) a traditional regimen of stuff like Algorithms, Data-structures, Architecture, Compilers, AI, Theory of Computation etc. They rarely taught or emphasized programming itself. Personally, while I have this training myself (several years at the M.I.T. AI Lab working toward my PhD) I got it long after teaching myself to program and after having 5 published video games on the market.

What I was taught at M.I.T. (1992-94) was way too theoretical to make a good starting place for a young programmer. Don’t get me wrong, I learned a tremendous amount there and it really upped my game. But it was best digested in light of several years practical experience. So I don’t personally think that traditional CS is the way to start. But if you are really serious about computers it is a very solid choice for your higher education. You just need to be ready for it.

And here is the dirty secret about the university education system: it’s made up of classes. Yep. Your four (or more) year educational experience will just be the summation of eight semesters’ worth of classes, usually 4-5 per semester. The exact order of these, which topics, and how they are taught will be at the whim of all sorts of varied factors: scheduling, major and general requirements, teacher sabbaticals, friends, personal choice, etc. The school itself will have broad requirements (like you must have 3 science and 2 history classes). Your major/department will have more specific ones (like requiring 14 classes in the major, including 7 of the 10 “core” classes, as defined by the department). So everyone’s education is different. That can be a good thing, but it’s less coherent.

And even within a particular class type, like say: Computer Architecture, the classes vary wildly and are rarely designed to work with each other or be taken in a particular order. The school and department might have determined that it should have a Computer Architecture class, but each teacher is free (somewhat) to determine the specific content and style of his or her class. Teachers vary wildly in teaching ability. I mean WILDLY! Even at the best schools. In fact, the teaching quality at M.I.T. was considerably lower than at my undergraduate school, Haverford College. It’s not that the M.I.T. professors weren’t as smart — they were plenty brilliant — but they leaned more toward being famous researchers while Haverford selected people who excelled first and foremost at undergraduate education.

In any case, even within a particular major, say Computer Science, the slate of courses you take might not form a coherent picture. There isn’t much effort made to ensure this. It’s more like, “we need a Compilers course, who wants to teach it?” and then that professor goes off and builds their plan. I’m sure there are constraints and feedback, but it being part of a single coherent program doesn’t seem to be one of them. And teacher style so heavily influences the experience. Now, don’t get me wrong, many of these courses are really good. But they require that you, the student, do a lot of the work integrating the bigger picture. Which really, for first rate minds trying to absorb advanced modalities of thought, is totally fine. It’s just not exactly the same as learning a complex practical field like programming.

But let me speak briefly about the classic topics:

Theory of Computation – Is the cool (but highly esoteric) field of math that endeavors to prove things about what can and cannot be computed. It includes a lot of discussion about theoretical computers like the Turing Machine and what sorts of computational problems are equivalently complex. This is actually very useful, but only if you have already encountered practical programming tasks. Otherwise it will probably just confuse the bejesus out of you.

Compilers – Is about writing compilers, and how computational semantics are transformed. This is bordering on totally useless for the novice programmer. I myself found it fascinating, but then, I’d written several compilers. Again, you want to study this several years into your career.

Algorithms – Is the formal study of different methods of problem solving. This is where stuff like the difference between a bubble sort and an insertion sort goes. Every programmer should know the basic algorithms, and you can read a beginning book fairly early in the learning process and pick up the basics (there’s a small example just after this list). The college version is much more rigorous. But in the early stages you can lean on libraries which encapsulate these solutions.

Data-structures – These relate closely to Algorithms, but are methods for actually storing data in computers. Different data-structures lend themselves better to different algorithms. The mistake made by a purely academic approach is in thinking that they make a lot of sense without some practical knowledge of the kind of things that you do in normal computing. Still, Algorithms and Data-structures are essential at all levels of programming beyond the totally trivial, and these are the most practical of the classic topics.

AI (Artificial Intelligence) – Can be extremely useful to the game programmer. Games, after all, need enemies that appear intelligent, and in addition have to solve all sorts of big computational problems which use AI techniques (like moving the camera around etc.). But as taught in school it’s pretty theoretical and you need at least a couple years of practical skills first.

Architecture – Is the study of computer hardware, usually micro-processors. A lot of people hate this topic, not being hardware guys. And although you can learn this anytime, you really should. It’s impossible to be a truly great programmer without knowing something about the hardware that makes it all happen. If you are into compilers, this is even more true. I personally loved these classes.
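For instance, the classic beginner’s exercise from the Algorithms bucket, sketched in Ruby: an insertion sort next to the library call that encapsulates a better solution.

```ruby
# Insertion sort: O(n^2), but simple enough for an early project.
def insertion_sort(arr)
  a = arr.dup
  (1...a.size).each do |i|
    j = i
    # Shift element i left until it sits in order.
    while j > 0 && a[j - 1] > a[j]
      a[j - 1], a[j] = a[j], a[j - 1]
      j -= 1
    end
  end
  a
end

p insertion_sort([5, 2, 4, 1, 3]) # => [1, 2, 3, 4, 5]
p [5, 2, 4, 1, 3].sort            # the library you lean on early
```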

I also want to mention the subject of programming languages. Most schools, rightfully or not, view the choice of a specific programming language as fairly “academic.” In the above classes, advanced CS students learn that all normal computer languages are “Turing complete” and therefore equivalent to each other: any program in one could be converted to a program in another by automated means (this is what compilers do). Languages all have the same basic features, and if one is missing a feature you can build that feature within the language itself. So who cares which one you use?

This makes a certain academic sense, but in practice the choice of programming language is vital. And the budding programmer should be introduced to a wide variety of them, at a steady yet not overwhelming pace, so that they learn the fundamentals common to all and do not become one of those lame-ass programmers who are afraid to learn a new programming language. I can be programming in any new language in one day, proficient in a week, expert in a month, master in six. It’s just not that hard.

Schools often have a particular language that they favor. These days it might be Java. In my era it was either Pascal or C. For many schools it’s still probably C/C++. At M.I.T. it was Scheme/Common Lisp! But professors are often free to just teach a class in a different language. I had an undergraduate AI class entirely in Prolog. For the gifted student this is a good thing, as a whole class in a new language is a decent enough immersion to actually learn a new mode of thought (the Prolog class substantially improved my programming even though I’ve not used Prolog since). But some professors take the new-language-for-each-assignment approach, which is idiotic: there isn’t enough time or depth to master anything, so the whole assignment becomes about learning the minimum needed to get it done. The net net is that there is rarely a coherent plan to get you programming and then have you learn a wide range of practical languages. In such a plan, you might start with an easier interpreted language like Python, then be taught to master four or five others that are both practical and varied (say C/C++, Java, JavaScript, at least one assembly, and a “fancy” one or two like Ruby/LISP/Scheme/Prolog/Smalltalk). That doesn’t usually happen. You might get lots of Java and a smattering of 10 others.

College professors also don’t usually think that classes that directly and specifically teach programming languages and practical programming are very cool. There is no research or terribly theoretical aspect to them. I.e. the subject isn’t very academic. They are rarely themselves very good programmers (if they were, they’d be off working for Google or whatnot 🙂 but seriously the personality type for “programmer” and “professor” are different — albeit both bookish). This leads to professors rarely adding this kind of class to the curriculum unless someone makes them.

_

Having heard about all these more practical Gaming majors that colleges now have, but which I know nothing about (they didn’t exist 20 years ago), I asked a friend of mine who just finished her CS degree yesterday! Lauren is a fellow blogger, programmer, WOW fan, and budding game designer-programmer. Big congratulations! Her comments are in blue:

Having just completed my degree yesterday, I can confirm that not as much has changed in Computer Science education as one might expect, especially given the exponential growth of the field. Aside from the specific languages taught, which for me was mainly Java instead of Pascal, the curriculum is much the same. The breadth of languages taught is still very much dependent on what you choose to seek out yourself; were it not for honors opportunities or research, I never would have become as familiar as I am with functional programming or the MVC architecture.

After the first two years, programming takes a back seat to theory; upper division classes, while useful and offering a degree of specialization, can be light on actual coding. There are still opportunities to improve your skills, though. Project classes, at least at my school, offer a chance to really show your programming chops, so to speak; with the exception of one I personally considered, all required the completion of extensive coding projects in ten weeks or less to the exclusion of lecture material.

The biggest factor that affected the quality and extent of the education I received was the professors. Sometimes, you will get a truly horrible lecturer, someone who isn’t fair or just doesn’t care. For me, this happened more often than not. The best advice I can give is: Be able to teach yourself. To be honest, I didn’t bother attending classes where the professor was incapable of teaching — I don’t want to waste my time. I went home and read the textbook, or taught myself using tutorials or information online.

“Bad” classes will happen, and the most important thing I learned in college, or even before, is that you need to take active control over your education. Even if the teacher sucks, you can’t blame a failing grade on him; you have the power to learn the material and should do so to the best of your ability. This isn’t to say that poor professor performance doesn’t raise my hackles (it does, a lot), just that self-directed learning is a necessity for succeeding as a student and a programmer, especially since the number of future employers that will accept “The teacher sucked!” as an excuse for a failing grade must be pretty small.

Even if you’re taking the so-called “structured” or “formal” education path, no one will hold your hand. You need to look out for yourself, and find opportunities to broaden your knowledge. I learned firsthand that often these opportunities will not be supplied to you, or even pointed out. You need to be responsible for your own education, especially at large universities. Self-directed study and college are not mutually exclusive.

In that spirit, in addition to my CS degree, I also took a Concentration in Game Culture and Design. This was an interdisciplinary program in conjunction with the art school which did add a nice game “focus” to my studies. I think these types of programs can be helpful, though to say this improved my coding skills would be more than a stretch. Mostly, it gave me a bit more insight into the game pipeline and the scale of the work that goes into making a game. I’ve gained some skills which I otherwise wouldn’t have been exposed to; for instance, I’m now comfortable finding my way around game design docs and I’ve had practice giving pitches.

While not a value traditionally espoused as part of a CS education, some gaming or art courses can help your creativity. I can’t speak for the more technical games programs out there, but I think there is merit in learning a bit about the industry even prior to leaving school.

This fresher opinion confirmed my belief that no school can be as rigorous as GOOD self-training like I gave myself, and that under no circumstances should you wait until you’re 18 to start (unless you already are!).

The basic message: start as early as you can, preferably at age 8-12.

Given that college is roughly ages 18-22 and adds a lot of value to an education begun at home, it can actually dovetail perfectly with said self-education. This will be the topic of a later post in this series.

CONTINUED HERE with The Method!

_

Parts of this series are: [Why, The Specs, Getting Started, School, Method]

Subscribe to the blog (on the right), or follow me at:

Andy:  or blog

Read more from guest blogger Lauren here.

Or more posts on video gaming here.

And what I’m up to now here.

So you want to be a video game programmer? – part 3 – Getting Started

…CONTINUED from PART 2. Or start at Part 1.

Some kid is always asking me, “I love video games, how do I learn to program them?”

First of all, a warning: reaching the skill level of a professional video game programmer takes years. There are no shortcuts. You cannot possibly go from nothing to professional-grade skills in less than perhaps 2-3 years — and for that you’d have to be an uber-genius — usually it takes 5-10.

The good news is that you can start very young (8-10 — I started at 10) and you can do it on your own with common equipment and readily available information.

There are two basic approaches: home training and school. And while I personally recommend both, I’m going to use this post to give my own “origin story.” In followups we can apply these lessons to the present (programming itself hasn’t changed all that much in 30 years — there are just more libraries).

[ BTW, if you’re new to the blog and wondering who the hell I am in this context, click ]

Rewind to 1979. Some of my favorite things in the world were Dungeons and Dragons and arcade games. I was really too young to actually play D&D accurately, but I loved reading the books and modules (besides my regular diet of fantasy novels). I went to the Apple Store (not actually owned by Apple or nearly as glamorous as they are today — in fact, the owner resembled Gandalf) and saw the game Akalabeth running on an Apple II (not a + or an e, but an old-school II). Boy did that set me to dreaming!

Then in 1980 my science teacher brought into class a Heathkit H8 her husband had built (yes, built). This early computer ran a lousy version of BASIC and possessed the world’s worst storage device: the audio tape drive. Actually punch cards were worse, but with the tape drive, saving your program bordered on impossible (at least when it came to sharing audio tapes with the rest of the class), so you had to type it in repeatedly. We were given a single mimeographed sheet of paper with the BASIC commands. I read this a couple of times and then wrote out longhand the first draft of a text-based RPG where you wandered around and fought orcs and trolls for gold and tretchure (this is how I spelled treasure at 10). During lunch I typed in and debugged the game, editing my paper copy as needed. I used my friends as beta-testers. It may seem overly ambitious to try to recreate D&D as one’s first program, but it illustrates a programmer principle: program what you love.

Then my best friend got himself a brand new Apple II+ (just released). This was a slick update of the Apple II. It had a whole 48k, came with BASIC, and was often (but not always) accompanied by a 143k floppy disk drive! Low low price of $900 just for the floppy drive! In any case, the II+ was so much more awesome than the Heathkit. It even had graphics!

So I began pestering my father for an Apple. This took 9-10 months of continuous harassment — the machine was expensive — and all sorts of creative techniques to convince him. I offered to mow the lawn for free. I explained how various accounting software would make balancing his checkbook a breeze, etc. Once I was victorious (Jan 1981) we got the accounting software, but he never used it, leaving me to my own devices on the machine. And I think I kept getting paid for the lawn. Still, this episode illustrates another important programmer principle: persistence.

After the Apple arrived, I spent nearly all of my free time (perhaps 6-8 hours a day) on the thing, for years. This is essential. You must offer up blood onto the altar of the programming gods. Principle: sacrifice. I used this time in many ways. I played a lot of games. I used every piece of software. I taught myself to program. I hacked. Principle: market research. But I couldn’t afford as many games as I wanted, and in those early years the available library was small, so I was always trying to make my own.

I wrote totally lame versions of nearly every arcade game ever made. In BASIC at first (we’ll get to the issue of environment later). I would generally spend a day or three banging these out until they were marginally playable and then move on to new projects. Lesson here: practice. I chose more and more ambitious games and would use each one to teach me something new. I did this in incremental steps, mostly 1-2 day projects. By way of example, I might upgrade something, or I might add a load/save system (requiring learning about I/O). My early games didn’t have much in the way of collision; later ones did. I started with text, then moved up to lo-res graphics, then hi-res, then shape tables, then bits of assembly language subroutines for blitting. Principle here: baby steps.

Baby steps are incredibly important. You can’t learn everything there is to know in computers in one shot. Each little area takes multiple projects and days — at least — to learn and master. Take file I/O. I’m sure I got something up and going the first day or two back in the early 80s when I decided to add a load/save system, but I was still learning about file I/O 25 years later on Jak 3 (of course then I was inventing new ways of doing stream I/O, but it was learning nonetheless). Your first pass might work, but often you barely understand any of the principles involved.

You have to start simple, build up blocks, and go from there. That’s why interpreted languages and text programs are a good way to begin. You need to learn about variables, scope, and flow of control before you can jump into 3D graphics. And forget about complex unforgiving environments like C/C++ or assembly to begin with. Those come later and are just one more thing to spend a series of baby steps on. Just learning about makefiles or projects and compile options could stop a novice dead. So don’t — yet. Each task (and thing to learn) should be broken into some chunk that only takes a couple days at most to digest — or at least make some headway on. This leads to a virtuous feedback loop of progress and learning.

I kept writing those lame little games for about 3 years (100s of them). Of all my friends with computers (we all programmed in that era, because computers didn’t do much if you didn’t program), my games were the coolest. I used them to invent all sorts of excuses to develop new skills. I wanted to learn about interpreters, so I made an engine to allow the creation of text adventure games using a custom scripting language. Once I got this going I upgraded it to graphic adventures, which proved to be a perfect excuse to implement an idea I had seen in the Sierra games, where line drawing and fill commands were used to compress images to a fraction of their raw size. On the Apple II a raw graphics screen was 8k, so a floppy only fit 17 of them. A normal compressor (ancestor of zip) might squeeze this to 3-4k, but that is still only 30-35 images. This “save the drawing commands” style made them a fraction of that. But for it to work I not only had to create the “renderer” (including an assembly fill routine) but also had to build a whole “paint program” to allow the recording/creation of these proprietary images.
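The idea, in a loose modern sketch (Ruby, with every name invented; the real thing was BASIC plus Apple II assembly): an “image” becomes a short list of drawing commands, and the “renderer” just replays them.

```ruby
# An "image" stored as drawing commands: a few dozen small commands
# instead of 8k of raw pixels.
image = [
  [:color, 3],
  [:line, 0, 0, 100, 50],
  [:line, 100, 50, 0, 50],
  [:line, 0, 50, 0, 0],
  [:fill, 40, 30]
]

# Replay the list against anything that can draw.
def render(commands, screen)
  commands.each { |cmd, *args| screen.public_send(cmd, *args) }
end

# Stub target so the sketch runs; the real one rasterized to video memory.
class TraceScreen
  def color(c)
    puts "color #{c}"
  end

  def line(x0, y0, x1, y1)
    puts "line (#{x0},#{y0}) -> (#{x1},#{y1})"
  end

  def fill(x, y)
    puts "fill from (#{x},#{y})"
  end
end

render(image, TraceScreen.new)
```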

However, each of these sub-steps resulted in satisfying progress on its own. Principle: chunking. Take that fill routine. It took several days, and my mastery of recursion in assembly wasn’t the best, so it left little corners unfilled, but it was cool in and of itself. My first fill routine (in BASIC) took 5 minutes to do a fill; the assembly one took only a second or two. Plus, I kept using it in all sorts of programs for years (with improvements). Principle: reuse. Building on the tools you make is essential to programming.
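That fill routine, in miniature: the standard 4-way recursive flood fill, sketched here in Ruby (the 1980s original was hand-rolled recursion in 6502 assembly).

```ruby
# Minimal 4-way recursive flood fill. Botch the recursion and you
# get exactly those little unfilled corners.
def flood_fill(grid, x, y, target, paint)
  return if y < 0 || y >= grid.size || x < 0 || x >= grid[y].size
  return if grid[y][x] != target
  grid[y][x] = paint
  flood_fill(grid, x + 1, y, target, paint)
  flood_fill(grid, x - 1, y, target, paint)
  flood_fill(grid, x, y + 1, target, paint)
  flood_fill(grid, x, y - 1, target, paint)
end

grid = ['#####',
        '#...#',
        '#.#.#',
        '#####'].map(&:dup) # dup so the strings are mutable
flood_fill(grid, 1, 1, '.', '*')
puts grid # the '.' region is now '*'
```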

In 1982, I met Jason Rubin. He also programmed. He was an amazing (by the standards of the time and our age) artist and his games LOOKED REALLY COOL. But they crashed a lot. Mine rarely did. From the beginning I hated crashing. Still can’t tolerate it. I have trouble leaving the keyboard if a crash bug is still outstanding. Principle: perfectionism. My programs also did much cooler “programming” stuff. They just didn’t look cool. When we combined our talents, things really took off! Our games now looked cool AND ran decently. Impressive stuff. Lesson: partnership. Not everyone can be good at every aspect of computers. Nor even of programming itself.

CONTINUED with Part 4 – School!

_

Parts of this series are: [Why, The Specs, Getting Started, School, Method]

Subscribe to the blog (on the right), or follow me at:

Andy:  or blog

Or more posts on video gaming here.

And what I’m up to now here.

So you want to be a video game programmer? – part 1 – Why

This post is a sequel of sorts to my How do I get a job designing video games. The good news is — if you’re a programmer — that nearly all video game companies are hiring programmers at all times. Demand is never satisfied. And the salaries are very, very competitive.

The bad news is that it takes a hell of a lot of work to both be and become a great game programmer. Or maybe that isn’t such bad news, because you absolutely love programming, computers, and video games, right? If not, stop and do not goto 20.

I’m breaking this topic into a number of sub-posts. Although this is the intro, it was posted a day after the second, number 2, on types of game programmers, but I’m backing up and inserting this new number 1 (I’m a programmer, I know how to insert). Other posts will follow on topics like “how to get started” and “the interview.”

_

So why would you want to be a video game programmer?

Let’s start with why you might want to be a programmer:

1. Sorcery. First and foremost, being a programmer is like being a wizard. I always wanted to be a wizard. Given that magic (as in the D&D variety) doesn’t seem to be real (damn!), programming is the next best thing. Computers are everywhere. They’re big, complex, and all sorts of cool everyday devices (like iPhones, set-top boxes, cars, and microwaves) are really basically computers — or at least the brains of them are. 99.9% of people have no idea how this technology works. As the late, great Arthur C. Clarke said, “any sufficiently advanced technology is indistinguishable from magic.” Yay computers! If you actually know the arcane rituals, incantations, and spells to control these dark powers then you are… drum roll please… a wizard.

2. Career security. Computers are the foundation of the 21st-century economy. Nearly every new business is based on them. Knowing the above incantations is secret sauce. All the growth is in high tech (production possibility frontier and all that). Hiring is supply and demand too. The demand is for programmers and other high-tech specialists.

3. Even more career security. Programming is hard. It requires a big Neo Cortex-style brain. This means lots of people can’t do it. It takes years of study and practice. I’ve been programming for 30 years and there is still an infinite amount for me to learn. Awesome!

4. It’s a rush. Creating stuff is a rush. Making the infernal machine bend to your warlocky will is a huge thrill. It never gets boring and there is always more to learn (related to #3).

5. It pays really well. This is related to #2 and #3. People need programmers and they can’t get enough of them, so they have to pay competitively. Even in the late ’90s and early ’00s at Naughty Dog it was very rare for us to start ANY programmer at less than $100,000, even ones right out of school. Good ones made a lot more. And if you’re a total kick-ass grand-master wizard (nerd) like Bill Gates or Mark Zuckerberg you can even start your own company and make billions. Take that, you muscle-bound warriors!

6. Solo contributions. You like spending time with machines and find all day dealing with illogical humans at least partially tedious. Sorry to say it, but even though most professional programming is done in teams a lot of time is spent at the keyboard. For some of us, this ain’t a bad thing.

7. Socialization. You need an excuse to hang out with others. On the flip side, because of this team thing you’ll be forced to socialize on and off between coding, and this socialization will have a certain structural support. That’s convenient for the would-be wizard: master of demons and terrifying forces, but afraid of starting conversations.

So why would you want to be a video game programmer specifically?

8. Video game programming is really hard. Probably the hardest of the hard. It combines cutting edge graphics, effects, the latest hardware, artistic constraints, tons of competition, very little memory, and all sorts of difficult goodies. The really serious wizards apply here.

9. Other types. Video game teams have artists, musicians, and designers on them too. Lots of tech jobs don’t (although they sometimes have those pesky marketing folks). Artists etc. are cool. They know how to draw or compose cool stuff, which makes your code look and sound much cooler.

10. Consumer driven. If you make it onto a professional game, it will often sell lots of copies, and people will have heard of what you do. This is much, much cooler than saying “I worked on the backend payment scheme of the Bank of America ATM.” It’s so cool that it might even get you laid — which is an important concern for bookish wizards of both genders.

11. It’s visual. Seeing your creations move about the screen and spatter into bloody bits is way more exciting than that green text on the bank ATM. Talented artists and sound designers will come to you with said bloody bits and all sorts of squishy sounds, which will make your coding look 1000x cooler than it would by itself. If you aren’t into bloody bits then you can work on a game where enemies explode into little cartoon rings. It’s all cool.

12. It’s creative. For me, I have to create worlds and characters. I’ve been doing so my whole life. Right now I’m not even programming but I’m writing novels, which is also about creating. Programming in general is pretty creative, but game programming is probably the most so.

13. Love. You love video games so much that working on them 100+ hours a week seems like far less of a chore than any other job you can think of!

I’m sure there are more reasons, but the above seem pretty damn compelling.

CONTINUED HERE with Part 2: “The Specs”

_

Parts of this series are: [Why, The Specs, Getting Started, School, Method]

Subscribe to the blog (on the right), or follow me at:

Andy:  or blog

Or more posts on video gaming here.

And what I’m up to now here.

So you want to be a video game programmer? – part 2 – Specs

…CONTINUED FROM PART 1.

There are a couple of broad categories of programmers working on video game teams. If programmer is your player class, then the following types are your spec. Programmers are all warlocks and mages so instead of “demonology” or “frost” you can choose from below. (NOTE: if you don’t get this joke, you don’t play enough video games) This is the real world however, and many programmers dual (or even triple) spec — i.e. they handle multiple specialties.

1. Gameplay programmer. Programs enemies, characters, interfaces, gameplay setups etc. Probably also does things like AI and collision detection. These programmers are sometimes a little less hardcore technical than some of the other types, but this is the sub-field where the most “art” and experience are often required. Learning how to make a character’s control feel good is not something you can read about in Knuth. It takes the right kind of creative personality and a lot of trial and error. In a lot of ways, this is the heart and soul of game programming, the spec that truly differentiates us from the more engineering programming disciplines.

2. Tools programmer. Works on the extensive tools pipeline that all games have. This is the only branch of game programming where you don’t absolutely have to know and breathe video games inside and out, and it’s a little closer to mainstream applications programming. That being said, life at most video game companies is so intense that you’d better love them. Tools programmers tend to be very good at practical algorithms, data processing, etc. Perhaps because it’s more “behind the scenes,” this spec is often viewed as less glamorous, and fewer programmers want to go into it.

3. Sound programmer. A very specific niche. Here you have to not only program well but also care about the esoteric field of sound. You need the kind of ear that can tell if there is a one-sample glitch in some audio loop, and you need to care if the 3D audio spatialization is off or the sound field isn’t balanced. This is often a fairly low-level area, as audio programming is often done on DSPs.

4. Collision programmer. This is a really specific spec, and often overlaps with Graphics because it involves totally intense amounts of math. You better have taken BC calculus in tenth grade and thought “diffy-q” was the coolest class ever if you want to go into this.

5. Network programmer. In this era of multiplayer and networked gaming there’s a lot of networking going on. And programming across the internet is a bit of a specialty of its own. In general, video game programming takes any sub-field of programming to its extreme, pushing the bleeding limits, and networking is no exception. Games often use hairy UDP and peer-to-peer custom protocols where every last bit counts and the slightest packet loss can make for a terrible game experience. If this is your thing, you’d better know every last nuance of the TCP/IP protocol and be able to read raw packet dumps.

6. Graphics programmer. Some guys really dig graphics and are phenomenal at math. If you don’t shit 4×4 matrices and talk to your mom about shaders, don’t bother. This sub-specialty is often very low-level, as graphics programming often involves a lot of optimization. It may involve coming up with a cool new way of environment mapping, some method of packing more vertices through the pipeline, or better smoothing of the quaternions in the character joints (HINT: this involves imaginary math — and if you don’t know that means the square root of -1, this sub-field might not be for you).

7. Engine programmer. For some reason, most wannabe video game programmers hold this up as their goal. They want to have created the latest and greatest video game engine with the coolest graphics. Superstars like Tim Sweeney, John Carmack, and even myself are usually seen as falling into this category. The truth is that superstars do all kinds of programming, and are often distinguished by the fact that we are willing and able to handle any sub-type and tie it all together (see lead below). In my mind engine programmers are jacks-of-all-trades, good at building systems and gluing them together. The top guys often blend with Graphics and Lead below. There’s also tons of stuff like compression (nothing uses compression like games; we’d often have 8-10 different custom compressors in a game; see the toy example after this list), multi-threading, load systems (you think seamless loading like in Jak & Daxter is easy?), process management, etc.

8. Lead programmer. People also dream of being the lead. All the great programmers are/were. This is the hardest spec, and no one ever starts out in it. You need to be able to do any of the other specs, or at least judge which approach is best. You need to be able to roll up your sleeves and dive in and fix crap anywhere in the program. You need to live without sleep (4 hours a night every day for years, baby!). You need to be able to squint at the screen and guess where the bug is in other people’s code. You need to know how to glue systems together. You need to be able and willing to trim memory footprints and optimize (no one else wants to do it). In fact, you have to know the entire program, even if it is 5-10 million lines of code, and you have to do all the crap no one else wants to do. Plus, you often have to manage a bevy of other personalities and waste lots and lots of time in meetings. Still want the glory? Being lead is all about responsibility!
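To give a flavor of the compression point in the engine spec above, here is the simplest possible “custom compressor,” run-length encoding, as a toy Ruby sketch (real game compressors are LZ-style and far fancier, but the spirit is the same):

```ruby
# Run-length encoding: [count, byte] pairs instead of repeated bytes.
def rle_compress(bytes)
  bytes.chunk_while { |a, b| a == b }
       .flat_map { |run| [run.size, run.first] }
end

def rle_decompress(pairs)
  pairs.each_slice(2).flat_map { |count, byte| [byte] * count }
end

data   = [0, 0, 0, 0, 7, 7, 9]
packed = rle_compress(data)      # => [4, 0, 2, 7, 1, 9]
p rle_decompress(packed) == data # => true
# A real byte-oriented format would cap runs at 255 and escape
# literals; a shipping game layers many such tricks per data type.
```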

CONTINUED with Part 3: Getting Started

_

Parts of this series are: [Why, The Specs, Getting Started, School, Method]

Subscribe to the blog (on the right), or follow me at:

Andy:  or blog

Or more posts on video gaming here.

And what I’m up to now here.

Eyeborg – Resistance is Futile

As my friends who’ve known me since the ’80s will recount, I’ve always been an enthusiastic advocate of upgrading the human race. In fact, before going to M.I.T. to start my PhD (aborted after two years to make Crash Bandicoot), I applied to all sorts of MD/PhD programs in biomedical engineering. I chose M.I.T. (AI Lab) over Johns Hopkins (biomedical engineering) partially because I always had more fun with computers than in the bio lab (despite having majored in neuro-bio), but also because my enthusiasm for “improving” mankind with technology seemed to fall on deaf ears in the medical community. Somehow it’s perfectly alright to talk about giving sight to the blind — which, by the way, I’m all for — but uncool and oh so Dr. Neo Cortex to discuss bionic eye upgrades.

In any case, check this guy out!

He’s replaced his eye (albeit one already missing) with a camera and transmitter! For real!

Too bad he hasn’t yet overcome the really big hurdle: sending the signal to his brain! That’ll be a while; splicing any kind of video signal into an optic nerve or V1 (the early visual cortex) is, as we used to say at M.I.T., non-trivial!

Get to it, Eyeborg!

More information can be found at the Eyeborg Project’s home page.