For a few months, I’ve been trying to wrap my head around overarching trends in video game development, and fit them together with what’s going on right now; perhaps to provide a bit of insight into.. well.. into whatever’s about to happen next.
I want to start with some big-picture background on the industry. Many (most?) of you probably already know all this stuff. But I want to write it out anyway, just to make sure I have it straight in my own head. Please do feel free to point out anywhere that I’ve messed up. :)
Video games, as an industry, started up somewhere around the year 1970. Depending on exactly which event you count as the “start”, you might push that date a few years in either direction, but that’s approximately when people started to consider that one might be able to earn a living by making video games.
At first, this was mostly done via coin-operated games; a customer pays a very small amount of money to play a game once. If he wants to play again, he needs to pay that very small amount of money again. This was great; people didn’t have to spend much to play, and if you made your game short-ish (most games aimed for 90 seconds of gameplay), then you could still take in a reasonable amount of money if the game was popular. The downside, of course, was that the owner of the arcade cabinet had to keep the cabinet in good repair, make sure the controls worked, etc. But the industry was reasonably healthy like this (though small) for a reasonably long time.
In the late 1970s, the industry crashed. Wikipedia blames a glut of Pong clones, and a lack of compelling new game ideas. They might be right about that, for all I know. The important thing is that within a year, it recovered again, when Space Invaders took the world by storm.
But as computers became cheaper and more readily available in the home over the years, game makers started selling games directly to customers; get people to pay a moderate amount of money, and let them play the game as much as they want. This avoided the maintenance costs, and allowed software to be sold for a higher price point. Additionally, since your customers weren’t competing with each other for time at the screen, you could provide a longer game experience (in fact, you almost had to, in order to justify that higher price you were asking!)
Let’s zoom forward a bit to the early 1980s. The Atari 2600 was in its heyday. Here’s the neat thing about the Atari 2600 which you need to understand: it was basically an open platform. Its processor specs were all known, it used simple connectors, and its cartridges contained simple ROM chips (typically 2 to 4 KB in size) holding the programs for the console to run. If you knew the specs and had the necessary chip-burning hardware, you could make games for the Atari 2600 and sell them. This eventually led to a huge explosion of games, most of which were of rather low quality. Of course, Atari tried to stop companies from making these unlicensed games, but it never had much luck, and the proliferation continued at a rapidly accelerating pace.
Now, even today, with high-speed communications technology, it’s almost impossible to keep up with independent games. So try to imagine what it was like when the best sources of gaming news were magazines, which had limited space for reviews and took two to six months to talk about recent releases. There was simply no way for people to find out which games were good or bad, and as more and more games were released, the game makers became more and more desperate to get attention from customers: dropping prices to absurd levels, packing games in with other products, and so on. People often point to the Atari 2600 ET game as having caused the 1983–84 video game crash, but in truth, the writing had already been on the wall for a long time; ET was merely the final straw. (Homework: consider the Atari 2600 situation in 1983, and compare it against the situation in the iOS/Android markets today. How are the situations alike? How are they different? I’ll be bringing these questions up again tomorrow.)
So the market crashed. Consumers couldn’t find good games to buy, and even when they could, the game makers couldn’t make a living by making the games, because they needed to price the games absurdly low in order to compete.
And then a little toy-making company found a way to bring the game industry back to life. That company was Nintendo, and their brilliant idea was to keep their NES console proprietary. To write software for it, you needed their documentation and their hardware, and only Nintendo could manufacture your game cartridge, as their proprietary authentication hardware would recognise and play only cartridges manufactured by Nintendo. In that way, Nintendo could control the supply of games: it would be impossible for a glut of terrible games to happen again, and so game prices wouldn’t tumble to the point where developers couldn’t afford to keep developing. And of course, Nintendo took a percentage of every sale.
It was an effective system for a year or two, but other companies’ desire to release their own games on the system, without going through Nintendo, was simply too great. Many of them reverse-engineered the authentication technology, or found other inventive ways to make their games pass Nintendo’s authentication schemes. After Tengen produced an unlicensed version of Tetris competing with Nintendo’s own, Nintendo had finally had enough.
Nintendo had had virtually no luck at stopping unauthorised games from appearing on the NES; courts just didn’t see how any law was broken by people releasing unauthorised games. But in 1989, Nintendo hit on a new idea, simple and elegant, that brought the full force of the law behind their effort to lock down their consoles. Nintendo was releasing the Game Boy, and they knew how they were going to keep the tight grip on it that they’d failed to keep on the NES.
Here’s what they did: Each Game Boy game would, upon boot, display the “Game Boy” logo.
That “Game Boy” logo was the core of the new authentication scheme: if the game ROM displayed it correctly, the game would boot. If not, it wouldn’t. This may seem trivial, but the important thing is that now, in order to make an unauthorised game boot, a competitor would have to encode into their game an image to which Nintendo owned the copyright, of a logo to which Nintendo owned the trademark. Nintendo might not have been able to successfully sue a competitor for releasing a game on a Nintendo console without permission, but they could absolutely sue for use of their copyrighted image and trademark. The software engineer in me wants to call this an extremely clever hack of the legal system: finding a way to make an existing system do what you want it to do (“stop my competitors from releasing software for my hardware”), even though that wasn’t how the system was initially intended to work.
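The check itself, as later publicly documented by hobbyists, is remarkably simple: the console’s boot code compares a fixed region of the cartridge header against its own internal copy of the logo data, byte for byte, and refuses to continue on a mismatch. Here’s a rough sketch in Python of that idea (the header offset and length follow the community-documented cartridge layout, but the logo bytes below are placeholders, not Nintendo’s actual data):

```python
# Sketch of a Game Boy-style boot-time logo check.
# Offsets follow the publicly documented cartridge header layout:
# the logo bitmap lives at 0x0104 and is 48 bytes long.
LOGO_OFFSET = 0x0104
LOGO_LENGTH = 48

# Placeholder bytes standing in for the console's internal copy of the
# logo bitmap (NOT the real, copyrighted data).
BOOT_ROM_LOGO = bytes(range(LOGO_LENGTH))

def cartridge_passes_logo_check(rom: bytes) -> bool:
    """Mimic the boot code: compare the cartridge's logo region
    byte-for-byte against the copy stored inside the console."""
    header_logo = rom[LOGO_OFFSET:LOGO_OFFSET + LOGO_LENGTH]
    return header_logo == BOOT_ROM_LOGO

# A cartridge that embeds the logo data boots...
good_rom = bytes(LOGO_OFFSET) + BOOT_ROM_LOGO
print(cartridge_passes_logo_check(good_rom))   # True

# ...while one that omits it (say, to avoid infringing) does not.
bad_rom = bytes(LOGO_OFFSET) + bytes(LOGO_LENGTH)
print(cartridge_passes_logo_check(bad_rom))    # False
```

Note the legal trick embedded in the technical one: the only way to make `cartridge_passes_logo_check` return true is to ship Nintendo’s logo inside your ROM.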
We’ve more or less been in that setup ever since: in all the game consoles which have followed, makers have used copyright and trademark law to maintain absolute control over what software can be released on their hardware, all in the name of avoiding another glut of low-quality video games and another crash like the one we saw in 1983–84.
Things haven’t changed much in the 25 years since then. Hardware has gotten faster and graphics have gotten prettier, but console makers are still protecting their consoles in approximately the same way. We can argue about whether they’re worried about the glut of low-quality titles that killed Atari, or about the kind of direct competition that so annoyed Nintendo, or whether they just like receiving a fraction of the profits of everyone else’s development efforts, but the salient fact is that the companies which make the consoles still have a stranglehold on who can develop for those consoles, how their games can be distributed, and (to a varying degree) what prices can be charged.
Next time, game length vs. game price, a historical perspective.