I've been looking into how feedback and quality control happened in our beloved games of the 80s and 90s, and I came across something I'd like to share.
As I research the subject, it becomes apparent that, especially in the 80s, testing and feedback gathering wasn't a very structured activity for developers or studios. Clearly some testing was done, but mostly within the core team, with nearby friends, or by an external person hired for testing – and if anything, you'd mostly look for breaking bugs that would cause the game to crash or render it dysfunctional.
Let's focus on Leisure Suit Larry for a bit:
- Leisure Suit Larry in the Land of the Lounge Lizards – Wikipedia
- Leisure Suit Larry in the Land of the Lounge Lizards – Amiga Game – Download ADF – Lemon Amiga
This was Al Lowe's first adventure game not made for children – at the time he worked at Sierra On-Line. We're in late 1986, and Al is tasked with remaking an already existing adult adventure and giving it music, updated graphics, everything.
You act as the main character, Larry, and you navigate through various scenes with mouse clicks, while actions in the game world and interactions with other characters are handled through text commands that you type in. A text parser makes sense of your command and gives you a response. So you'd walk Larry up to a door, type "knock on door", and the game would understand you (most of the time).
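To make this concrete, here's a minimal sketch of that idea in Python – not Sierra's AGI code, just an illustration of matching verb/noun synonyms and falling back to a default response. All names and data in it are made up for the example.

```python
# Illustrative only: tiny verb/noun matcher with a default fallback.
# The synonym tables and room actions are made-up examples, not AGI data.

VERBS = {"knock": {"knock", "rap", "bang"}, "open": {"open", "push"}}
NOUNS = {"door": {"door", "entrance"}}

def parse(command):
    """Reduce free-text input to a (verb, noun) pair, if recognized."""
    words = set(command.lower().split())
    verb = next((v for v, syns in VERBS.items() if syns & words), None)
    noun = next((n for n, syns in NOUNS.items() if syns & words), None)
    return verb, noun

def respond(command, room_actions):
    """Run the room's handler for the parsed command, or fall back."""
    action = room_actions.get(parse(command))
    return action() if action else "You can't do that here."

# In the room with the bar's front door, only knocking does something.
room = {("knock", "door"): lambda: "You knock. A gruff voice asks for the password."}
print(respond("knock on the door", room))  # understood
print(respond("kick the door", room))      # falls back to the default response
```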
Text adventures weren't new, and Sierra had a long, successful streak of building adventure games that supported players' text input. Since this was the first time Al was in charge of an adventure game, he wanted to make sure the parser system (AGI, Sierra On-Line's Adventure Game Interpreter) was working as expected. He also wanted to reduce how often the game fell back to its default "you can't do that here" response.
He recognized that, for the game to be successful, it needed to understand player input correctly – and not frustrate players by making them waste time and energy finding the right way to describe what they want to do, just because the engine doesn't comprehend what they typed. To get feedback on the game and how the parser worked, Al convinced Sierra to run a beta test with a few external test players. Remember, this is 1986, and getting hold of reliable test players without too much overhead wasn't easy. What Al did was reach out to users in the CompuServe Gamers Forum and ask them to write and send in a short essay on why they'd like to test a new adventure game. The best responses won.
For those who don't know what CompuServe was: it was one of the first forum-type services, running through the late 70s and 80s. You'd dial into a special server via your telephone line and a 300-baud modem (then priced around 150-300 USD), use CompuServe's connection software, which provided a UI for it, and browse bulletin boards/forums to chat and exchange messages with like-minded people all around the world. The software could also be used for what passed as e-mail back then, as well as file exchange and live chats on various subjects.
Al did two things here: he tapped into a focused user group for games, and he fished for interested and committed users. He gave them a small challenge to probe their commitment as well – asking them to write an essay of some 100 words – to fend off the freeloaders who were just there for a free game. They picked the 12 users with the funniest essays and sent them copies of a slightly modified early version of the game.
Slightly modified, because Al wrote extra code that would record every occasion on which the system responded with "you can't do that here", writing the player's input, room/location and game state data back onto the game floppy disk. After some time, he had the beta testers send the floppies back. This is an early version of what we do with software today to help with defects and bugs, enable troubleshooting, do capacity planning, perform UX A/B testing, and all kinds of things. It's the mid-80s version of collecting telemetry on when things go wrong, which helps you troubleshoot and learn where to improve.
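The exact format Al used isn't documented here, but a rough modern sketch of the idea – log the player's input plus the room and game state whenever the parser falls back to its default response – could look like this in Python. The file name and fields are assumptions for illustration, not Al Lowe's actual code.

```python
# Assumed format for illustration – not Al Lowe's actual code. Whenever the
# parser hits its default response, append the input and game state to a
# log file (on the original floppy, this data was written back to the disk).

import json
import time

LOG_PATH = "parser_misses.log"  # hypothetical file name

def log_parser_miss(player_input, room_id, game_state):
    """Record one 'you can't do that here' occurrence for later analysis."""
    entry = {
        "ts": time.time(),
        "input": player_input,
        "room": room_id,
        "state": game_state,  # e.g. inventory flags, score
    }
    with open(LOG_PATH, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")

# Called from the parser's fallback branch:
log_parser_miss("kick the door", room_id=3, game_state={"score": 12, "has_wallet": True})
```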
Al then analyzed the collected data and added hundreds of extra responses and adjustments to the game. He brought down the number of occasions a player would see the dreaded "can't do that here" message, reducing frustration – either by making the parser accept more variations of phrasings that were correct in the first place (supplied by the users), or by adding funny responses to things players tried that were wrong but deserved a different answer. This was also another opportunity to shape the overall tone of the game – which is humorous and makes fun of the main character, Larry.
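The analysis side can be just as simple. Here's a hypothetical sketch that reads the log from the previous snippet and ranks the most frequent missed phrasings per room – those are the prime candidates for new parser synonyms or custom joke responses.

```python
# Hypothetical analysis step: count missed inputs per room so the most
# frequent phrasings can get new synonyms or a custom response first.

import json
from collections import Counter

def top_misses(log_path, n=10):
    counts = Counter()
    with open(log_path, encoding="utf-8") as f:
        for line in f:
            entry = json.loads(line)
            counts[(entry["room"], entry["input"].lower())] += 1
    return counts.most_common(n)

for (room, phrase), hits in top_misses("parser_misses.log"):
    print(f"room {room}: '{phrase}' missed {hits} times")
```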
Al describes this in interviews here:
- Leisure Suit Larry in the Land of the Lounge Lizards trivia – MobyGames
- Al Lowe Reflects On Leisure Suit Larry | by James Burns | SUPERJUMP | Medium
To summarize:
- Al had a need for specific user feedback – he acknowledged that need and worked out what specific feedback he wanted
- He found a way to reach out to the respective users
- He built the tooling to collect the feedback and data from players – and also the tooling to analyze the feedback efficiently.
- He sent the special-purpose telemetry-collection version of the game to the test users – and made sure only they got it (as opposed to uploading it and running the higher risk of it being distributed)
Quite impressive!