I love storytelling. I feel that other people should care about storytelling, too, because it is so very important to us: it is a fundamental substrate of human existence. A lesson I learned from Neil Gaiman’s The Sandman is that a story is a metaphor for life and life is a metaphor for a story. All our forms of entertainment, including the modern invention of videogames, are culture, and “culture” is just shorthand for (among other things) “who we are, what we do, and what we enjoy.” Stepping sideways into music, there are always the words of Amanda Palmer in her Ukulele Anthem:

You may think my approach is simple-minded and naïve
Like if you want to change the world then why not quit and feed the hungry
But people for millennia have needed music to survive
And that is why I promised John [Lennon] that I will not feel guilty

This sort of reasoning contributes to my general enjoyment of all sorts of games regardless of format. (Though I am aware that each medium has its own qualities that may be employed to good effect in an artistic creation. But that is a discussion for another time.) For years, when broaching this topic with people I would refer them to my videogame collection: most everything I owned was there because of the storytelling. This included an oddity or two . . . which will come up in two paragraphs as the main reason I am posting today.

But before I get there, first let me note the timeframe of this collection. Around the turn of the century/millennium, videogame technology had advanced to the point where real storytelling was possible. I took particular note of Looking Glass Studios’ Thief (fan site here). Years prior, in seeing early first-person games and all their straightforward violence (see Wolfenstein 3D and Doom), I’d imagined the development of a game where enemies had personality, a real life. Your actions might be violent in the end, such as assassination, but this hypothetical game would have computer-controlled characters do such things as sleep, talk, and get angry. They’d HAVE background, instead of BEING background. Then Thief came along and did exactly that (minus much of the assassination).

The feeling that the world is not “just background,” but that it is alive and filled with living, breathing people, is what many gamers such as renowned author Sir Terry Pratchett enjoyed in the Thief series. I agreed. Thus it was that I bought Unreal Tournament for the story.

It sounds impossible. This is the first-person shooter that made “frag” into a (gaming-) household word. And to this day I have never met another human being who realized that Unreal Tournament HAS a story at all. But I did. Why? And how? Simple: I was the only human being I knew who bothered to read the character backgrounds presented before each match. Thus I saw that the world of Unreal Tournament is one filled with living, breathing people; one where the enemies have personality, a real life. There is even a little mystery about who and what the final enemy of the game is supposed to be. I liked this, and I felt that the background enriched my experience as I played through the high-quality first-person frag fest.

Ken Levine, during the development of the game Bioshock (which is very violent but also has extensive story), discussed how the goal was to ensure the game worked on three levels. On one, the story could be ignored beyond “okay, so, that’s the boss” and it would be a good action game for people who wanted it. On another, the story would be integrated well enough that gamers could observe “oh, I see what motivates these people” intermixed with the gameplay. Then on yet another, of course, the story would be there for people to devour in its entirety, poring over each log and line to understand the world.

The fact that a batch of “mindless enemies” can be so interesting leads me to now, where I’ve decided to run with this and develop a story world (the same thing as a game world) based around fighter background information. Part brainstorming, part game design, and part just having fun as always. And I will do it in my next post.

Variety is – Part I

July 27, 2014

Let us take a moment to appreciate that the URL for this post shall forever be “Variety is Part I,” no dividing punctuation.

Variety is more than “the very spice of life” (per William Cowper). Variety is the fundamental substrate of human experience. And, from there, it should come as no surprise that variety in art style or movie visuals or videogame content is important for “spice.”

So let me give you a “generalization alert” here: I’m about to draw parallels between things that people already know. In this case, I’m comparing storytelling and art to the basic human experience. Can you handle such mind-boggling generalizations?

Consider vision science, i.e., sensation and perception, i.e., that part of psychology concerned with how your visual system works (among other senses, depending on focus). It’s not enough to ask “How do we see things?” because the very question makes an assumption: that “things” are what we see. It’s more accurate to say we derive the existence of “things” after more basic calculations. At the most basic level . . . we are change detectors.

Change is information. Turn your screen black for a moment and look at your blurry reflection. If you needed to summarize what you saw, how would you do it? State “There’s an inch of horizontal forehead, then two more inches, then another two-and-a-half”? No, that’s a waste of breath. More effective is to note “Here’s a line; on one side of the line it’s my skin color, while on the other side it’s my hair color,” and suddenly both hair and forehead are understood. Pick another line and you get the edge of an eye, for instance.

It is these edges, these changes from one state to another, that define what we see. Conveniently, basic eye anatomy is designed to detect edges. Look it up online or take my sensation and perception class if you need more explanation: it’s a fact of the eye that we seek and emphasize change. Not just change across space but also change across time, as, of course, the motion of an object is also part of perceiving the “thing.”
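
If it helps to see the principle boiled down, here is a toy sketch in Python (my own illustration, nothing like the real retina): describe a row of pixels not by listing every value, but by noting where the value changes.

    # A row of grayscale "pixel" values: forehead, forehead, ..., hair, hair.
    row = [200, 200, 200, 200, 40, 40, 40]

    # Report only the places where the value changes -- the "edges."
    edges = [i for i in range(1, len(row)) if row[i] != row[i - 1]]
    print(edges)  # [4]: one edge is enough to describe the whole row

One number summarizes seven, which is the whole trick: change is information, and sameness is nearly free to describe.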

Now consider the people who take advantage of the powers of vision: artists. I enjoy reading webcomics, and as I’ve taken my daily fill I’ve heard artists using the phrase “visual interest.” For all I know, it’s official art terminology as taught in schools — for all I know, it’s an arbitrary yet convenient phrase plucked out of the language.

What does it mean? I don’t know, but it’s admired in places like Calvin & Hobbes: the comics are “interesting” because the characters don’t just stand around and talk. Not only are Calvin and Hobbes off dashing down a hill when it’s relevant to the story, but they’re walking along logs and clambering over rocks when it has nothing to do with the matter. Something happens: poses vary, camera angles vary, scenes vary, everything varies, especially in this comic, which is famous for varying its basic panel format, not to mention its content.

Thus the webcomic artist might speak of how a character design could use another detail here or there for visual interest, but people in many domains use this same idea. I remember stepping into a college dorm and hearing the phrase “you need to put posters on the wall or something” over and over again. The posters themselves didn’t have to be good, and, in some rooms, wow were they not; but people seemed to expect something in the blank space, here and later in life. Your porch looks better with a potted plant; the walls look better with a painting; the floor looks better with a rug; something to “break up” the flat expanse. To form an edge and make a change, else it’s all the same everywhere and therefore, by definition, nondescript.

It seems variety is just to be expected in art as in life. What about in videogames? Next time I will discuss it in game design using a few examples — with level of interest to be varied.

Culture I say

April 17, 2014

Let’s talk culture.

In college I developed a definition of “culture.” My journey toward it came from odd places: like teachers telling us students that we had to attend “cultural events.” Attendance even had grades attached to it. Here, look at this syllabus for an orientation class at the University of Maine:

“Each student is required to attend two cultural events . . . . Cultural Events may include, entertainment events, lunchtime lecture series, Art exhibition etc. Only one athletic event can be used. Turn in ticket stubs with your name on the back or a short description of the event and your personal reactions on a separate sheet of paper” [sic]

Whoa, really? Why? What’s so valuable about “cultural events” that you can justify requiring students to attend?

Well here, maybe an explanation can be found in this program description from the University of New Hampshire:

“In order to expose students to the broader constructs that frame our societal environment, as well as enhance their worldview and facilitate the acquisition of a global perspective, the McNair Program will provide access to cultural events for participants to attend. These events will include the fine arts, activities of ethnic diversity, and community/geographical events unfamiliar to McNair participants. During the academic year, participation in at least one (1) cultural event is required of all McNair students. During the summer component, all cultural events on the summer calendar are required.”

Oh, now that’s interesting. “Culture” is about “the broader constructs that frame our societal environment.” And yet we’re still talking about (per UM) “entertainment events” and “art exhibitions.” Yes, UNH also gave the example of “activities of ethnic diversity,” but pray tell: what are those? Demonstration of ethnic dance, perhaps? Workshops in making arts and crafts? All the things that make people happy or make their world more livable.

Culture is entertainment. Perhaps entertainment and art, if you feel those are separate categories.

Or at least, culture is entertainment when we speak of “being cultured.” Ask yourself: what is a cultured person? Images come to mind of an upper-class individual quoting Shakespeare. Which, come to think of it, is exactly in line with these college links I provided: once upon a time, universities existed to create “gentlemen,” the properly-cultured individuals of classical education.

But we need not look to upper-class snobs to quote Shakespeare. As you know, the average person is capable of saying that “all the world’s a stage” or complaining “lord, what fools these mortals be.” Culture, it seems, is nothing but a shared geekdom. It is the idea that you have experienced some entertainment (or art) and so have I. It is the assurance that if you ask “wherefore art thou Romeo?” then the people across the way know you’re not calling them Romeo; you’re quoting Shakespeare.

This means that videogames are culture.

Absolutely no way around this. Some people ask “Can videogames be art?” Less-presumptuous people ask “Are videogames art?” because the first question assumes it currently is not. But no one, NO ONE, questions whether videogames are entertainment.

Last time, I wrote about Jonathan Blow’s speech titled “Design Reboot” from 2007 (with the lovely animation of choice quotes by Superbrothers). Here’s some more:

“Why do people play games? We already know one of the answers is pretty obvious.
“1. Games can provide entertainment/fantasy/escapism. . . . But if this is all that games were, I would be intensely dissatisfied. Because fantasy and escapism is not fulfilling to me. At the end of the day, I want to feel like my life has meaning.
“2. Meaningful artistic expression. Coming from a different angle than other media. . . . Music doesn’t feel like a movie or a poem. In fact, if you have a song that is sad and a poem that is sad, the sadness from the poem is going to feel fundamentally different than the sadness of the song.
“3. A means of exploring the universe. . . . Games are formal systems . . . and systems like that are biased toward producing truth (or at least consistency). . . . You can think about mathematics. You start with some axioms that are defined or assumed as true and then you have some rules that you can use to combine those axioms . . . until eventually you end up with something that makes a statement that must be true that you didn’t know when you started.”

Thus we have videogames. Valve’s popular Portal series is quoted by people who share this geekdom: “The cake is a lie.” “We do what we must because we can.” “For science. You monster.” How did we get to this point? Portal is a puzzle game that fully explores its mechanics, granting the player an interesting new “means of exploring the universe” (“thinking with portals”) and proceeding logically from there. The gameplay engages the audience, as does the humor in the unfolding story; and, as the story proceeds, it explores the humanity (and lack thereof) of the characters in the play. I mean the plot. Thus is it both “entertainment” and “meaningful artistic expression.”

It is culture. The “cultured gamer” has played Portal. Just as you can say the word “Tetris” (no link possible, for it is ubiquitous) and the people across the way know you’re not sneezing.

And now gaming culture has been around long enough that the earliest gamers, predominantly starting from the 1980s, are now the grown-ups raising children. Just search online and you’ll see bloggers asking when and how it’s okay to introduce children to their own personal geekery (in movies, comics, or games). The people raising the next generation, the people running and spending money on today’s businesses, are people who’ve played Tetris.

So today’s game developers are advised to remember their creation is not “just a game”: games expand our mind and our language, they “frame our societal environment,” and they’d jolly well better “enhance [our] worldview” in preference to shrinking it.

I’ve performed in Shakespeare, and I’ve played in Portal. The great playwrights of past centuries are all dead. Videogame developers aren’t. Are you prepared for history to hold you to the same standards?

The quest for content

March 4, 2014

Time to essay an essay.

Game developers want players to play their games. It only makes sense. On one level, a small independent developer might be happy to know that a million people played something. On another, an established company might want to make money off of a million players with monthly subscriptions.

Both of those are fine, but there’s the question of what comes next. If you need to keep a player base, either for interest or for money, what do you do when the players “finish your content”? Unlike with paying for food, players need not pay twice for the same sort of content: they already own your game. They’ve “consumed” it. And now they might just say “there, I’ve done everything” and uninstall it.

This sort of “quest for content” affords three approaches.

A: Offer players more content.

B: Slow down player consumption of the content.

C: Make “consuming the content” irrelevant.

I’ve come to understand that bad decisions at this stage can defeat the purpose of gaming. We have created a monster: a sort of “anti-gameplay” in modern entertainment. Let me show you how we get there.

Solution C is a healthy choice, but hard to define. Consider: does chess have content to consume? No. You can play it forever against different opponents and always be satisfied . . . assuming you ever enjoyed chess in the first place. Thus if an experience is inherently fun, however one defines “fun,” then none of this matters and players will keep coming back.

Often, though, people need something to consume in order to have fun. A new story to read or new world to explore. Solution A is an answer, but, traditionally, is expensive to implement: one can make a longer game. One can produce an expansion pack. One can develop a sequel. Some players will buy it, then they’ll consume it and you’ll have lost them again. Repeat.

Solution B is the cheapest and most failsafe way to solve the problem. One might think I’d support solution B and dislike solution A, since I’ve argued that more content is not necessarily good content. But in that essay I made a point on “filler,” on forcing the player to do “the same thing twenty times,” which I did not fully substantiate. Solution B is the “filler” solution. It brings the threat of bad game design, and is the reason I’m writing this essay.

Now to tell a story that any gamer already knows, but bear with me as I get to the conclusion.

I first became aware of solution B years ago when I learned about MMORPGs. My roommate was hard at work advancing within, oh, one of those really popular games toward the start of the era. The game had released super-special “legendary items” that took a lot of effort to earn, as the player had to collect intermediary items from different locations in the world.

So one day my roommate was camped out near a lake. To populate the lake with aquatic nasties, the developers set spawn points which periodically would replace critters the players killed. No problem so far; though of course it breaks immersion a little when the players locate these magical “instant monster” spots. Anyway, my roommate explained that this one item you needed was only available from an alternate version of the fishy foe, which only appeared on a tiny fraction of the respawns. How could you tell if the alternate version were in the water? Ah, well, because it’s the only one that would follow you out of the water and shatter all suspension of disbelief by swimming in midair. Then, on a tiny fraction of the times you faced this watery loot machine, it would drop an intermediary needed for your “legendary item.”

One of several intermediaries.

And therefore once you had it, it was time to do another tiny-fraction-of-a-tiny-fraction hunt in another part of the world with another monster.

Yes, by the time all this was done, it’d be long enough of a “quest” to make a “legend”! No argument there! But why was this happening? In an ordinary “quest,” you engage in things that are personally meaningful, meet and lose friends, face real risk and find remarkable rewards, and maybe, just maybe, change the world. Here, the only real event was gaining the “legendary item” at the end. Meanwhile? The developers slowed down the player, dragged out the content, kept people paying to play, and implemented solution B.
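
For the curious, the arithmetic of that grind is easy to sketch. The numbers below are purely hypothetical (I never knew the real rates), but they show why the only thing that ends up mattering is how much time you sink in:

    # Hypothetical rates -- I never knew the game's real numbers.
    rare_spawn_chance = 0.02  # chance a respawn is the alternate fishy foe
    rare_drop_chance = 0.05   # chance that foe drops the intermediary item
    respawn_minutes = 10      # time between respawns at the lake
    intermediaries = 5        # pieces needed for the legendary item (also made up)

    expected_respawns = 1 / (rare_spawn_chance * rare_drop_chance)  # 1,000 respawns
    hours_per_piece = expected_respawns * respawn_minutes / 60      # ~167 hours
    total_hours = hours_per_piece * intermediaries                  # ~833 hours

    print(f"{expected_respawns:.0f} respawns, {hours_per_piece:.0f} hours per piece")
    print(f"{total_hours:.0f} hours of camping for the full quest")

No skill appears anywhere in that calculation; only the clock does.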

I’m far from the first person to talk about this. Jonathan Blow, well-known developer of Braid and thinker of gaming thoughts, presented a speech titled “Design Reboot” in 2007. It started with sentiments like mine about “game developers want players to play their games” and followed them through to their logical conclusions. It left such an impression that Superbrothers (the brilliant minds that would later bring us Sword & Sworcery EP) made an animation, well worth watching, of the kicker of the argument. To wit:

“It doesn’t really matter if you’re smart or are adept at trying to get ahead in a system because what really matters is how much time you sink in, because of all the artificial constraints on you. That also says that you don’t really need to do anything exceptional because to feel good, to be rewarded, all you need to do is run the treadmill like everyone else.”

As I said, though, any gamer already knows about filler. My point is this: consider how far removed we are from any gameplay.

Pretend I’m the game developer, dressed in the regal robes and crown of a mighty quest-giver. You are my old roommate. With all gravity, I set to you the challenge to retrieve the 7 Whatsits so you may earn your Legendary Polygonal Reward. Where are the 7 Whatsits and how do you retrieve them? Ah, therein lies your challenge! So now you go forth to accomplish your challenge by downloading a Whatsit location guide that somebody posted in an online text file.

Wait a minute.

You are doing nothing to find the 7 Whatsits. In fact, you can’t: how were you to know that a tiny fraction of all fish monsters spawned in one lake in the entire multiverse might give you one of your Whatsits? You’d have to spend days per monster in the whole game just to test each one. You can’t do that, so you don’t.

So you, my roommate, depend on someone else to have found the solution and have posted it online. You don’t even need to think: somebody else already did the thinking for you. “What really matters is how much time you sink in.”

Or you can imagine an even worse scenario. Assume Whatsits are ordinary items available to all players, including those who don’t seek the Legendary Polygonal Reward. It’s easy to imagine that the first person ever to get Whatsit #3 did so by accident. This person then posted online “I just got a ‘Whatsit #3’ from this rare nasty in the lake, but I don’t know what it is so I sold it.” Then you, my roommate, searched online for a Whatsit guide, found the post instead, and camped out by the lake.

No one sought the item and found it. No one solved the challenge.

No one “consumed the content.” No one played the game.

I argue that methods to slow down the player’s consumption of the content are ways to stop the player from playing. It’s long been known by many (including my roommate) that the sheer drudgery of filler gameplay is no fun. I argue that this approach, in its inevitable extreme, is the polar opposite force to gameplay: the “anti-gameplay.”

Thus, by definition, this version of solution B is bad game design. And if only the story stopped there.

In years since, I’ve seen the games on . . . oh, you know. That popular website. The one that took Livejournal and traded all the good features for a million incomprehensible privacy menus. Anyway, these games adopted a new game system: the “energy system.” For those blissfully unaware, this limits player actions or choices (in game design, “choices” and “things to do” are synonymous) by assigning a cost in some resource called “energy.” How do you get more “energy”? By waiting.

When I saw this, I couldn’t believe it. It was like an advertiser or politician using doublespeak to admit terrible wrongdoing but call it “an exciting innovation.” Energy systems are a mathematical in-your-face implementation of anti-gameplay: they define how you will not have fun now. And more games are released that use solution B this horrifically all the time.
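
If you have never seen one of these systems up close, here is a minimal sketch of how one works under the hood. The class, the names, and the numbers are all my own invention for illustration, not any particular game’s code:

    import time

    # A toy "energy system": every action costs energy, and energy refills only by waiting.
    MAX_ENERGY = 30
    REGEN_SECONDS = 300  # one point of energy every five minutes
    ACTION_COST = 5

    class EnergyBar:
        def __init__(self):
            self.energy = MAX_ENERGY
            self.last_update = time.time()

        def _regenerate(self):
            """Grant whatever energy the player has earned by waiting."""
            elapsed = time.time() - self.last_update
            gained = int(elapsed // REGEN_SECONDS)
            if gained:
                self.energy = min(MAX_ENERGY, self.energy + gained)
                self.last_update += gained * REGEN_SECONDS

        def try_action(self):
            """Return True if the player may act right now, False if they must wait."""
            self._regenerate()
            if self.energy >= ACTION_COST:
                self.energy -= ACTION_COST
                return True
            return False  # "come back later" -- anti-gameplay, quantified

Notice that the design question this code answers is not “what is fun to do?” but “how long until you may do anything at all?”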

So now that we have plumbed the depths of depravity, is there any way upward? Fortunately, yes. Solution C, making a game that’s actually fun, is still there for whoever dares the attempt. Some would argue that player competition, like the chess example, is an inexhaustible source of this. Developers are also wising up to the sheer breadth of solution A: by providing level editors and easy game-modification tools, they let the players make their own new content. This the players do gladly, sometimes going so far as to make nearly-standalone games that may then be developed and sold as a new product.

The opportunities are there, and I would say that the quest for content should lead in these more positive directions, not down the frightening spiral into anti-gameplay. That way lies madness. Which, if you’ve viewed Jonathan Blow’s speech, you know might be more than hyperbole.

If you’re going to read my blog, you should know one thing about me: I like to learn from the mistakes of history. That is, I like to generalize knowledge from one situation and apply it to another comparable situation.

See? I just did it. The idea of “learning from the mistakes of history” is that you’re supposed to understand the meaning of once-learned lessons, then apply them when you find yourself in a comparable situation. So then I said “I like to generalize knowledge,” which is a generalization of that common advice. People often grant me little more than a blank stare when I draw parallels between lessons learned, so I wanted to give you a “generalization alert” before I began really writing.

Today I want to point out that the following two things teach us the same general lesson about game design, and it’s an important one.

The first is in the post title: “The Paradox of Choice,” a speech by Barry Schwartz in a 2005 TED talk. To quote:

“When people have no choice, life is almost unbearable. As the number of available choices increases, as it has in our consumer culture, the autonomy, control, and liberation this variety brings are powerful and positive. But as the number of choices keeps growing, negative aspects of having a multitude of options begin to appear. As the number of choices grows further, the negatives escalate until we become overloaded. At this point, choice no longer liberates, but debilitates.”

Having too many options leads to paralysis, not freedom. I’ll assume you’re convinced by his speech (and countless examples you’ve encountered) and move on.

The second is a principle of game design. In the advertising for Jonathan Blow’s Braid is the following:

“Every puzzle in Braid is unique. There is no filler.”

When developers come up with a new game mechanic, the challenge is to fully explore the mechanic . . . and then stop. Giving the player “different relevant things to do” is fun. Forcing the player to do “the same thing twenty times” (“filler”) is not fun.

I argue that these lessons are the same. In game design, “choices” and “things to do” are synonymous: unlike with books, for games the audience is an active participant, and every choice means actively doing something. So, after playing a videogame RPG once, a player may “choose” a different character class and then have more “things to do,” playing the same game in a different manner.

The lesson, then, is “A well-designed game has the correct amount of choices, elements, mechanics, and so on, with little excess.” Need more quotes to convince you? Okay, have some Shakespeare:

“Therefore, since brevity is the soul of wit / And tediousness the limbs and outward flourishes, / I will be brief”

And Lewis Carroll:

“‘Begin at the beginning,’ the King said gravely, ‘and go on till you come to the end: then stop.'”

Are we set? Because, at this point, many designers — and players, and readers, and advertising executives, and anybody else in “our consumer culture” as labeled by Barry Schwartz — would agree that one should give the player things to do, yet have no idea what I mean when I talk about “brevity” or “stopping when you come to the end.” After all, consider the RPG example: if people have played the game once for each character class, what could be better than giving them another character class so they can enjoy the game again?

And why is “chess” in the title of this blog post?

In my next one, I will explain with examples. And, well, brevity may be the soul of wit, but this will take some length.

I open with a question. It’s a question about writing stories, whether in performance, print, or game form.

I had the privilege of hearing Sir Terry Pratchett interviewed during the 2009 North American Discworld Convention. He explained where he stood in the oft-described duality of “plan it all ahead” versus “learn what the story is by writing it,” and he had good words to say about his own “emergent” experiences with the latter. But then he said this:

“And it took some time for [the most recent book] to tell me what it was. Which is not the same as the plot. It is the same as the point. What is the point of the book?”

So I ask: what’s the point of your writing?

This varies by type of writing. Business and technical writing’s “point” is often to effect a change, a fact I learned from the worst diagram I’ve ever seen in a college textbook. Half a page of empty space: on the left was a box for “The way things are now (Present State)”; an arrow stretched across the middle, representing your writing or communication; the arrow then pointed to the right, to a box for “The way you want things to be (Goal State)”; and the plain caption read “Your writing goal is to bring about change.” More realistic steps ensued on the next page, but the point could not have been made worse (or better) than by this one diagram.

Story writing’s “point” is often a moral: a message, a feeling, whatever the reader takes away in the end. Much to the chagrin of Calvin from Calvin & Hobbes, who, in commanding his father to edit his bedtime story, concluded:

“It doesn’t have a moral, does it? I hate being told how to live my life. Skip the moral, too, ok?”

Such a command is futile for most stories, whether we’re talking about a book, a movie, or a play. Imagine a tale where a nasty person betrays all of his or her friends, starts a big fight sequence (that one’s for you, Calvin!), and then dies. The moral of the story? Don’t betray all of your friends, or else you’ll be a nasty person and you might die (fight sequence optional). Understanding the story is synonymous with understanding the point.

Now what about the story in a videogame? What’s the point of game writing?

That may be tricky to ask, certainly if one approaches it from the standpoint that games aren’t “real storytelling.” Of course the versatility of games means they can be anything, from educational presentations to adrenaline fests to solid written books of nothing but story. In a massively-multiplayer online RPG, one may hear the maxim that “you should never be more than two minutes from combat” (held at Blizzard Entertainment and elsewhere) and conclude the writing’s “point” is to transition between fights. This is sometimes true. However, when you look within those big fight sequences, you still might find nasty people who betrayed all their friends.

Videogame writing, if done for a story, shares the same “point”: a moral. Storytelling is storytelling regardless of format. In my next post, I will delve into examples.