NightSky and Knytt Stories). To be able
to find mechanics that humans have
invented, even on such a small-scale
experiment, is extremely encouraging.
At this point Mechanic Miner can
generate new game mechanics by looking at a game’s program code, and it
can test these mechanics by trying to
use them to solve an impossible level.
But there’s more we can get out of this
technique. Up until now, Mechanic
Miner has generated lots of game mechanics, and sorted the good from the
bad by testing them on a single level design. We can turn this process around
to generate level designs instead. Mechanic Miner can generate lots of potential game levels, and use a single
game mechanic to test which levels
are good and which are bad. If we take
one of the mechanics invented by Mechanic Miner, such as gravity flipping,
and start randomly generating levels,
we can test levels to see if they need the
mechanic in their solution. First, we
play the level using the basic template
game. If we can’t solve it, we play it
again with the new mechanic added in.
If that results in the game level being
solvable, then we can infer that we need
the mechanic to complete the level.
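The two-playthrough test described above can be sketched in a few lines. This is a toy model, not Mechanic Miner's actual implementation: the real system runs a simulated player against the level, whereas here a game is just a set of mechanic names and a level is "solvable" when the game provides every mechanic its solution depends on. All names are illustrative.

```python
def requires_mechanic(base_mechanics, new_mechanic, level_requirements):
    """A level 'requires' the new mechanic when the template game
    cannot solve it but the game extended with the mechanic can."""
    def solvable(mechanics):
        # Toy stand-in for playing the level with a simulated agent:
        # solvable when the game supplies every needed mechanic.
        return level_requirements <= mechanics

    if solvable(base_mechanics):
        return False  # already solvable: the new mechanic adds nothing
    return solvable(base_mechanics | {new_mechanic})

# A level whose solution needs both jumping and gravity flipping
base = {"run", "jump"}
assert requires_mechanic(base, "gravity_flip", {"jump", "gravity_flip"}) is True
assert requires_mechanic(base, "gravity_flip", {"jump"}) is False      # too easy
assert requires_mechanic(base, "gravity_flip", {"teleport"}) is False  # still impossible
```

The key design point survives the simplification: a mechanic is judged useful only when it is the difference between failure and success.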
Of course, we can add in extra evaluation criteria if we want, too. Perhaps
we want to design levels that have lots
of danger in them, or that require long
pathways to complete them. Mechanic
Miner uses a technique called computational evolution to produce level designs and game mechanics. This allows
us to combine lots of different criteria
into a single evaluation step. Using the
solvability of a level as the central part
of its evaluation, however, lets Mechanic Miner focus on designing levels that
require a certain mechanic to solve.
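An evolutionary loop of this shape can be sketched as follows. Everything here is invented for illustration: the level representation, the scoring weights, and the mutation operator are toy assumptions, and the solvability check is reduced to a stored flag rather than a simulated playthrough. The structure it demonstrates is the one described above: solvability-with-the-mechanic acts as a gatekeeper, and the remaining criteria are blended into a single evaluation score.

```python
import random

random.seed(0)  # reproducible toy run

def fitness(level):
    # Gatekeeper: levels that do not need the mechanic score zero,
    # no matter how dangerous or long they are.
    if not level["requires_mechanic"]:
        return 0.0
    # Extra criteria (danger, path length) combined into one score.
    return 0.6 * level["danger"] + 0.4 * level["path_length"]

def mutate(level):
    # Nudge one measurable property, clamped to [0, 1].
    child = dict(level)
    key = random.choice(["danger", "path_length"])
    child[key] = min(1.0, max(0.0, child[key] + random.uniform(-0.2, 0.2)))
    return child

def evolve(population, generations=20):
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        survivors = population[: len(population) // 2]
        population = survivors + [
            mutate(random.choice(survivors))
            for _ in range(len(population) - len(survivors))
        ]
    return max(population, key=fitness)

population = [{"requires_mechanic": random.random() < 0.5,
               "danger": random.random(),
               "path_length": random.random()}
              for _ in range(10)]
best = evolve(population)
print(f"best fitness: {fitness(best):.2f}")
```

Because the gatekeeper zeroes out any level that doesn't need the mechanic, selection pressure pushes the population toward levels that are both mechanic-dependent and strong on the secondary criteria.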
Levels and mechanics produced by
Mechanic Miner made up the core of A
Puzzling Present, a game released on
desktop platforms and the Google Play
store in December 2012. The game featured three worlds, each with a unique
mechanic, and 10 levels in each that required the player to use the mechanic.
Nearly 6,000 people played the game
over the Christmas period, and it entered the Android games charts. This
was not only a rewarding opportunity
to see people interacting with our research; it also let us collect
lots of interesting data through surveys
and game activity logs. We’ll be analyzing this data to improve Mechanic
Miner, and understand how people interact with the game mechanics.
Mechanic Miner is a very basic system for generating game mechanics
and modifying program code, but it
shows us what promising and exciting
results we can get from even the simplest of systems. It also lets us think
about the next steps we could take to
move this area of research forwards.
Currently, the system only changes
individual game variables. In order to
really look at the question of software
that writes software we would need to
expand this to generate sequences of
expressions, create new objects in code,
and modify existing program code too.
We should also aspire to more complex output. While the game mechanics
generated by Mechanic Miner are interesting, they don’t match up to the variety and depth of mechanics in the average game released today. Games like
Valve’s Portal 2 feature complex game
mechanics that combine and complement one another in interesting ways.
We’ll need to expand our evaluation
techniques, to do more than just test
black-and-white scenarios, in order to
get more interesting results out of this.
There’s also the thornier issue of
whether usefulness is a good way to
evaluate game mechanics. Angry Birds’
catapult isn’t just a useful mechanic; it’s
fun. It’s enjoyable to ping cartoon birds
across a map and knock things over.
Mechanic Miner can detect usefulness
automatically, but fun is a very difficult
(and controversial) thing to formalize.
For me, one question dominates
future work over all others—what do
game mechanics mean to players?
When we play games, we do not
engage directly with the program
code that runs them. When we press
the jump button, we see a character
animate on screen, and its position
change as it launches itself up into the
air. Behind the scenes, numbers are
changing and little else. We don’t perceive Mario as an image file with X/Y
coordinates and a velocity variable. We
experience him as an Italian plumber
who can jump and collect gold coins.
Remember Anna Anthropy’s description of games: “experiences created by rules.” Games are filled with
rules, including game mechanics,
which convey experiences to the player. Not just big experiences like feeling
sad or triumphant, but little pockets
of micro meaning. A Mario animation
moves up the screen, and we interpret
this as jumping. Mario touches an image of a mushroom, and it disappears
while Mario grows bigger. We interpret
this as him eating it. We interpret Mario’s growth as an increase in strength.
This is drawing on a lifetime of experience in the real world (and often, an experience of the unwritten vocabulary of
videogames) that ANGELINA does not
have right now.
If we want to build software that
can create games autonomously,
right down to the level of code, then
it seems that we will eventually need
to find a way for software like ANGELINA to understand this bridge between the game’s internal logic and
the real-world concepts and experiences that those rules and systems
represent. This bridging knowledge,
the translation between the game
world and the real world, will not
come easily for computationally creative systems. But the ability to do
so will allow ANGELINA and systems
like it to create games with genuine
meaning and artistic depth, and allow computational creativity to flourish within the games industry.
Michael Cook is a Ph.D. student at Imperial College’s
Computational Creativity Group, researching techniques
for automating the design of videogames. He holds an
M.Eng. in computing from Imperial College. Updates on his
research, published papers, and playable games made by
his Ph.D. project ANGELINA can be found online at http://