“The more you learn, the more you know you don’t know” is one of the lessons I have learned from this project so far.
Inspired by the Theseus and Minotaur maze recreated by computer programmers, I concluded my project proposal with the goal of designing (or letting the computer design) game mazes using genetic algorithms. Since then I have looked through a myriad of online sources related to mazes, both in terms of how different genres of maze affect the player and their navigation, and in terms of the processes and algorithms for generating different types of mazes. (There are more complex scenarios, such as the logic maze, where the navigable space depends on rules and variables, but these are far beyond the scope of this project.)
The Design of Mazes
Maze design is by no means new, and countless predecessors have scrutinized mazes in much greater depth and breadth than I could ever achieve. Among the resources I came across, a computer programmer named Walter Pullen has put together a very detailed glossary of maze classifications and algorithms: http://www.astrolog.org/labyrnth/algrithm.htm. I learned a lot from each section of his posts, and the “Routing” section is particularly interesting and helpful in regard to the scope of my project.
He specified a few styles of routing on his website:
Search and Path Finding Algorithms for Evaluating the Maze
I looked into multiple path-finding algorithms because, in order to evaluate each generation of mazes, criteria such as the total number of paths, the number of dead ends and loops, and the lengths of the paths are needed to score each phenotype of the maze DNA.
I studied both depth-first search and breadth-first search and how they are implemented in code. However, both methods are good for finding one ideal path, as opposed to storing every single path through the maze. And in their recursive implementations, there isn’t a built-in mechanism for recording or keeping track of previously explored paths.
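To make that limitation concrete, here is a minimal Python sketch (the project itself is in Processing, and the four-cell maze layout below is purely illustrative): both searches return a single path and discard everything else they explored.

```python
from collections import deque

# A tiny maze as an adjacency list: each cell maps to the cells
# reachable from it (no wall in between). Purely illustrative.
maze = {
    (0, 0): [(0, 1)],
    (0, 1): [(0, 0), (1, 1)],
    (1, 1): [(0, 1), (1, 0)],
    (1, 0): [(1, 1)],
}

def bfs_path(maze, start, goal):
    """Breadth-first search: returns one shortest path, not all paths."""
    queue = deque([[start]])
    visited = {start}
    while queue:
        path = queue.popleft()
        cell = path[-1]
        if cell == goal:
            return path
        for nxt in maze[cell]:
            if nxt not in visited:
                visited.add(nxt)
                queue.append(path + [nxt])
    return None  # goal unreachable

def dfs_path(maze, cell, goal, visited=None):
    """Recursive depth-first search: returns the first path it finds."""
    if visited is None:
        visited = set()
    visited.add(cell)
    if cell == goal:
        return [cell]
    for nxt in maze[cell]:
        if nxt not in visited:
            rest = dfs_path(maze, nxt, goal, visited)
            if rest is not None:
                return [cell] + rest
    return None

print(bfs_path(maze, (0, 0), (1, 0)))  # one path, nothing more
```

Notice that the `visited` set only records which cells were touched, not which paths led there, which is exactly the missing bookkeeping described above.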
“Iterative deepening depth-first search” is an approach that I found quite interesting but might not have time to explore in this project. It is a depth-bounded approach that repeatedly runs depth-first search with an increasing depth limit on the tree (search graph).
Graph Theory Related to Mazes
Seeing the maze as a graph is an interesting idea because doing so prioritizes the abstract interconnections between nodes (or vertices); the visual form of these connections is only a secondary step.
The upside of this approach is that the designer (or the program) can observe and arrange the nodes at a conceptual level. The downside is that the maze geometry constrains the maximum number of edges each vertex can have.
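As a sketch of this graph view (the grid size and open passages below are made up for illustration), the maze becomes an adjacency list, and the rectangular geometry caps every vertex at four edges:

```python
# A 2x2 grid maze viewed as a graph: vertices are cells, edges are
# open passages. In a rectangular grid each vertex has at most 4
# edges (up/down/left/right) -- the geometric constraint above.
cols, rows = 2, 2
passages = {((0, 0), (1, 0)), ((0, 0), (0, 1)), ((1, 0), (1, 1))}

adjacency = {(c, r): [] for c in range(cols) for r in range(rows)}
for a, b in passages:
    adjacency[a].append(b)  # passages are two-way,
    adjacency[b].append(a)  # so add both directions

max_degree = max(len(nbrs) for nbrs in adjacency.values())
print(max_degree)  # 2 here; can never exceed 4 in a rectangular grid
```

Working at this level, crossovers and mutations could operate on the edge set directly, and the drawing step only has to render whatever passages survive.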
My Approach So Far
Randomly generate the walls:
I want to represent the maze by creating a series of Cell objects displayed in columns and rows. Each Cell object (or instance?) is responsible for two walls (right and bottom), and stores attributes such as its index ([col][row]) and state (a boolean visited); the checkNeighbor() function has the current cell check its neighbor cells’ walls and its own walls to determine the accessible branches.
The first few generations of the maze should be made with randomly assigned walls.
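A rough Python sketch of this design, with details the post doesn’t spell out filled in by assumption (the 50% wall probability and the helper name open_neighbors are mine; the project’s actual checkNeighbor() is in Processing):

```python
import random

class Cell:
    """Grid cell that owns only its right and bottom walls; the
    left/top walls belong to the neighboring cells."""
    def __init__(self, col, row):
        self.col, self.row = col, row
        self.right = random.random() < 0.5   # wall present? (assumed 50/50)
        self.bottom = random.random() < 0.5
        self.visited = False

def open_neighbors(grid, col, row):
    """Cells reachable from (col, row): cross our own right/bottom
    walls, or the left neighbor's right wall / top neighbor's bottom."""
    cols, rows = len(grid), len(grid[0])
    out = []
    if col + 1 < cols and not grid[col][row].right:
        out.append((col + 1, row))
    if row + 1 < rows and not grid[col][row].bottom:
        out.append((col, row + 1))
    if col > 0 and not grid[col - 1][row].right:
        out.append((col - 1, row))
    if row > 0 and not grid[col][row - 1].bottom:
        out.append((col, row - 1))
    return out

# A first-generation maze: every wall assigned at random.
grid = [[Cell(c, r) for r in range(4)] for c in range(4)]
```

Storing only two walls per cell keeps the genome compact, since shared walls are never represented twice.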
Another way of indexing the cells that I thought of but haven’t implemented yet is to label each cell by its distance (number of necessary steps) from the origin. Maybe indexing cells this way can provide another kind of variable in the DNA array and produce more effective genetic crossovers and mutations.
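A sketch of that labeling in Python, assuming the maze is already available as an adjacency structure (the layout below is illustrative): a breadth-first flood fill assigns each cell its step count from the origin.

```python
from collections import deque

# Illustrative maze: a single corridor of four cells.
maze = {
    (0, 0): [(1, 0)],
    (1, 0): [(0, 0), (1, 1)],
    (1, 1): [(1, 0), (0, 1)],
    (0, 1): [(1, 1)],
}

def distance_labels(maze, origin):
    """Label every reachable cell with its step distance from origin."""
    dist = {origin: 0}
    queue = deque([origin])
    while queue:
        cell = queue.popleft()
        for nxt in maze[cell]:
            if nxt not in dist:          # first visit is the shortest
                dist[nxt] = dist[cell] + 1
                queue.append(nxt)
    return dist

print(distance_labels(maze, (0, 0)))
```

Cells missing from the result are unreachable, which could itself feed into the fitness score.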
I’m currently stuck on testing and scoring these generated mazes. I want to design a search algorithm that identifies and stores each path (from Start to a dead end or the exit). There are some issues with stacks and recursion that I need to troubleshoot…
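One way around the stack/recursion trouble, sketched in Python with an illustrative four-cell maze: keep a single shared path list, copy it whenever the walk reaches the exit or a dead end, then pop to backtrack.

```python
# Illustrative maze with one dead-end branch and one exit.
maze = {
    (0, 0): [(1, 0), (0, 1)],
    (1, 0): [(0, 0)],           # dead end
    (0, 1): [(0, 0), (1, 1)],
    (1, 1): [(0, 1)],           # treat (1, 1) as the exit
}

def all_paths(maze, cell, exit_cell, path=None, found=None):
    """Collect every simple path from the start cell to the exit
    or to a dead end."""
    if path is None:
        path, found = [], []
    path.append(cell)
    neighbors = [n for n in maze[cell] if n not in path]
    if cell == exit_cell or not neighbors:
        found.append(list(path))  # copy: the shared list keeps mutating
    else:
        for nxt in neighbors:
            all_paths(maze, nxt, exit_cell, path, found)
    path.pop()                    # backtrack
    return found

paths = all_paths(maze, (0, 0), (1, 1))
dead_ends = [p for p in paths if p[-1] != (1, 1)]
```

From `paths`, the scoring criteria above fall out directly: path count, dead-end count, and path lengths.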
The current version of the code for Algorithmic Maze is here:
After using machine learning algorithms to generate the 2D maze, I want to extrude all the “walls” of the maze and transform it into a game space that people can immediately play with.
This is a randomly generated 3D maze, made in Processing P3D, that provides a more or less “immersive” navigation experience for the player. I envision the machine learning algorithms being used, directly or indirectly, in game engines such as Unity, helping to generate VR spaces in place of the traditional game designer.
(More project descriptions to be updated…)
For my final project for Nature of Code: Intelligence and Learning, I would like to investigate how genetic algorithms can be used to design landscapes in video games. (Despite the fascinating prospect of convolutional neural networks and the mesmerizing ways of training and using pre-trained CNN models, I’m actually most interested in the first half of this course, where we designed evolutionary processes, set the parameters for “natural selection,” and visualized the evolution over time.)
On one hand, because game design concerns the chances and rules that affect how players behave in the game and how difficult it is to accomplish a task or to win, I think it would be very interesting and rewarding to try to translate those considerations and design decisions into machine learning algorithms.
On the other hand, I came across Lev Manovich’s Navigable Space (1998) while in undergrad and was very inspired. He compared and contrasted Doom and Myst (both published in 1993) and how their ideas of “space” differ: in Doom, the player is prompted to navigate the space and overcome enemies in the shortest time possible, and there was even an “open source” feature whereby users could create their own levels; in Myst, the space is predefined, and the player “wanders” in order to find the information needed to decode the challenges. But the two games have in common that the player’s experience is based on “spatial journeys.”
While the blueprints of video game spaces are defined by game designers, based on their skills, experience, and user tests, I wonder if genetic algorithms can help create a controlled blend of geographical complexity, number of challenges (enemies), rewards, and overall difficulty in the design of a “navigable space.” What’s more, algorithms can also be used to “test” the generated game space, allowing rapid iterations.
This project proposal is also inspired by the idea that gamification, or thinking about real-world phenomena by abstracting them as games, has been a fruitful way for people to better understand what happens around them. For example, Guy Debord famously designed the Game of War at the late stage of his career (1987), recreating the military situation between northern and southern France in a particular historical period. As media scholar Alexander Galloway describes it:
“He [Debord] positions chess firmly in the classical period of kings and corporal fiat, while the Game of War belongs to a time of systems, logistical routes, and lines of communication. In chess, spatial relationship between pieces are paramount; the ‘knight’s tour,’ for instance, serves as a classic mental projection of pattern and recombination. In the Game of War, Debord maintained this attention to spatial relationships, but added a degree of complexity. The ‘liaisons’ in the Game of War are not simply the projections of possible troop maneuvers, but a communicative apparatus linking together far-flung fighting divisions. If chess’s king is an intensive node, one that must be fortified through the protection of its allied footmen, then Debord’s arsenals are extensive nodes. Yes, they too must be protected, but they also serve as the origin point for a radiating fabric of transition. In chess ‘the king can never remain in check,’ but in the Game of War ‘liaisons must always be maintained.'”
In fact, Galloway himself recreated Debord’s game as an online two-player game written in Java, and won a Golden Nica from Ars Electronica…
Due to the complexity of the subject, I think it would be better to first start with some simpler games, such as recreating the PAC-MAN maze using algorithms.
(To be updated.)
Notes in Justification of Putting the Audience Through a Difficult Evening was written by Wallace Shawn in 1986 as the preface to his play Aunt Dan and Lemon.
Shawn begins by discussing our reaction to watching talks by historical tyrants and dictators, and how it can resemble watching a comedy. Indeed, he points out that such a reaction is dramatically different and isolated from the original context: the suffering of people and the tragedy of society at that particular moment in history.
Once removed from the time and place where the powerful deceptions took place, Shawn suggests, it is hard to imagine the magnitude of the politician’s impact on his original audience, and therefore hard for the contemporary audience to experience the sense of deception.
Obviously, if we can make ourselves believe that all of Hitler’s words were lies, we can confidently conclude that he was an evil person. However, once a fraction of his words starts to appear truthful to us, the whole picture we have conjured up of him can easily be called into question. The idea that Hitler loved his dog, for example, is in itself a trivial fact, but it can crumble our sense of certainty about what kind of person we believe he was.
Hence, the power of a play, as Shawn believes, is not merely to present a fully furnished little world for the audience to peek into and comment on. He believes that the audience can feel closer to the characters in the play if they have to think simultaneously with them and encounter the same mental confusions the characters do. Therefore, if the playwright ends the play with a definite answer, or a satisfying and rational ending, then the audience’s task of untangling their thoughts is cut short and their engagement with the story is less intimate. So by concluding the play with a sad ending, the main character failing to defeat her erroneous thoughts, the playwright hopes to hand the job over to the audience and make the struggle a shared experience.
Curiously, the reading reminds me of a recent rendition of Shakespeare’s Othello at the New York Theatre Workshop, directed by Sam Gold (I didn’t win a ticket to it but read about it a lot online). In this production, the set design revamped the “black box” theater and turned the entire space into one plywood enclosure; even the seats are made of the same material as the “stage.”
The play now has a different theatricality, as the audience and the actors are perceptually in the same room and under the same light. As the director puts it: “In Othello it takes the form of this kind of dramatic irony, where the story is told to the audience by the antagonist, so they know before the other characters do what’s happening, and that makes the audience complicit.”
The removal of the stage’s fourth wall and the integration of the performance space with the audience extend the play from the actors into the minds of the audience. I think this reconfiguration of theater space is analogous to the open ending of Wallace Shawn’s play.