The title image for my presentation is a little facetious. My thesis is not about World of Warcraft as a paradigm of design, nor is it even about the possibilities of game engines as design environments (though I’d like to see someone try). The image was chosen partly as a provocative seat-filler and partly as a shot across the bow: self-seriousness will not be considered an important metric when judging the relative fitness of computational design environments.
But before we move on to the next image, let’s give this one its due. Because, orcs and white tigers aside, World of Warcraft has some of the most sophisticated interface design of any consumer software in history. This is an interface with its own API. Its user base is so dedicated that there are not only open-source skins and add-ons, there are open-source IDEs dedicated to producing new skins and add-ons. This is to clicking and scrolling what muscle cars are to driving: so much innovation piled on top of itself that the question of “authentic” or “original” loses meaning, and the only real metric becomes relative performance. You can write methods and tools into this interface that contextualize themselves to the environment around you, to the time of day, or to what kind of mood you are in. In short, the interface is the environment. And what an environment it is, with embedded physics, dynamic lighting, and hundreds of visual effects and glyphs that let you know exactly what is going on, as clearly and quickly as possible. And on top of all of this is a sophisticated, dynamic method of communication that combines aspects of IM, forums, text messages, and email into a single communicative package.
So the question becomes: why the hell don’t I get to use this at work?
With the advent of powerful building information modeling and design development interfaces, as well as a panoply of other new technologies, architects now have access to unbelievable amounts of embedded or intrinsic information. Enhanced geometric information like curvature, slope, and curl. Structural data from finite element analysis. Material data, cost data, project schedules and team makeups. Environmental analysis results. GIS contextual info about rainfall, wind, and demographics. Satellite imagery, geotagging, Foursquare check-ins. FOI requests on security cameras pointed at public space. And yet, with this ocean of information, we have only the tiniest of straws to access it in a dynamic and understandable way. The above image shows fabrication data about a curtain wall, but in order to be published, the drawing had to be run through an entire secondary graphical process to make the information understandable. Why is this necessary?
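As an aside, much of that intrinsic data is nearly free to compute. Here is a minimal sketch in Python with numpy (the height field below is made up, a stand-in for a real surface model) that derives slope directly from sampled geometry in a handful of lines:

```python
import numpy as np

# Hypothetical 10 m x 10 m sampled surface standing in for a real model.
x = y = np.linspace(0.0, 10.0, 101)
X, Y = np.meshgrid(x, y)
Z = np.sin(X) * np.cos(Y)  # made-up "roof" height field z(x, y)

# Partial derivatives of the height field give slope at every sample.
dz_dy, dz_dx = np.gradient(Z, y, x)
slope = np.degrees(np.arctan(np.hypot(dz_dx, dz_dy)))
print(f"max slope: {slope.max():.1f} degrees")
```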
Here we have four images of the same project, all produced by Autodesk Ecotect, showing entirely different types of information: wind pressure, wind temperature, insolation, and visibility. Despite the wide variety of data types, the same method is being used to show all four datasets: a rainbow heatmap. First of all, this is perhaps one of the worst ways to accurately show quantitative information. The colors used all have different brightness values, distorting the scale and making certain equal steps in value seem farther apart than others. It is also notoriously hard to convert to a form that is machine-readable, making it difficult to incorporate into a computational strategy. And it is highly sensitive to the von Bezold spreading effect. And what do you do if you’re colorblind? At the very least they could have used ColorBrewer (or, dare I say, made it greyscale). But even then they’re eliminating valuable information and making it impossible to compare any of these datasets in the same image. Why not use a method that also shows wind direction to talk about pressure or temperature? Why not indicate the point of view when talking about visibility? Heatmaps are used for two reasons: a) they look “scientific,” and b) they are easy to program. If we’re going to understand any of the quantitative data underlying design, we have to do better than this.*
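The brightness complaint is easy to check. Below is a minimal sketch in Python with matplotlib (‘jet’ is the classic rainbow ramp; ‘viridis’ is a newer, perceptually uniform colormap included for contrast) that counts how many times perceived brightness changes direction along each ramp. The rainbow fails; greyscale trivially passes:

```python
import numpy as np
import matplotlib.pyplot as plt

def luma(cmap_name, n=256):
    """Rec. 709 luma (perceived brightness) along a colormap ramp."""
    rgba = plt.get_cmap(cmap_name)(np.linspace(0.0, 1.0, n))
    return 0.2126 * rgba[:, 0] + 0.7152 * rgba[:, 1] + 0.0722 * rgba[:, 2]

for name in ("jet", "gray", "viridis"):
    steps = np.diff(luma(name))
    # A scale you can trust gets steadily brighter (or steadily darker);
    # count how many times the ramp changes direction instead.
    reversals = int(np.sum(np.diff(np.sign(steps)) != 0))
    print(f"{name:>7}: brightness reversals = {reversals}")
```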
I would argue that in order for data-driven, computational methods to ever make an impact in the practice of architecture, the basic definition of architectural output has to change from a geometric idea of drawing to one of data visualization. The definition above allows for the incorporation of nonspatial or extraspatial data, while maintaining the agency of the designer as priority one. The computer is recast as a form of augmentation, what visualization researchers call the “cognitive co-processor.” The design process becomes one of synthesis and feedback, where intuitive exploration is combined with algorithmic optimization to produce results that surpass the abilities of either method on its own.
If one is inclined to understand design as a process of problem solving, then the problem domain is always going to be ill-defined, discontinuous, and unpredictable. Using data visualization and rich feedback gives the designer the ability to explore non-“optimal” solutions in the service of other requirements or desires that cannot be described in an algorithmic process.
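To make the division of labor concrete, here is a toy sketch in Python; every name in it is hypothetical. The algorithm proposes and ranks candidates, but a requirement the objective cannot encode gets the final say:

```python
import random

def daylight_score(design):
    """Hypothetical analytic objective; pretend 0.7 is its optimum."""
    return -abs(design - 0.7)

def designer_accepts(design):
    """Stand-in for a requirement the objective can't encode."""
    return design < 0.5

candidates = [random.random() for _ in range(1000)]
best = max(candidates, key=daylight_score)

# The algorithm ranks; the designer's veto decides. The winner is the
# best-scoring candidate that also satisfies the unstated requirement.
chosen = max((d for d in candidates if designer_accepts(d)),
             key=daylight_score, default=best)
print(f"algorithmic optimum ~ {best:.2f}; chosen with feedback ~ {chosen:.2f}")
```

A pure optimizer would always land near 0.7; with feedback in the loop, a lower-scoring but acceptable design can win instead, and that is exactly the point.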
So, what does all of this have to do with games? A visualization environment, by definition, provides guidance toward a goal, however unfocused or poorly defined that goal may be. The exploration of possible solutions or paths to that goal, with accurate and well-defined feedback to help in the exploration, is more similar to a puzzle or game environment than it is to drafting. In fact, games themselves, if well designed and popular, can showcase the very best of the human ability to find patterns and extrapolate results. The above diagrams were drawn by Jamey Pittman to describe the AI and interface details of Pac-Man. In the decades after its release, Pac-Man was so thoroughly analyzed and reverse-engineered that players were able to achieve perfect scores and essentially beat the game, a task the designers had tried to make impossible. More incredible still, the majority of this analysis was done purely through gameplay, and only later confirmed through examination of the source code. We have within our skulls the most powerful visual and spatial analysis tools in the world. It is time our software took advantage of that fact.
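Pittman’s findings are concrete enough to restate as code. Here is a sketch, in Python, of the chase-mode targeting rules his dossier documents (tile coordinates with y increasing downward; only the function shapes are mine):

```python
import math

UP = (0, -1)  # screen coordinates: y increases downward

def blinky_target(pac, facing, blinky):
    """Blinky chases Pac-Man's own tile."""
    return pac

def pinky_target(pac, facing, blinky):
    """Four tiles ahead of Pac-Man -- except when Pac-Man faces up, where
    the original game's overflow bug also shifts the target 4 tiles left."""
    (px, py), (dx, dy) = pac, facing
    if facing == UP:
        return (px - 4, py - 4)
    return (px + 4 * dx, py + 4 * dy)

def inky_target(pac, facing, blinky):
    """Take the tile two ahead of Pac-Man (with the same up bug), then
    double the vector from Blinky to that pivot."""
    (px, py), (dx, dy) = pac, facing
    ax, ay = px + 2 * dx, py + 2 * dy
    if facing == UP:
        ax -= 2
    bx, by = blinky
    return (2 * ax - bx, 2 * ay - by)

def clyde_target(pac, clyde, corner):
    """Chase like Blinky when more than 8 tiles away; otherwise retreat
    toward his scatter corner at the maze's bottom-left."""
    return pac if math.dist(clyde, pac) > 8 else corner

# e.g. Pac-Man at tile (13, 26) facing up: Pinky aims up AND left.
print(pinky_target((13, 26), UP, None))  # -> (9, 22)
```

Players deduced rules this precise, bugs included, from nothing but watching the screen. That is the pattern-finding capacity our design software leaves on the table.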
Jane McGonigal, in a 2010 TED talk, claimed that the average young person in a country with a strong gaming culture will have spent 10,000 hours playing video games, which is incidentally the same amount of time they will have spent in school between fifth grade and high school graduation. This “parallel track of education” is attuning the latest generation to pattern-finding within a certain type of interface. If we are to leverage this ability, it might be a good idea to examine the interfaces themselves.
Coming Soon: Pt. 2: The Monolith vs. the Ecosystem – Foldit, Utile, and a million-dollar gamble.
*If you want to learn more about the human visual system, please please read “Visual Thinking for Design” by Colin Ware. I would make this book required reading for any designer if I could.