Patterns and Design

In the comments discussion on this post by Daniel Davis, the concept of design patterns was randomly (inevitably?) brought up, as a strategy in programming and UX design that might be used in architecture to deal with increasing complexity in a design project.
To quote someone we will spend the rest of the article discussing,

“[a design] pattern describes a problem that occurs over and over again in our environment, and then describes the core of the solution to that problem, in such a way that you can use this solution a million times over, without ever doing it the same way twice.”

The use of patterns not only standardizes nomenclature for common problems or conditions, but also provides an easy reference to proven solutions to those problems and conditions. There has been increasing interest in patterns in computational design circles, and with the recent release of “Elements of Parametric Design” by Woodbury et al., ideas have begun to converge; various attempts at computational design pattern libraries and their corresponding implementations have been started using a wide variety of tools.
This new interest in architectural design patterns actually closes a forty-year circle. Patterns as a concept were invented by the architect Christopher Alexander, who was drawn to the architecture department at Cambridge University after initially planning to study computer science, and who ended up being part of arguably the first computational design program in history. This combined passion for data structures and the built environment led him to describe design as a language of connected patterns, a non-computational relative of his work on shape grammars and parametric design while at Cambridge. Alexander invented a standard structure for the description of a pattern, beginning with the statement of a problem, its enumeration, and finally a “therefore” statement suggesting a solution or desired condition. These patterns could then be linked in many different ways to create a non-formal, abstract description of a comprehensive design solution.

Image
When my uncle (an accomplished architect in Los Angeles) found out I was going to architecture school, he immediately sent me copies of Christopher Alexander’s books A Pattern Language and The Timeless Way of Building. These books, written in the late seventies, are more or less a complete expression of the idea of architectural patterns, and to a young person looking for a book that laid out the “what,” “why,” and “how” of architecture, they were revelatory.
My first year of school, in a history and theory class, I learned that not all ideas are equally welcome. I brought those books up in discussion, and the professor didn’t even bother to respond with more than a “don’t waste my time” look and a quick change of topic. I put the books away and didn’t revisit the concept for years. What happened in the decades between the publication of this book, by a well-respected architectural theoretician through a major academic publisher (Oxford), and my unwelcome invocation in History and Theory 101? Well, a lot.

One clue can be found in this debate at Harvard between Alexander and Peter Eisenman, on the eve of postmodernism in 1982. If I had my way this would be immediate, mandatory reading for every student of architecture, as somehow it’s thirty years later and we still have not gotten past many of the precepts of this argument. Eisenman was a contemporary of Alexander’s at Cambridge in the sixties, and likewise interested in computation, but this debate shows two very different opinions about the value, meaning, and purpose of architecture. Alexander makes a humanist argument for an anthropological basis for building: an idea that there is a deep, human basis for architecture that can be traced, derived, and codified by examining not only buildings but ourselves and our world. Eisenman, ever the deconstructionist, counters with an architecture that is a formal language unto itself, cut free of any dependence on history or humanity, that can be examined, critiqued, and designed purely in the abstract, clear of the confines of an imposed humanistic imperative.
What is surprising in hindsight is that, in the transcript, the crowd seems more inclined to agree with Alexander at the conclusion of the debate. However, anyone remotely familiar with the last thirty years of architectural theory must realize that the view of architecture expressed by Eisenman was the one that ended up being discussed, accepted, and ultimately enshrined in the subsequent decades, to the point where my casual reference to A Pattern Language twenty years later was treated like flatulence in a cathedral.

There are a lot of reasons that a young student of architecture in 1982 (or now) might be more excited by deconstruction than patterns. Alexander’s implementation of the concept is based on a historical determinism that, particularly to someone young and bold, might seem stodgy and limiting. The language of the patterns themselves, with a lecturing tone and repeated “therefores,” is almost a perfect foil for a rebellious youth, and would have a great deal of trouble competing with the (at the time) far edgier world of French philosophy. The book even looks like it was published in the 1940s, presumably because that is the timeless way of printing.
But the main reason we aren’t all using this book as a bible is the content itself. Most of the patterns in the book are plainly useful, in fact aimed directly at the user – the back cover of the book proclaims a “new traditional post-industrial architecture, created by the people.” But the utilitarian quality of many of these patterns – “Built-In Seats,” “Outdoor Rooms,” “Row Houses” – is something of a letdown after the architect-as-industrial-sorcerer paradigm popular in high modernism. A Pattern Language laid architecture out as a rational collection of proven concepts, altered to suit user and context but sharing in ideas as old as humanity itself. Many of the patterns presented in the book are less like the patterns seen in programming and more like rules of thumb. Anybody passionate about their profession will bristle at the idea that what they do can be broken down into a simple set of guidelines, and this reaction is based largely on the truth that the whole is immensely more complicated than the sum of the parts. For all of these reasons, A Pattern Language seems doomed to a status as one of the many cul-de-sacs of architectural theory.

This, however, is not the end of our story. As I suggested above, Alexander’s concepts have had resonance in other fields, such as programming and interface design. The idea of patterns was an excellent way to organize and understand object-oriented programming, a parallel development that is now part of the underlying architecture of most commercial software. Software patterns are noticeably more abstract than architectural patterns – a particular example, like “Reactor,” is more of a general concept that can be applied in multiple ways, scales, and situations. This makes software design patterns ultimately less suggestive, and easier to adapt to a variety of practices.
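
To give a flavor of how abstract these software patterns are, here is a minimal sketch of the Reactor idea in Python, using the standard selectors module: a loop waits for ready I/O and dispatches each event to a registered handler. The echo server, port, and handler names are my own illustrative choices, not anything canonical from the pattern literature.

```python
# A minimal sketch of the Reactor pattern, using Python's standard
# "selectors" module: wait for ready I/O, then dispatch each event
# to whatever handler was registered for it.
import selectors
import socket

sel = selectors.DefaultSelector()

def accept(server_sock):
    """Handler for the listening socket: register new connections."""
    conn, _ = server_sock.accept()
    conn.setblocking(False)
    sel.register(conn, selectors.EVENT_READ, echo)

def echo(conn):
    """Handler for client sockets: echo bytes back, or clean up."""
    data = conn.recv(1024)
    if data:
        conn.sendall(data)
    else:
        sel.unregister(conn)
        conn.close()

server = socket.socket()
server.bind(("localhost", 9999))
server.listen()
server.setblocking(False)
sel.register(server, selectors.EVENT_READ, accept)

while True:  # the reactor proper: demultiplex, then dispatch
    for key, _ in sel.select():
        key.data(key.fileobj)  # call the handler stored at registration
```

Note that the pattern itself says nothing about echo servers or sockets specifically; it only describes the wait-and-dispatch structure, which is exactly what makes it adaptable.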

Image

In the last few years patterns have returned from their roundabout journey back into architecture, this time as an organizational practice for computational design. As such, the patterns themselves are very different, having less to do with the physical form of the architecture itself than with the design methods used in the process of scripting. These new patterns have very little precedent in architectural practice, as they are not tools (like a T-square or the offset command), nor heuristics (like a 30′ column grid or prescribed parking ratios), nor stylistic imperatives (like the classical orders or digital neobaroque NURBS). They are, instead, meta-organizational principles that are at least one step removed from the design of the form itself, such as the idea of a “goal seeker,” or recursion, or a placeholder. In short, these are components of a digital workflow, an outline for how a computational practitioner might work efficiently and flexibly, keeping the organizational BS to a minimum. The use of these patterns also presupposes an algorithmic, computational approach to a design problem, which raises the question – are there non-computational design patterns that fit this meta-organizational mold? Such a pattern would not shape the form of the design, but rather the approach. For instance, “pattern-language based collaborative design workshops” would be one such pattern that might make Mr. Alexander happy.
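
To make one of these meta-organizational patterns concrete, here is a minimal sketch of the recursion pattern in a design-scripting context: recursive subdivision of a rectangle into panels. The Rect type and the split-the-longer-edge rule are my own illustrative choices, not any canonical formulation.

```python
# A minimal sketch of the "recursion" pattern: recursively subdivide
# a rectangle into panels, always splitting along the longer edge.
from typing import List, Tuple

Rect = Tuple[float, float, float, float]  # x, y, width, height

def subdivide(rect: Rect, depth: int) -> List[Rect]:
    """Recursively split a rectangle along its longer edge."""
    x, y, w, h = rect
    if depth == 0:
        return [rect]  # base case: a finished panel
    if w >= h:  # split the longer dimension in half
        halves = [(x, y, w / 2, h), (x + w / 2, y, w / 2, h)]
    else:
        halves = [(x, y, w, h / 2), (x, y + h / 2, w, h / 2)]
    panels: List[Rect] = []
    for half in halves:
        panels.extend(subdivide(half, depth - 1))  # recursive case
    return panels

print(len(subdivide((0.0, 0.0, 100.0, 50.0), 4)))  # 2**4 = 16 panels
```

The pattern is silent about what gets subdivided or why; it only describes the organizational move, which is what distinguishes it from a rule of thumb about form.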

There is one other way to look at the future of patterns in architecture, and that is to look at the way we are already working. The practice of architecture is currently making a laborious transition from drafting (CAD) to parametric modeling (BIM). Such a transition, at the most obvious, lay level, involves a change of language and approach, from working with lines and curves to working with digital objects – walls, roofs, etc. As anyone familiar with one of these parametric environments can tell you, designing completely within a BIM environment can become a process of learning how to work with, against, or around the interface and objects you are given to reach a particular design goal. Frequently one is driven to workarounds that hack the basic intent of the out-of-the-box parametric components. As such, working in such an environment becomes a tripartite process: first you determine the design goal, then you come up with a strategy for modeling that goal, and then you implement. That second step, which did not exist in the world of CAD, is where the kind of patterns seen above live. Given the nature of this workflow, it seems to me that an obvious step in the evolution of BIM is to separate the category of an object from its organization, so that your primitives become complex patterns with known abilities and limitations. We are already working with patterns on a daily basis, but instead of being developed for utility they are the accidental inheritance of a digital environment designed to imitate a basic understanding of building tectonics.

If I’ve made anything clear above — I doubt I have — it’s that, while patterns have a long history in architecture, their current purpose is still very much up in the air. It is my opinion that patterns are a vital ally in making sense of the still-nascent form of digital practice, an organizational tool that mediates between the goals of design and the power of computation. If computational design is going to be a ubiquitous practice, patterns are going to be a necessary part of it. To put it simply, watch this space.


What Makes Design Computational

A recent post at the archinect forum has inspired my inner Ann Landers, and so I will slip on my curly Advisor Wig…

User “jojeg07” asks,

i am confused about the difference between digital and computational design. i had thought they were synonymous. my way of thinking of them now is that UCLA would be “digital” and MIT would be “computational”. is that correct? what exactly makes them different? what is considered computational and what exactly is considered digital design?

This is a good question. I spent the last year doing what was essentially a survey of the past and current state of computational practice, and it is amazing how little consensus there is on terminology and titles. While all of the terms used — digital, computational, parametric, algorithmic — have specific (and easily defined) meanings and associations, the hybrid, messy nature of a design practice often leads to a confusion or misapplication of labels.

The simple answer would be that a “computational designer” uses methods directly associated with programming and computation — algorithms, mathematics, data structures — as a central part of their design process, whereas a “digital designer” has a more normative design methodology, using tools that just happen to be on a computer. The simplicity quickly breaks down, however, when you try to apply these criteria to actual practice. For instance, BIM programs like Revit have a computationally-based tree-like structure, with acyclic parent-child relationships (what has been called “parametrics”). However, most of these programs use a graphic interface that requires no previous knowledge of programming or computer science. Grasshopper is an even more direct example of this strategy, as it is essentially a perfect graphical representation of a directed graph structure in a computer program, similar to Scratch.
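
A toy sketch may help clarify what those acyclic parent-child relationships amount to. The following Python snippet is my own illustration of the dataflow idea – editing a parent value re-evaluates everything downstream – and is not Revit’s or Grasshopper’s actual object model.

```python
# A toy dataflow graph with acyclic parent-child relationships:
# setting a parent's value re-evaluates everything downstream.
class Node:
    def __init__(self, name, fn=None, parents=()):
        self.name, self.fn = name, fn
        self.parents, self.children = list(parents), []
        self.value = None
        for p in self.parents:
            p.children.append(self)  # edges point strictly downstream

    def set(self, value):
        self.value = value
        self._propagate()

    def _propagate(self):
        for child in self.children:
            vals = [p.value for p in child.parents]
            if any(v is None for v in vals):
                continue  # wait until every parent has a value
            child.value = child.fn(*vals)
            child._propagate()  # no cycles, so this terminates

width = Node("width")
height = Node("height")
area = Node("area", fn=lambda w, h: w * h, parents=[width, height])

width.set(10)
height.set(4)
print(area.value)  # 40 -- editing either parent updates the child
```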

Do programs that use sophisticated computational methods inside of a friendly graphical user interface belong in the canon of computational design? Well, given my outsider status as a programmer dilettante, I would say absolutely yes. Even if the user isn’t wholly aware of the computational strategies being utilized, the approach still takes algorithmic and parametric methods into account. Throwing these users out would be like programmers completely dismissing people who code in Python or Processing — excluding methods higher-level than your own leads to a one-upmanship that eventually excludes everyone (as the xkcd cartoon below parodies perfectly).

Rule 35: There is an xkcd cartoon for any tech-related topic.

In the end, as design software becomes more sophisticated, and digital design processes start to resemble data visualization more than drafting, digital design will become (at least partially) inherently computational. I argued in my thesis that computation in design has a future as a context, not as a style. The flip side of that will be that in order for computational design to have lasting impact, we all must become computational designers. And this isn’t going to happen by getting every graduating architect to understand search algorithms and integration methods. It’s going to happen by people bridging the gap and making software that harnesses powerful computational methods to the inherent spatial and visual abilities of design thinkers. These new computational designers might not know if their design problem is NP-complete or not, but they will understand how to use machine methods in design the same way we know how to catch a ball or read a face – as an inseparable, inherent part of their working knowledge.


Thesis Presentation Pt. 3: Stuff I Made, Next Steps

Given the rather diffuse nature of the previous presentation, I chose to do three small projects to explore and illustrate applications of data visualization and game interfaces in design environments. The first project was a fairly dry application of a common data visualization method — treemapping — to a common source of architectural data — program documentation:

Image

I blogged about this project previously so I won’t go into too much detail. The project did help to explore the tension between having a tool that does not unnecessarily suggest formal solutions, and having a tool that is formal/spatial enough to be used in a design methodology. It was also a good way to come to terms with the nature of program diagrams, and the various competing concerns that exist in the initial program explorations – assumptions about the most important relationships within the program have enormous impact upon the underlying structure of a design project.

The next project has also been presented previously on this blog:

Image

Due to its popularity in the last year, I felt I had to do at least a simple exploration of game-like interfaces using spring algorithms. The project brought with it some useful insights. The first and simplest is that interface strategies have associations – the side-scrolling 2d version of the applet was regarded by users as a toy, while a 3d isometric version was treated as a serious tool, regardless of the fact that they had identical interfaces and functionality. The development of the applets also provided valuable experience in the difficulty and payoff involved in live, additive interfaces. Both of these characteristics I feel are vital to promoting creative exploration, but they require both strict attention to runtime costs and the development of a natural and quick interface, with no lengthy searches for commands or tools. Finally, research into developing a web-based multiuser version of the tool has convinced me (as it has some other folks) that WebGL, WebSockets, and the Canvas tools in HTML5 are going to be the new power tools in creative software development. Watch that space, people.

Image

My final (main) software project attempted to take all of the ideas above, alongside some additional computational strategies, and make a usable tool. The base algorithm I chose to work with was tensor field streamline integration. This is a technique that has been used in MRIs, structural analysis, and wind flow simulation, but my plans for its use are more similar to what has been done in computer modeling and graphics. The procedural generation of tensor fields is now commonplace for remeshing 3d surfaces or making painterly or sketchy effects on raster images (think Photoshop filters). This method of pattern generation is simultaneously powerful, requiring a minimum of input for a maximum of effect, while remaining intuitive and predictable, even as it increases in complexity. It is also possible to maintain a real-time editing environment, which was vital. This algorithm has a myriad of possibilities in architecture, from surface discretization to street map generation to pattern generation. I chose to make a Nolli Map generator, as it is entertaining and easily understood, and it also gave me the chance to play with displacement mapping and pixel shaders. As can be seen from the image above, the project used a pipeline approach where user input is first used to generate a bidirectional tensor field. The input comes in the form of lines and singularities, with the option to pick an extent of effect.
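
For the curious, here is a toy sketch of the core operation: tracing a streamline through a 2D direction field. The circulating field, the plain Euler stepping, and the constants are all my own simplifications; the actual tool used evenly spaced hyperstreamlines and more careful integration.

```python
# A toy streamline tracer: march a point through a 2D direction field.
import math

def field(x, y):
    """Unit direction at (x, y): here, a simple circulating pattern."""
    dx, dy = -y, x
    mag = math.hypot(dx, dy) or 1e-9
    return dx / mag, dy / mag

def trace(x, y, step=0.05, n=200):
    """Euler-integrate a streamline starting at (x, y)."""
    pts = [(x, y)]
    px, py = field(x, y)  # remember the previous marching direction
    for _ in range(n):
        vx, vy = field(x, y)
        if vx * px + vy * py < 0:  # tensor directions are bidirectional:
            vx, vy = -vx, -vy      # keep marching the same way
        x, y = x + vx * step, y + vy * step
        px, py = vx, vy
        pts.append((x, y))
    return pts

line = trace(1.0, 0.0)
print(line[0], line[-1])  # traces a near-circular orbit around the origin
```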

Image

Further input can be provided in the form of a bitmap (also with a range of effect), which is then mapped into the quadrangles created by the initial tensor fields. Multiple bitmaps are combined in an additive fashion using a pixel shader if more than one is provided.
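
As a rough illustration of what that additive combination does, here is a pure-Python stand-in for the per-pixel sum the shader performs; the actual project did this on the GPU, but the arithmetic is the same.

```python
# Additive bitmap combination: sum per pixel and clamp to 8-bit range.
def blend_add(maps):
    """Combine equal-sized greyscale bitmaps (lists of rows) additively."""
    h, w = len(maps[0]), len(maps[0][0])
    return [[min(sum(m[y][x] for m in maps), 255)
             for x in range(w)] for y in range(h)]

a = [[0, 128], [255, 64]]
b = [[100, 200], [10, 20]]
print(blend_add([a, b]))  # [[100, 255], [255, 84]]
```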

Image

Finally, a three-dimensional view uses displacement mapping to show the spatial implications of the drawn black and white map.

Image

The project included peer-to-peer functionality that allowed multiple users to work simultaneously on the same map, with live updating — this is a particularly interesting feature of environments that use implicit rules, as there is no need to “lock” individual elements to preserve relationships. There is also a notation function and the ability to export to DXF.

My takeaway from all of the above is twofold. First, I am convinced that we are about to see a revolution in lightweight, interactive digital design environments that incorporate rich feedback and the possibility of implicit algorithmic integration. Second, I am certain that the future of digital design will lie in the ability of designers to navigate between multiple tools and methods, which will make knowledge of the underlying mathematical and programming techniques all the more important. Architects must make good use of these new quantitative techniques, not only as a way to maintain agency and relevancy, but also because it will ground our nascent computational design culture and improve the status of the built world.

I’ve taken the last six weeks off to get a breather, move to the other side of the country, obtain a job, and secure a mortgage. I’m about 80% of the way there, and once those goals are complete I’m going to have to take a long hard look at where I want to take this research. First things first, I’m going to release some of the tensor field code — I’ve got a Grasshopper C# version that needs to be cleaned up and packaged for general consumption. I’ll consider releasing the larger tensor fields project, but honestly it’s such a mess I’m afraid to do it (the important bits of code are in the appendix of my thesis document, which you can find here). After all of that is done, I’m kind of at a loss. Revit API work? Architectural data visualization? Mobile apps? It’s a wide world out there. I’d better start taking some big steps.


Thesis Presentation Pt. 2: The Monolith vs The Ecosystem (Heuristics, Feedback, and Implicit Rules)

I left off the last post with a suggestion to find a way to leverage the methods of game environments to make richer, more sophisticated design environments. If one were looking for a real-world example of this dynamic in action, it would be hard to find one more perfect than FoldIt. Designed by a multidisciplinary team at the University of Washington dubbed the “UW Center for Game Science,” FoldIt was created to help solve one of the most complex problems in biology: predicting the folded structure of proteins. Properly predicting the forms of amino acid chains could potentially enable not only a better understanding of cell biology, but also the design or discovery of powerful new drugs. Predicting the folded structure involves finding the “lowest energy” solution for a particular chain – the best-fit structure where hydrophobic parts of the protein are hidden and hydrophilic portions exposed. Unfortunately, the biomechanics are still only partially understood, and thus algorithmic solutions are incomplete and often get stuck in local minima. Often the solutions to these sticking points were obvious, even to people with little understanding of the problem. The UW team, seeing this, decided that instead of additional computing power they would harvest the enormous visual and spatial processing abilities of human gamers by designing a competitive game based around finding the proper folded structure of proteins. This game would act as a kind of talent search for protein folding prodigies. Within weeks of release, their site had registered hundreds of thousands of downloads, and a devoted user base. Within a year, their user teams had derived solutions that beat even the best algorithmic models. One of the best teams was headed up by a thirteen-year-old, who claimed that he found solutions because “they just look right.” In educational parlance, this is referred to as perceptual learning — the brain’s ability to model a complex system intuitively.

Image

This strategy was not successful merely because people are great at these tasks — it was successful because human ability was married to an interface that fires on all cylinders, enabling real solutions to real problems by people unfamiliar with the context of the problem, simply by providing clear feedback, an intuitive interface, and good tools. Some of the most important aspects of this interface have been called out above. FoldIt incorporates a wealth of visualization techniques, such as motion, glyphs, ghosting, and even auditory feedback, to give a clear indication of the state of the game. Clear nonspatial feedback is given with a numerical score and a score history (with, of course, an all-important “undo” function). While many of the moves are done manually by “grabbing” protein chains, there are limited stochastic search (“wiggle”) tools that allow a player to find slightly better solutions in the neighborhood of their current position. There is also a sophisticated help and user forum interface built within the game, as well as multiple social tools, such as a leaderboard and chat functionality, that all combine to produce a competitive but friendly community of engaged users.
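
For a sense of what a “wiggle”-style tool does under the hood, here is a toy stochastic local search in Python: random small perturbations are kept only when they improve the score. The quadratic energy function is invented for illustration; FoldIt’s real scoring model is vastly more involved.

```python
# A toy "wiggle": keep a random small perturbation only when it
# lowers the energy, i.e. hill-climb near the current position.
import random

def wiggle(position, energy, trials=1000, step=0.1):
    """Stochastic local search in the neighborhood of `position`."""
    best, best_e = position[:], energy(position)
    for _ in range(trials):
        cand = [x + random.uniform(-step, step) for x in best]
        e = energy(cand)
        if e < best_e:  # keep the move only if it improves the score
            best, best_e = cand, e
    return best, best_e

# Start from a deliberately bad position and let it settle.
pos, e = wiggle([3.0, -2.0], lambda p: sum(x * x for x in p))
print(round(e, 4))  # close to the global minimum at 0
```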

Image

The work of the Boston-based firm Utile shows a sensitivity to interaction design and the incorporation of heuristics into design strategies, in the service of seemingly everyday issues of profit, space planning, zoning, and sustainability, that warrants an extended look. “Heuristics” in this case represent both architectural rules of thumb (such as 30-foot column grids) as well as more sophisticated rules based upon related fields, such as development pro formas or zoning codes. Utile has developed sophisticated internal tools to help provide rapid feedback on design decisions, often within client meetings, to help ensure that collaborative design decisions are grounded in objective reality. For developer clients, they have integrated a simple pro forma tool within the schematic design modeling tools in Revit, allowing for an immediate comparison of the monetary value of different general strategies for a specific parcel. They have found that this process, instead of being limiting, is ultimately liberating for their design process, as it provides a solid basis at the start that prevents a client from making drastic changes farther down the line, using efficiency as a justification.
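
As a hedged sketch of the kind of instant feedback being described, here is a crude pro forma comparison in Python. Every number, name, and formula below is invented for illustration; Utile’s actual tool lives inside Revit and reads areas straight from the model.

```python
# A crude, invented pro-forma comparison across massing strategies.
def yield_ratio(floor_area_sf, efficiency, rent_per_sf, cost_per_sf):
    """Rough annual revenue-to-cost ratio for one massing option."""
    leasable = floor_area_sf * efficiency  # net of cores and corridors
    revenue = leasable * rent_per_sf
    cost = floor_area_sf * cost_per_sf
    return revenue / cost

options = {
    "tower": yield_ratio(120_000, 0.78, 42.0, 310.0),
    "bar": yield_ratio(140_000, 0.85, 38.0, 260.0),
    "podium": yield_ratio(160_000, 0.82, 35.0, 240.0),
}
for name, ratio in sorted(options.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {ratio:.3f}")  # rank the massing strategies
```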

Tim Love, a principal at Utile, has also experimented with using heuristics and feedback as didactic tools in a Yale design studio. At the outset of the studio, students were provided with an “Urbanism Starter Kit” comprising BIM families of several building types, which incorporated many levels of heuristic devices, from a 30-foot structural grid to floor-plate width rules to exit strategies. The students were required to use these digital “building blocks” at the earliest stage of design, with a specific ratio of square footages for each type. Immediate feedback from the BIM model was used to ensure that the rules were being followed, and enabled live tweaking of the designs. Only when these restricted designs had reached a sufficient level of performance were the students allowed to modify the structures themselves. Despite severely limiting the freedom of the designs at the most conceptual, schematic phase, the final output of the studio displayed a range and variety that belied its origin. In addition, this limited beginning ensured that the projects maintained a certain level of performance as an urban agglomeration, allowing for direct comparison of different solutions as well as freeing the discussion of the projects from purely practical concerns. Finally, the act of struggling against the given constraints was in itself a valuable experience for the students, as it introduced a range of strategies for overcoming the limitations of given structures that, while common in practice, are rarely explored in academia.

Image

If you are looking for confirmation that visualization and feedback are increasingly important in design, you need look no further than Autodesk Labs, the R&D wing of the billion-dollar CAD behemoth that provides the vast majority of architectural software solutions in the U.S. Several recent Labs releases, most notably the Galileo and Vasari projects, are lightweight, simple schematic environments that incorporate multiple levels of analysis and feedback within the tool itself. Vasari is particularly interesting, as it is ultimately the combination of several products that Autodesk has already developed: a simplified version of Revit’s schematic design package, with the Nucleus physics engine from Maya and environmental modeling tools from Ecotect. It is, however, more than the sum of its parts. Autodesk has, first of all, made the program free, and has packaged it as a simple executable, which promotes sharing among users. They have also built in some collaborative features, such as a dedicated user forum. Finally, the incorporated analysis is geared towards quick, even real-time results, which promotes a rapid response loop in the design process that approaches a game-like environment. Particularly in the incorporation of the Nucleus physics tools, these methods start to look more like the kind of implicit rules you’d see in a video game, rather than the explicit, selectively applied ones you normally see in parametric design environments. This distinction is vital in transforming a program from a passive tool to an active, symbiotic collaborator, as it allows for the kind of perceptual learning described above.

Image

Even if it is fairly certain that these feedback methods will make it into design software, the strategy for implementation is still very much up in the air. Currently there are two main strategies being used by design software companies to produce complex products. The reigning solution for the last few decades has been that of the “monolith,” an all-encompassing, proprietary, black-box program that attempts to have a feature that meets every possible user need. This worked well when the goal was 2d CAD, but newer parametric software has tended to be overburdened, buggy, and confusing, as the development teams go chasing too many rabbits down too many holes. The end result of this strategy ends up being something similar to Digital Project/CATIA, with multiple “workbenches,” tiered pricing structures, baffling documentation, and a learning curve that is essentially a vertical wall. In addition, monolithic software often ends up so large it has difficulty making transitions to take advantage of changing technology or practice. The alternative “software as ecosystem” strategy is currently exemplified in the world of design software by Rhino. The idea is very different – create a stable, well-documented core program, ideally with some open-source component and a fully developed API, and enable independent developers to create extensions and plug-ins to extend the capabilities of the software. This leads to a more flexible, nimble platform for design, and is perhaps better suited to the end needs of a forward-thinking architecture firm, where multiple programs are used in series or parallel to achieve different goals at different times. This diagram by SHoP of their workflow reveals this central dilemma of digital practice:

Image

What you have there is easily six figures’ worth of design software, much of it with massive feature overlap. I’m not saying that a cutting-edge digital practice shouldn’t have a big toolkit, but some of the nodes in that diagram were purchased to use only a tiny portion of their core functionality — it’s like buying a Ferrari to use the radio. Obviously, a few flexible programs would suit this sort of workflow far better than a big pile of inflexible monolithic software. However, the monolithic approach can have advantages as well — it is more stable, easier to standardize, and often more predictable. Apple is essentially a monolith, with the notable exception of the App Store (which, I might add, is rigidly policed). There are a lot of people (including myself) who are more interested in a stable and easily usable interface for their phone than in the flexibility and hackability of an open-source, device-independent communication OS such as Android. This is, I feel, the reason that there is no ecosystem-based parametric software solution as fully featured as DP or Revit — BIM implementation in offices values stability and reliable standards much more than flexibility or openness.

Schematic design environments, on the other hand, value flexibility, independence, and adaptability, which suggests that Rhino or something like it will likely be seen on more and more screens in practice in the near future. Innovations such as Kangaroo and Grasshopper have no real equivalent in anything produced by Dassault, Autodesk, or Bentley, and I feel that is unlikely to change. The primary danger I see in the incorporation of implicit rules and rich feedback in design environments is that designers might not “push back” enough, leading to uninspired and predictable results. If the incorporation of these methods were relegated to a “black box” in the software, that would be a far more likely outcome. However, when they are incorporated in a plug-in, users are more likely to customize and understand the implications of the environment they are creating, leading to a richer and less constrained outcome.

Next: Pt. 3: Stuff I Made, Next Steps


Thesis Presentation: Pt.1: Data Visualization and Cognitive Co-Processors

Image

The title image for my presentation is a little facetious. My thesis is not about World of Warcraft as a paradigm of design, nor is it even about the possibilities of game engines as design environments (though I’d like to see an attempt). The image was chosen partially as a provocative seat-filler, and partially as a shot across the bow: self-seriousness will not be considered an important metric when judging the relative fitness of computational design environments.

But before we move on to the next image, let’s give this one its due. Because, orcs and white tigers aside, World of Warcraft has some of the most sophisticated interface design of any consumer software in history. This is an interface that has its own API. Its user base is so dedicated that there are not only open-source skins and add-ons, there are open-source IDEs dedicated to producing new skins and add-ons. This is to clicking and scrolling what muscle cars are to driving – so much innovation piled on top of itself that the question of “authentic” or “original” loses meaning and the only real metric becomes relative performance. You can write methods and tools into this interface that contextualize themselves to the environment around you, or to the time of day, or to what kind of mood you are in. In short, the interface is the environment. And what an environment it is, with embedded physics, dynamic lighting, and hundreds of visual effects and glyphs that let you know exactly what is going on, as clearly and quickly as possible. And on top of all of this is a sophisticated, dynamic method of communication that combines aspects of IM, forums, text messages, and email into a single communicative package.

So the question becomes: why the hell don’t I get to use this at work?

Image

With the advent of powerful building information modeling and design development interfaces, as well as a panoply of other new technologies, architects now have access to unbelievable amounts of embedded or intrinsic information. Enhanced geometric information like curvature, slope, and curl. Structural data from finite element analysis. Material data, cost data, project schedules and team makeups. Environmental analysis results. GIS contextual info about rainfall, wind, demographics. Satellite imagery, geotagging, Foursquare check-ins. FOI requests on security cameras pointed at public space. And yet, with this ocean of information, we have the tiniest of straws to access it in a dynamic and understandable way. The above image shows fabrication data about a curtain wall, but in order to be published the drawing had to be run through an entire secondary graphical process to make the information understandable. Why is this necessary?

Image

Here we have four images of the same project, all produced by Autodesk Ecotect, showing entirely different types of information: wind pressure, wind temperature, insolation, and visibility. Despite the wide variety of data types, the same method is being used to show all four of the sets: a rainbow heatmap. First of all, this is perhaps one of the worst ways to accurately show quantitative information. The colors used all have different brightness values, distorting the scale and making certain equal steps in value seem farther apart than others. It’s also hard to convert to a machine-readable form, making it difficult to incorporate into a computational strategy. It’s notoriously sensitive to the von Bezold spreading effect. And what do you do if you’re colorblind? At the very least they could have used ColorBrewer (or, dare I say, made it greyscale). But even then they’d be eliminating valuable information and making it impossible to compare any of these datasets in the same image. Why not use a method that also shows wind direction to talk about pressure or temperature? Why not indicate the point of view when talking about visibility? Heatmaps are used for two reasons: a) they look “scientific,” and b) they are easy to program. If we’re going to understand any of the quantitative data underlying design, we have to do better than this.*
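
The brightness complaint is easy to demonstrate. Here is a quick Python check, applying standard relative-luminance weights to five typical rainbow stops (my own choice of stops, not Ecotect’s actual ramp):

```python
# Perceived brightness of a naive rainbow ramp vs. a greyscale ramp.
stops = [(0, 0, 255), (0, 255, 255), (0, 255, 0),
         (255, 255, 0), (255, 0, 0)]  # blue, cyan, green, yellow, red

def luminance(rgb):
    r, g, b = rgb
    return 0.2126 * r + 0.7152 * g + 0.0722 * b  # standard weights

print([round(luminance(s)) for s in stops])
# [18, 201, 182, 237, 54] -- brightness jumps around as the value rises
print([round(i * 255 / 4) for i in range(5)])
# [0, 64, 128, 191, 255] -- a greyscale ramp rises uniformly
```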

Image

I would argue that in order for data-driven, computational methods to ever make an impact in the practice of architecture, the basic definition of architectural output has to be changed from a geometric idea of drawing to one of data visualization. The definition above allows for the incorporation of nonspatial or extraspatial data, while maintaining the agency of the designer as priority one. The computer is recast as a form of augmentation – what visualization researchers call the “cognitive co-processor.” The design process becomes one of synthesis and feedback, where intuitive exploration is combined with algorithmic optimization to produce results that surpass the abilities of either method on its own.

If one is inclined to understand design as a process of problem solving, then the problem domain is always going to be ill-defined, discontinuous, and unpredictable. Using data visualization and rich feedback gives the designer the ability to explore non-“optimal” solutions in the service of other requirements or desires that cannot be described in an algorithmic process.

Image

So, what does all of this have to do with games? A visualization environment is by definition providing guidance to reach a goal, however unfocused or poorly defined that goal may be. The exploration of possible solutions or paths to that goal, with accurate and well-defined feedback to help in the exploration, is more similar to a puzzle or game environment than it is to drafting. In fact, games themselves, if well designed and popular, can showcase the very best of the human ability to find patterns and extrapolate results. The above diagrams were drawn by Jamey Pittman to describe the AI and interface details of Pac-Man. In the decades after its release, Pac-Man was so thoroughly analyzed and reverse-engineered that players were able to achieve perfect scores and essentially beat the game – a task that the game designers had tried to make impossible. More incredibly, the majority of this analysis was done purely through gameplay, and was then later confirmed through examination of the source code. We have within our skulls the most powerful visual and spatial analysis tools in the world. It is time that our software took advantage of that fact.

Jane McGonigal, in a 2010 TED talk, claimed that the average young person in a country with a strong gaming culture will have spent 10,000 hours playing video games, which is incidentally the same amount of time they will have spent in school between fifth grade and high school graduation. This “parallel track of education” is attuning the latest generation to pattern-finding within a certain type of interface. If we are to leverage this ability, it might be a good idea to examine the interfaces themselves.

Coming Soon: Pt.2: The Monolith vs. the Ecosystem – FoldIt, Utile, and a million-dollar gamble.

Image

*If you want to learn more about the human visual system, please please read “Visual Thinking for Design” by Colin Ware. I would make this book required reading for any designer if I could.


Thesis: Done

Image

Well, after 23 slides, 80+ pages, and hundreds (thousands?) of lines of code, my thesis is complete! The presentation went well and the discussion brought up a number of ideas on where I could go from here.

The thesis is entitled “Games as Design Environments,” and looks at data visualization and game-like interaction methods as possible templates for the future of digital design environments. I’m a bit exhausted on the topic and so won’t go into detail today, but if you’re curious I’ve put the research document in my Issuu account for your easy perusal:

http://static.issuu.com/webembed/viewers/style1/v2/IssuuReader.swf

Watch this space for future posts where I walk through the presentation in detail.


aFloat @ MIT+150 is done!

Well, after a solid week of work, piles of LEDs, acrylic, and 3d prints, and a half mile of wire, our installation went up! This project was an interactive light and sound installation designed to complement one of my favorite buildings in Boston – the MIT Chapel by Saarinen. The project involved a grid of 150 LEDs with linkages connecting them in an interactive network. Sensors embedded in the grid sense movement and create responsive light and sound effects. Our team included Otto Ng from MIT, and Senya Zeitsev and Dena Molnar from my program. The project grew out of a final project for our Augmented Architecture class last semester. Thank you to the dozens of people who helped with this project, and also to George for the fantastic photos! Apparently we had over 2,000 visitors on a single night!

Until I can get PHP 5 running on this server, you’ll have to be OK with a single image and a link to the Facebook gallery that may or may not be public… oh well.

Image


CS171 Final Project – Treemapping Architectural Program Documents

Image
Treemapping a Middle School

It seems like all I do here is post Processing applets. Well, here’s another one. This is the final project for my CS visualizations class, an attempt to use treemapping to explore architectural program documents. The full project presentation is here, which includes some background and discussion of the goals and implementation.

I started out thinking I was going to make an adjacency diagram tool that used graph representation to explore the topology of an architectural program. This sort of idea has been explored fairly thoroughly by Aedas R&D as well as by a BVN Usyd team. However, after some research and test implementation I changed my approach to treemapping, which drops adjacency in favor of nested grouping. The reasons behind this were based not only on an interpretation of the data at hand, but also on the relationship between diagrams and built work in architecture, and a desire to make something that was not formally suggestive.

Graph representation has at least a fifty-year history in computational design, with the initial apotheosis occurring in the mid-1970s at Cambridge University, in research labs aimed primarily at optimizing architecture for walking distances. This was part of a larger search for architectural problems that had objective solutions, in order to create a school of architecture with a solid scientific basis. While the work done by the Cambridge researchers was rigorous and interesting, walking distance ultimately faded as an area of research, partially because its optimal organizations often flew in the face of other architectural requirements (such as constructability), but also because the issue of adjacency was more easily solved through telephone and intercom systems. For a more in-depth exploration of this history, please read the “Fenland Tech” article by Sean Keller from MIT Press, or better yet his GSD thesis from a few years ago.

Compounding this historical warning flag was the fact that producing graph layouts of existing structures is actually a lossy, reductionist way of showing floor plan information – one early reviewer called the idea “idempotent.” There are also some inherent flaws in the method, such as how to represent circulation space, particularly looped hallways. The topology of a building’s room layout is invariably more complicated than a simple graph layout can really indicate, and usually some important detail is lost in the conversion. The BVN blog has a great post on the limitations of adjacency graphs.

I looked instead more closely at visualizing nonspatial data – that is, the horrifically boring, overly detailed program requirement documents that get handed over to architects on pretty much any large-scale project. I concentrated in particular on public school documents, as they are inextricably linked to public policy and funding, and the building type is overdue for some re-imagination. On review, it became quickly clear that adjacency doesn’t really play a strong part in program requirements, and where it is mentioned, the relationships are unclear (adjacency can be defined by nearness, visibility, connectedness, etc.). I chose instead to take advantage of the basic structure of the document – a series of nested groups – and visualize this ownership hierarchy rather than some imagined or invented connectivity. Doing this also avoided a major pitfall of program visualization, which is that bubble diagrams or adjacency graphs can be too suggestive of a final form, leading to the architect “building the diagram.” This method takes maximum advantage of the spatial encoding of data while being as neutral to the idea of a building’s form as possible.
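
For anyone curious about the mechanics, here is a minimal slice-and-dice treemap sketch in Python over a toy nested program document. The school program, areas, and alternating split rule are invented for illustration; the actual applet was written in Processing.

```python
# A minimal slice-and-dice treemap over a toy nested program document.
def leaf_sum(node):
    """Total area of a leaf value or a nested group."""
    if isinstance(node, (int, float)):
        return node
    return sum(leaf_sum(v) for v in node.values())

def treemap(node, x, y, w, h, horizontal=True, out=None):
    """Allocate a rectangle to every leaf, proportional to its area."""
    out = [] if out is None else out
    if isinstance(node, (int, float)):
        out.append((round(x, 1), round(y, 1), round(w, 1), round(h, 1)))
        return out
    total = leaf_sum(node)
    offset = 0.0
    for child in node.values():
        frac = leaf_sum(child) / total
        if horizontal:  # slice this level along x...
            treemap(child, x + offset * w, y, frac * w, h, False, out)
        else:           # ...and the next level along y
            treemap(child, x, y + offset * h, w, frac * h, True, out)
        offset += frac
    return out

program = {  # square footages, nested by ownership group
    "academic": {"classrooms": 12000, "labs": 4000},
    "shared": {"gym": 8000, "cafeteria": 5000},
}
for rect in treemap(program, 0, 0, 100, 60):
    print(rect)
```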

In the end, I’m not that enamored with the tool I’ve created, but the above insight makes the entire process worthwhile.


quick update/teaser

I apologize for the complete lack of activity here in the last month, as thesis and other projects are currently kicking me silly. I promise a barrage of updates as each project comes to term in the next few weeks. For now, here’s a sneak peek at one of the many little things I’m working on at the moment – evenly spaced interactive tensor field hyperstreamlines. Woot.
Image

quick update/teaser

Yet More Applets – now with springs!

Image
Image
It should be clear to pretty much everyone on the planet by now that I’m a big fan of the Processing visual programming language. Well, here is more evidence. In the process of trying out algorithms for my thesis, I made a couple of games using spring force / attractor algorithms where you can make little structures and play with catenary curves. Both games use a similar interface; the main difference is that one is 3d (don’t be fooled by that third dimension, I think the 2d version is actually more fun).
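
For the algorithmically curious, here is a bare-bones Python sketch of the spring-and-node relaxation at the heart of these games: Euler integration of Hooke’s-law springs plus gravity, with fixed anchor nodes. All the constants are illustrative stand-ins for the applets’ tuned values, and the real applets are of course in Processing.

```python
# Bare-bones spring-and-node relaxation with fixed anchors.
import math

def relax(nodes, springs, fixed, steps=2000, dt=0.01,
          k=50.0, rest=1.0, gravity=-9.8, damping=0.98):
    """nodes: list of [x, y]; springs: list of (i, j) index pairs."""
    vel = [[0.0, 0.0] for _ in nodes]
    for _ in range(steps):
        force = [[0.0, gravity] for _ in nodes]  # gravity everywhere
        for i, j in springs:
            dx = nodes[j][0] - nodes[i][0]
            dy = nodes[j][1] - nodes[i][1]
            dist = math.hypot(dx, dy) or 1e-9
            f = k * (dist - rest)  # Hooke's law, signed
            fx, fy = f * dx / dist, f * dy / dist
            force[i][0] += fx
            force[i][1] += fy
            force[j][0] -= fx
            force[j][1] -= fy
        for n, (fx, fy) in enumerate(force):
            if n in fixed:
                continue  # anchored nodes never move
            vel[n][0] = (vel[n][0] + fx * dt) * damping
            vel[n][1] = (vel[n][1] + fy * dt) * damping
            nodes[n][0] += vel[n][0] * dt
            nodes[n][1] += vel[n][1] * dt
    return nodes

# A "string" of nodes pinned at both ends settles into a catenary-like sag.
chain = [[float(i), 0.0] for i in range(6)]
links = [(i, i + 1) for i in range(5)]
print(relax(chain, links, fixed={0, 5})[2])  # an interior node has sagged
```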

You can find the 2d game here and the 3d game here. I’m going to make a separate applet page one of these days, promise.

The games are pretty easy to play:

  • Left click places or moves a node (click and drag to move).
  • Center click changes a node between fixed and free modes. In the 2d version, center click and drag will pan the canvas.
  • Right click deletes a node (or a spring if you click at its center).
  • Clicking an existing node and then making a new node or picking another existing node will link the two, adding nodes as necessary to make up the distance. If you delete a node, it will try to keep the surrounding nodes connected (you can use this to shorten a “string” of nodes).
  • Spacebar toggles between fixed and free node placement. C clears the board.
  • There are some other buttons to mess with constants, look at the source if you are curious (try toggling between the presets in the 2d version with the 1 and 2 keys).
  • Happy springing!
