Thesis Presentation Pt. 3: Stuff I Made, Next Steps

Given the rather diffuse nature of the previous presentation, I chose to do three small projects to explore and illustrate applications of data visualization and game interfaces in design environments. The first project was a fairly dry application of a common data visualization method, treemapping, to a common source of architectural data, program documentation:

Image

I blogged about this project previously, so I won’t go into too much detail. The project did help to explore the tension between having a tool that does not unnecessarily suggest formal solutions and having a tool that is formal/spatial enough to be used in a design methodology. It was also a good way to come to terms with the nature of program diagrams and the various competing concerns at play in initial program explorations: assumptions about the most important relationships within the program have an enormous impact on the underlying structure of a design project.
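For the curious, here is a minimal sketch of the kind of slice-and-dice layout that underlies a tool like this. It is a simplification, not the thesis code: ProgramItem is a hypothetical stand-in for whatever data model you extract from the program document, and a full treemap would recurse into nested categories, alternating the split axis at each level.

```csharp
using System.Collections.Generic;
using System.Linq;

// Hypothetical program element: just a name and a target area.
class ProgramItem
{
    public string Name;
    public double Area;
    public double X, Y, W, H; // layout result, filled in by Layout()
}

static class Treemap
{
    // One level of a slice-and-dice layout: divide the rectangle along one
    // axis in proportion to each item's share of the total area. A full
    // treemap recurses into nested categories, alternating the axis.
    public static void Layout(List<ProgramItem> items,
        double x, double y, double w, double h, bool horizontal)
    {
        double total = items.Sum(i => i.Area);
        double offset = 0;
        foreach (var item in items)
        {
            double share = item.Area / total;
            if (horizontal)
            {
                item.X = x + offset; item.Y = y;
                item.W = w * share; item.H = h;
                offset += w * share;
            }
            else
            {
                item.X = x; item.Y = y + offset;
                item.W = w; item.H = h * share;
                offset += h * share;
            }
        }
    }
}
```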

The next project has also appeared previously on this blog:

Image

Given the popularity of spring algorithms over the last year, I felt I had to do at least a simple exploration of game-like interfaces built on them. The project brought with it some useful insights. The first and simplest is that interface strategies carry associations: the side-scrolling 2d version of the applet was regarded by users as a toy, while a 3d isometric version was treated as a serious tool, even though the two had identical interfaces and functionality. The development of the applets also provided valuable experience in the difficulty and payoff involved in live, additive interfaces. I feel both of these characteristics are vital to promoting creative exploration, but they require strict attention to runtime costs as well as a natural, quick interface with no lengthy searches for commands or tools. Finally, research into developing a web-based multiuser version of the tool has convinced me (as it has some other folks) that WebGL, WebSockets, and the Canvas tools in HTML5 are going to be the new power tools in creative software development. Watch that space, people.
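To make the “live” part concrete, here is a minimal sketch of the per-frame update at the heart of a spring interface, assuming simple Hooke’s-law springs with explicit Euler integration and velocity damping. The applets used the same general idea, though not this exact code:

```csharp
using System;
using System.Collections.Generic;

// Minimal 2D vector for the sketch.
struct Vec2
{
    public double X, Y;
    public Vec2(double x, double y) { X = x; Y = y; }
    public static Vec2 operator +(Vec2 a, Vec2 b) => new Vec2(a.X + b.X, a.Y + b.Y);
    public static Vec2 operator -(Vec2 a, Vec2 b) => new Vec2(a.X - b.X, a.Y - b.Y);
    public static Vec2 operator *(Vec2 a, double s) => new Vec2(a.X * s, a.Y * s);
    public double Length() => Math.Sqrt(X * X + Y * Y);
}

class Node { public Vec2 Pos, Vel; }
class Spring { public Node A, B; public double RestLength, Stiffness; }

static class SpringSim
{
    // One explicit-Euler step: accumulate Hooke's-law forces, damp, integrate.
    public static void Step(List<Node> nodes, List<Spring> springs,
        double dt, double damping = 0.95)
    {
        var forces = new Dictionary<Node, Vec2>();
        foreach (var n in nodes) forces[n] = new Vec2(0, 0);

        foreach (var s in springs)
        {
            Vec2 d = s.B.Pos - s.A.Pos;
            double len = d.Length();
            if (len < 1e-9) continue;
            // Force proportional to displacement from rest length.
            double f = s.Stiffness * (len - s.RestLength);
            Vec2 dir = d * (1.0 / len);
            forces[s.A] += dir * f;
            forces[s.B] += dir * -f;
        }

        foreach (var n in nodes)
        {
            n.Vel = (n.Vel + forces[n] * dt) * damping;
            n.Pos += n.Vel * dt;
        }
    }
}
```

Keeping this step cheap is exactly the runtime-cost concern mentioned above: it runs every frame, for every spring, while the user is dragging nodes around.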

Image

My final (main) software project attempted to take all of the ideas above, alongside some additional computational strategies, and make a usable tool. The base algorithm I chose to work with was tensor field streamline integration. This technique has been used in MRI analysis, structural analysis, and wind flow simulation, but my plans for its use are closer to what has been done in computer modeling and graphics. The procedural generation of tensor fields is now commonplace for remeshing 3d surfaces or producing painterly and sketchy effects on raster images (think Photoshop filters). This method of pattern generation is powerful, requiring a minimum of input for a maximum of effect, while remaining intuitive and predictable even as it increases in complexity. It is also possible to maintain a real-time editing environment, which was vital. The algorithm has myriad possibilities in architecture, from surface discretization to street map generation to pattern generation. I chose to make a Nolli Map generator, as it is entertaining and easily understood, and it also gave me the chance to play with displacement mapping and pixel shaders. As can be seen from the image above, the project used a pipeline approach in which user input is first used to generate a bidirectional tensor field. The input comes in the form of lines and singularities, with the option to pick an extent of effect.
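The core trick that makes the field bidirectional is to store each constraint as a doubled angle, so a direction and its opposite blend to the same value. Here is a minimal sketch of that sampling step plus a naive Euler streamline trace; the Gaussian falloff and fixed step size are my simplifications, not necessarily what the thesis tool used:

```csharp
using System;
using System.Collections.Generic;

// Each point sampled along a user stroke contributes a direction and a
// falloff radius. Storing the direction as a doubled angle means theta and
// theta + 180 degrees blend to the same value, which is what makes the
// field bidirectional rather than a plain vector field.
class LineConstraint
{
    public double X, Y;    // a point on the stroke
    public double Theta;   // stroke direction there, in radians
    public double Radius;  // extent of effect
}

struct Pt { public double X, Y; public Pt(double x, double y) { X = x; Y = y; } }

static class TensorField
{
    // Blend nearby constraints with a Gaussian falloff, then halve the
    // doubled angle to recover a field direction at (x, y).
    public static double SampleTheta(List<LineConstraint> cons, double x, double y)
    {
        double sx = 0, sy = 0;
        foreach (var c in cons)
        {
            double dx = x - c.X, dy = y - c.Y;
            double w = Math.Exp(-(dx * dx + dy * dy) / (c.Radius * c.Radius));
            sx += w * Math.Cos(2 * c.Theta);
            sy += w * Math.Sin(2 * c.Theta);
        }
        return 0.5 * Math.Atan2(sy, sx);
    }

    // Naive fixed-step Euler streamline trace. The dot-product check keeps
    // each step pointing the same way as the last one, so the bidirectional
    // field cannot flip the trace back on itself.
    public static List<Pt> Streamline(List<LineConstraint> cons,
        double x, double y, double step, int maxSteps)
    {
        var pts = new List<Pt> { new Pt(x, y) };
        double prevDx = 1, prevDy = 0;
        for (int i = 0; i < maxSteps; i++)
        {
            double theta = SampleTheta(cons, x, y);
            double dx = Math.Cos(theta), dy = Math.Sin(theta);
            if (dx * prevDx + dy * prevDy < 0) { dx = -dx; dy = -dy; }
            x += step * dx; y += step * dy;
            pts.Add(new Pt(x, y));
            prevDx = dx; prevDy = dy;
        }
        return pts;
    }
}
```

A real implementation would also treat singularities as separate radial or circular basis fields and stop traces near them; this sketch covers only the line constraints.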

Image

Further input can be provided in the form of a bitmap (also with a range of effect), which is then mapped into the quadrangles created by the initial tensor field. If more than one bitmap is provided, they are combined additively using a pixel shader.
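The additive combine itself is trivial. Here is a CPU stand-in for what the shader does per fragment, using System.Drawing purely so the sketch is self-contained (the real version runs per-pixel on the GPU):

```csharp
using System;
using System.Drawing;

static class BitmapCombine
{
    // Additive combine of two grayscale maps, clamped at white. In the tool
    // this logic runs per-fragment in a pixel shader; Bitmap is used here
    // only for illustration.
    public static Bitmap Add(Bitmap a, Bitmap b)
    {
        var result = new Bitmap(a.Width, a.Height);
        for (int y = 0; y < a.Height; y++)
            for (int x = 0; x < a.Width; x++)
            {
                int va = a.GetPixel(x, y).R; // treat the red channel as gray
                int vb = b.GetPixel(x, y).R;
                int v = Math.Min(255, va + vb);
                result.SetPixel(x, y, Color.FromArgb(v, v, v));
            }
        return result;
    }
}
```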

Image

Finally, a three-dimensional view uses displacement mapping to show the spatial implications of the drawn black-and-white map.
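Displacement mapping here just means sampling the map to push grid vertices upward. A minimal CPU sketch, assuming black represents built mass and white represents open space; the tool does this on the GPU, and maxHeight is a placeholder parameter:

```csharp
using System.Drawing;

static class Displacement
{
    // Sample the black-and-white map at each grid vertex and convert
    // brightness to height: darker pixels (built mass) rise, while white
    // (open space) stays at the ground plane. Assumes nx, ny >= 2.
    public static double[,] HeightGrid(Bitmap map, int nx, int ny, double maxHeight)
    {
        var heights = new double[nx, ny];
        for (int j = 0; j < ny; j++)
            for (int i = 0; i < nx; i++)
            {
                int px = i * (map.Width - 1) / (nx - 1);
                int py = j * (map.Height - 1) / (ny - 1);
                double gray = map.GetPixel(px, py).R / 255.0;
                heights[i, j] = (1.0 - gray) * maxHeight;
            }
        return heights;
    }
}
```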

Image

The project included peer-to-peer functionality that allowed multiple users to work simultaneously on the same map, with live updating. This is a particularly interesting feature of environments that use implicit rules, as there is no need to “lock” individual elements to preserve relationships. There is also a notation function and the ability to export to DXF.
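Because the whole map is regenerated from a shared list of constraints, peers only need to exchange edits rather than geometry. A hypothetical sketch of what such a wire format could look like; the message grammar here is invented for illustration, not the thesis tool’s actual protocol:

```csharp
using System.Net.Sockets;
using System.Text;

// Hypothetical wire format for sharing edits: since every peer regenerates
// the map from the same constraint list, only the edits travel over the
// network, and no element ever needs to be locked.
static class EditSync
{
    public static void SendAddConstraint(TcpClient peer,
        double x, double y, double theta, double radius)
    {
        string msg = $"ADD {x} {y} {theta} {radius}\n";
        byte[] bytes = Encoding.UTF8.GetBytes(msg);
        peer.GetStream().Write(bytes, 0, bytes.Length);
    }
}
```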

My takeaway from all of the above is twofold. First, I am convinced that we are about to see a revolution in lightweight, interactive digital design environments that incorporate rich feedback and the possibility of implicit algorithmic integration. Second, I am certain that the future of digital design will lie in the ability of designers to navigate between multiple tools and methods, which will make knowledge of the underlying mathematical and programming techniques all the more important. Architects must make good use of these new quantitative techniques, not only as a way to maintain agency and relevance, but also because it will ground our nascent computational design culture and improve the status of the built world.

I’ve taken the last six weeks off to get a breather, move to the other side of the country, obtain a job, and secure a mortgage. I’m about 80% of the way there, and once those goals are complete I’m going to have to take a long hard look at where I want to take this research. First things first, I’m going to release some of the tensor field code: I’ve got a Grasshopper C# version that needs to be cleaned up and packaged for general consumption. I’ll consider releasing the larger tensor fields project, but honestly it’s such a mess I’m afraid to do it (the important bits of code are in the appendix of my thesis document, which you can find here). After all of that is done, I’m kind of at a loss. Revit API work? Architectural data visualization? Mobile apps? It’s a wide world out there. I’d better start taking some big steps.


Thesis Presentation Pt. 2: The Monolith vs The Ecosystem (Heuristics, Feedback, and Implicit Rules)

I left off the last post with a suggestion to find a way to leverage the methods of game environments to make richer, more sophisticated design environments. If one were looking for a real-world example of this dynamic in action, it would be hard to find one more perfect than FoldIt. Designed by a multidisciplinary team at the University of Washington dubbed the “UW Center for Game Science,” FoldIt was created to help solve one of the most complex problems in biology: predicting the folded structure of proteins. Properly predicting the forms of amino acid chains could potentially enable not only a better understanding of cell biology, but also the design or discovery of powerful new drugs. Predicting the folded structure involves finding the “lowest energy” solution for a particular chain: the best-fit structure in which hydrophobic parts of the protein are hidden and hydrophilic portions exposed. Unfortunately, the biomechanics are still only partially understood, and thus algorithmic solutions are incomplete and often get stuck in local minima. The solutions to these sticking points were often obvious, even to people with little understanding of the problem. The UW team, seeing this, decided that instead of seeking additional computing power they would harness the enormous visual and spatial processing abilities of human gamers by designing a competitive game based around finding the proper folded structure of proteins. The game would act as a kind of talent search for protein folding prodigies. Within weeks of release, their site had registered hundreds of thousands of downloads and built a devoted user base. Within a year, their user teams had derived solutions that beat even the best algorithmic models. One of the best teams was headed up by a thirteen-year-old, who claimed that he found solutions because “they just look right.” In educational parlance, this is referred to as perceptual learning: the brain’s ability to model a complex system intuitively.

Image

This strategy was not successful merely because people are great at these tasks; it was successful because human ability was married to an interface that managed to hit on all cylinders, enabling real solutions to real problems by people unfamiliar with the context of the problem, simply by providing clear feedback, an intuitive interface, and good tools. Some of the most important aspects of this interface are called out above. FoldIt incorporates a wealth of visualization techniques, such as motion, glyphs, ghosting, and even auditory feedback, to give a clear indication of the state of the game. Clear nonspatial feedback is given with a numerical score and a score history (with, of course, an all-important “undo” function). While many of the moves are done manually by “grabbing” protein chains, there are limited stochastic search (“wiggle”) tools that allow a player to find slightly better solutions in the neighborhood of their current position. There is also a sophisticated help and user forum interface built within the game, as well as multiple social tools, such as a leaderboard and chat functionality, that all combine to produce a competitive but friendly community of engaged users.
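The “slightly better solution in the neighborhood” idea is easy to show in miniature. FoldIt’s actual wiggle tool is a gradient-based minimizer; the sketch below swaps in a generic stochastic local search, which captures the same interaction (press a button, the score nudges downhill) without any of the biochemistry:

```csharp
using System;

static class Wiggle
{
    // Generic stochastic local search: nudge one coordinate of the state by
    // a small random amount and keep the nudge only if the energy drops.
    // (FoldIt's real wiggle is a gradient-based minimizer; this only shows
    // the neighborhood-search idea.)
    public static double[] Improve(double[] state, Func<double[], double> energy,
        int tries, double stepSize, Random rng)
    {
        var best = (double[])state.Clone();
        double bestEnergy = energy(best);
        for (int t = 0; t < tries; t++)
        {
            var candidate = (double[])best.Clone();
            int i = rng.Next(candidate.Length);
            candidate[i] += (rng.NextDouble() * 2 - 1) * stepSize;
            double e = energy(candidate);
            if (e < bestEnergy) { best = candidate; bestEnergy = e; }
        }
        return best;
    }
}
```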

Image

The work of the Boston-based firm Utile shows a sensitivity to interaction design and the incorporation of heuristics into design strategies, in the service of seemingly everyday issues of profit, space planning, zoning, and sustainability, and it warrants an extended look. “Heuristics” in this case means both architectural rules of thumb (such as 30-foot column grids) and more sophisticated rules drawn from related fields, such as development pro formas or zoning codes. They have developed sophisticated internal tools to provide rapid feedback on design decisions, often within client meetings, to help ensure that collaborative design decisions are grounded in objective reality. For developer clients, they have integrated a simple pro forma tool within the schematic design modeling tools in Revit, which allows for an immediate comparison of the monetary value of different general strategies for a specific parcel. They have found that this process, instead of being limiting, is ultimately liberating for their design process, as it provides a solid basis at the start and prevents a client from making drastic changes further down the line using efficiency as a justification.
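A pro forma at this stage can be very simple and still be useful in a meeting. Here is a hypothetical residual-land-value calculation of the kind such a tool might run; every input and even the formula structure is a placeholder, not Utile’s actual model:

```csharp
static class ProForma
{
    // Hypothetical back-of-envelope residual land value. Every input and
    // even the formula structure here is a placeholder.
    public static double ResidualLandValue(
        double grossFloorArea,  // sq ft drawn in the schematic model
        double efficiency,      // net leasable / gross, e.g. 0.85
        double rentPerSqFt,     // annual rent per leasable sq ft
        double capRate,         // capitalization rate, e.g. 0.06
        double costPerSqFt)     // hard construction cost per gross sq ft
    {
        double annualIncome = grossFloorArea * efficiency * rentPerSqFt;
        double projectValue = annualIncome / capRate; // income capitalization
        double hardCost = grossFloorArea * costPerSqFt;
        return projectValue - hardCost; // what's left for land and profit
    }
}
```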

Tim Love, a principal at Utile, has also experimented with using heuristics and feedback as didactic tools in a Yale design studio. At the outset of the studio, students were provided with an “Urbanism Starter Kit” comprising BIM families of several building types that incorporated many levels of heuristic devices, from a 30-foot structural grid to floor plate width rules to exit strategies. The students were required to use these digital “building blocks” at the earliest stage of design, with a specific ratio of square footages for each type. Immediate feedback from the BIM model was used to ensure that the rules were being followed and enabled live tweaking of the designs. Only when these restricted designs had reached a sufficient level of performance were the students allowed to modify the structures themselves. Despite severely limiting the freedom of the designs at the most conceptual, schematic phase, the final output of the studio displayed a range and variety that belied its origin. In addition, this limited beginning ensured that the projects maintained a certain level of performance as an urban agglomeration, allowing for direct comparison of different solutions as well as freeing the discussion of the projects from purely practical concerns. Finally, the act of struggling against the given constraints was in itself a valuable experience for the students, as it introduced a range of strategies for overcoming the limitations of given structures that, while common in practice, are rarely explored in academia.
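Checks like these are one-liners once the heuristics are written down. A sketch of two such rules; the 30-foot grid comes from the text above, while the floor plate limit is a placeholder value of my own:

```csharp
using System;

static class Heuristics
{
    // Does a dimension land on the 30-foot structural grid (within tolerance)?
    public static bool OnGrid(double feet, double module = 30.0, double tolerance = 0.01)
    {
        double r = feet % module;
        return r < tolerance || module - r < tolerance;
    }

    // Is the floor plate shallow enough for daylighting rules of thumb?
    // The 65-foot limit is a placeholder, not a value from the starter kit.
    public static bool PlateWidthOk(double widthFeet, double maxWidthFeet = 65.0)
        => widthFeet <= maxWidthFeet;
}
```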

Image

If you are looking for confirmation that visualization and feedback are increasingly important in design, you need look no further than Autodesk Labs, the R&D wing of the billion-dollar CAD behemoth that provides the vast majority of architectural software solutions in the U.S. Several recent Labs releases, most notably the Galileo and Vasari projects, are lightweight, simple schematic environments that incorporate multiple levels of analysis and feedback within the tool itself. Vasari is particularly interesting, as it is ultimately the combination of several products that Autodesk has already developed: a simplified version of Revit’s schematic design package, with the Nucleus physics engine from Maya and environmental modeling tools from Ecotect. It is, however, more than the sum of its parts. Autodesk has, first of all, made the program free, and has packaged it as a simple executable, which promotes sharing among users. They have also built in some collaborative features, such as a dedicated user forum. Finally, the incorporated analysis is geared towards quick, even real-time results, which promotes a rapid response loop in the design process that approaches a game-like environment. Particularly in the incorporation of the Nucleus physics tools, these methods start to look more like the kind of implicit rules you’d see in a video game than the explicit, selectively applied rules you normally see in parametric design environments. This distinction is vital in transforming a program from a passive tool into an active, symbiotic collaborator, as it allows for the kind of perceptual learning described above.

Image

Even if it is fairly certain that these feedback methods will make it into design software, the strategy for implementation is still very much up in the air. Currently there are two main strategies being used by design software companies to produce complex products. The reigning solution for the last few decades has been that of the “monolith,” an all-encompassing, proprietary, black-box program that attempts to have a feature that meets every possible user need. This worked well when the goal was 2d CAD, but newer parametric software has tended to be overburdened, buggy, and confusing, as the development teams go chasing too many rabbits down too many holes. The end result of this strategy is something similar to Digital Project/CATIA, with multiple “workbenches,” tiered pricing structures, baffling documentation, and a learning curve that is essentially a vertical wall. In addition, monolithic software often ends up so large that it has difficulty making transitions to take advantage of changing technology or practice. The alternative “software as ecosystem” strategy is currently exemplified in the world of design software by Rhino. The idea is very different: create a stable, well-documented core program, ideally with some open-source component and a fully developed API, and enable independent developers to create extensions and plug-ins that extend the capabilities of the software. This leads to a more flexible, nimble platform for design, and is perhaps better suited to the end needs of a forward-thinking architecture firm, where multiple programs are used in series or parallel to achieve different goals at different times. This diagram of SHoP’s workflow reveals the central dilemma of digital practice:

Image

What you have there is easily six figures’ worth of design software, much of it with massive feature overlap. I’m not saying that a cutting-edge digital practice shouldn’t have a big toolkit, but some of the nodes in that diagram were purchased to use only a tiny portion of their core functionality; it’s like buying a Ferrari to use the radio. Obviously, a few flexible programs would suit this sort of workflow far better than a big pile of inflexible monolithic software. However, the monolithic approach can have advantages as well: it is more stable, easier to standardize, and often more predictable. Apple is essentially a monolith, with the notable exception of the App Store (which, I might add, is rigidly policed). There are a lot of people (including myself) who are more interested in a stable and easily usable interface for their phone than in the flexibility and hackability of an open-source, device-independent communication OS such as Android. This is, I feel, the reason that there is no ecosystem-based parametric software solution as fully featured as DP or Revit: BIM implementation in offices values stability and reliable standards much more than flexibility or openness.

Schematic design environments, on the other hand, value flexibility, independence, and adaptability, which suggests that Rhino or something like it will likely be seen on more and more screens in practice in the near future. Innovations such as Kangaroo and Grasshopper have no real equivalent in anything produced by Dassault, Autodesk, or Bentley, and I feel that is unlikely to change. The primary danger I see in the incorporation of implicit rules and rich feedback into design environments is that designers might not “push back” enough, leading to uninspired and predictable results. If these methods were relegated to a “black box” in the software, that outcome would be far more likely. When they are incorporated in a plug-in, however, users are more likely to customize them and understand the implications of the environment they are creating, leading to a richer and less constrained outcome.

