While Generative Design and Generative AI have existed for a while, it is only now that we are seeing them packaged in such a way that the technology is readily accessible to the ‘no-code’ majority, which includes most architects.
Take the British Museum, for example: an ever-so-slightly crooked stone court, built in the 19th century and later housed under a precision-engineered pillow of metal and glass by Foster + Partners in 2000. This emphatic solution was a mainstream moment for parametricism, made possible by early Generative Design software running a “dynamic relaxation” algorithm developed by Chris Williams. The algorithm settled on an optimum form and sized each individual mullion accordingly.
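For the curious, a minimal sketch of the idea in Python is below. It is illustrative only, not Williams’ actual solver: a pinned grid of nodes joined by elastic links settles into an equilibrium surface under an assumed load, with positions nudged by damped velocities until the residual forces die away.

```python
# A toy dynamic relaxation solver (illustrative assumptions only).
# A square grid of nodes joined by elastic links relaxes into an
# equilibrium surface under a nodal load, with the boundary pinned.
import numpy as np

N = 11                                 # nodes per side of the grid
STIFFNESS = 50.0                       # assumed link stiffness
LOAD = np.array([0.0, 0.0, -1.0])      # assumed nodal load (self-weight)
DAMPING = 0.95                         # viscous damping factor
DT = 0.02                              # time step

# Flat starting grid; the z coordinate will relax into a hanging form.
xy = np.stack(np.meshgrid(np.linspace(0, 1, N), np.linspace(0, 1, N)), -1)
pos = np.concatenate([xy.reshape(-1, 2), np.zeros((N * N, 1))], axis=1)
vel = np.zeros_like(pos)

# Links join horizontally and vertically adjacent nodes.
idx = np.arange(N * N).reshape(N, N)
links = [(idx[i, j], idx[i, j + 1]) for i in range(N) for j in range(N - 1)]
links += [(idx[i, j], idx[i + 1, j]) for i in range(N - 1) for j in range(N)]
rest = {(a, b): np.linalg.norm(pos[a] - pos[b]) for a, b in links}

# Boundary nodes act as fixed supports.
fixed = np.zeros(N * N, dtype=bool)
fixed[idx[0, :]] = fixed[idx[-1, :]] = fixed[idx[:, 0]] = fixed[idx[:, -1]] = True

for step in range(2000):
    force = np.tile(LOAD, (N * N, 1))          # applied load at every node
    for a, b in links:
        d = pos[b] - pos[a]
        length = np.linalg.norm(d)
        f = STIFFNESS * (length - rest[(a, b)]) * d / length  # spring force
        force[a] += f
        force[b] -= f
    vel = DAMPING * (vel + DT * force)         # damped velocity update
    vel[fixed] = 0.0                           # supports do not move
    pos += DT * vel

print("Max sag of relaxed form:", -pos[:, 2].min())
```

Invert the relaxed surface and you have a structurally efficient shell form; the real roof solver then had to size every individual member to fit the crooked court.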
Increasingly, we are seeing the terms Generative “Design” and “AI” used interchangeably, and a wave of products is being rebranded under the heading “AI” because it’s all the rage. Branding aside, the new toolset is so powerful that it may fundamentally change our design methods in practice, so we need to be really clear in our own minds about what we mean when we use these terms, and decide which tools will work for us in practice.
In architecture and design, we work at two extremes simultaneously. First, we deal with the abstract idea, taking words from our clients and synthesizing them into materials and spaces. At the same time, we deal with the finite, with absolute requirements for measurable real-world instructions; we annotate drawings with dimensions and exhaustive performance specifications.
To create architecture, we need to balance precision and nuance, like two complementary hemispheres of the same brain. An architect who is strong in only one hemisphere is unlikely to consistently succeed in the profession; you have to be able to do both.
On the precision side sits the technology broadly referred to as ‘Generative Design’. Software in this realm focuses on the quantifiable: these programs ruthlessly execute procedures based on the deterministic parameters and controls defined at the outset. They give predictable results, producing vectors with coordinates that can be measured in millimetres. Think of the British Museum roof.
On the other hand, software that uses Generative AI technology applies to the qualitative dimensions of practice and tends to sit more in the fuzzy creative realm of pixels and copywriting. As Prof Neil Leach describes it, Machine Learning helped computers get very good at correctly identifying the subject of an image, but Ian Goodfellow made a giant leap forward in 2014 when he managed to fundamentally reverse this idea: instead of labelling an existing image, the network could produce an entirely novel one. This was a computer ‘imagining’ something from nothing, an image born in a synthetic imagination and previously unseen by human eyes. This breakthrough, the Generative Adversarial Network (GAN), ultimately led to blockbuster platforms such as ChatGPT and Midjourney, which have broken into the mainstream in 2023.
Here is a quick ‘cheat sheet’ for thinking about Generative Design versus Generative AI; they are in many ways opposite phenomena. Generative Design is quantitative, deterministic and predictable, trading in vectors measured in millimetres; Generative AI is qualitative, fuzzy and creative, trading in pixels and prose.
I have begun to think that Generative Design was waiting for Generative AI to come along before it could really find a place in the mainstream. In recent weeks I have founded a company called Arka Works, which helps early-adopter practices with practical experimentation on live projects and practice challenges. This has led to collaborations with AI-curious practices keen to see what can be done today.
Everyone has a different problem to solve. For example, I am speaking with practices about applying LLMs (like GPT-4) to bid-writing assistance tasks, using fine-tuned models trained on previous submission material. We can read and summarise lengthy planning and regulatory reports while they are being discussed live during meetings. Midjourney is proving incredibly powerful for early-stage material palette testing and mood boards derived from a client’s written brief.
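As a flavour of what the summarisation side can look like in code, here is a minimal sketch using the OpenAI Python SDK. The model name, prompts and chunk size are assumptions for illustration; a production tool would also need document extraction, token counting and a confidentiality review.

```python
# A minimal report-summarisation sketch (assumed model and prompts).
# Long reports are split into chunks, summarised, then merged.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def summarise_report(text: str, chunk_chars: int = 8000) -> str:
    """Summarise a long planning/regulatory report chunk by chunk."""
    chunks = [text[i:i + chunk_chars] for i in range(0, len(text), chunk_chars)]
    notes = []
    for chunk in chunks:
        response = client.chat.completions.create(
            model="gpt-4o",  # assumed model name
            messages=[
                {"role": "system",
                 "content": "You summarise planning and regulatory reports "
                            "for architects. Be concise and flag any hard "
                            "constraints or risks."},
                {"role": "user", "content": chunk},
            ],
        )
        notes.append(response.choices[0].message.content)
    # Condense the per-chunk notes into a single briefing.
    final = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user",
                   "content": "Merge these notes into a one-page briefing:\n\n"
                              + "\n\n".join(notes)}],
    )
    return final.choices[0].message.content
```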
On a much larger project scale, one example of Generative Design and Generative AI techniques being used in combination is an early-stage masterplanning project we have been working on with Mae Architects. Here we are testing massing concepts for a strategic housing-led masterplan, with the aim of defining block types and aligning them to an area and unit brief. We decided to beta-test a new tool from an Oslo start-up called Spacio, which is a bit like a ‘parametric SketchUp’.
In this platform, you work by pre-programming building characteristics such as depth, facade grids, core access and windows, and then you start drawing or auto-generating whole masses very quickly with single-line inputs. You are in control: you set the constraints, then adapt and refine your concept blocks by push/pulling whole facades and roofs to arrive at your target composition.
Working in this way allows you to test and validate the merit of a good or a bad idea very quickly, and it provides an immaculate record of areas at the same time, something I have always found problematic with more traditional methods.
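To illustrate the principle (this is not Spacio’s actual API, and every name, dimension and efficiency factor below is an assumption), here is a toy parametric block in Python: a handful of constraints go in, and the area schedule and approximate unit counts fall out automatically, so every massing tweak updates the numbers for free.

```python
# A toy parametric massing block (illustrative only, not Spacio's API).
from dataclasses import dataclass

@dataclass
class Block:
    name: str
    length_m: float              # block length along the street
    depth_m: float               # plan depth, e.g. 12-18 m for housing
    storeys: int
    grid_m: float = 7.5          # assumed facade/structural grid
    net_to_gross: float = 0.8    # assumed plan efficiency

    @property
    def gia_m2(self) -> float:
        """Gross internal area across all storeys."""
        return self.length_m * self.depth_m * self.storeys

    @property
    def bays(self) -> int:
        """Whole facade bays that fit the grid."""
        return int(self.length_m // self.grid_m)

    def homes(self, avg_unit_m2: float = 70.0) -> int:
        """Approximate unit count from net area and an average unit size."""
        return int(self.gia_m2 * self.net_to_gross // avg_unit_m2)

# A quick option test: tweak any parameter and the schedule updates.
option = [Block("A1", 60, 14, 6), Block("A2", 45, 12, 8), Block("B1", 80, 16, 5)]
for b in option:
    print(f"{b.name}: {b.gia_m2:,.0f} m2 GIA, {b.bays} bays, ~{b.homes()} homes")
print(f"Option total: ~{sum(b.homes() for b in option)} homes")
```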
We modelled three options for a 1,500-home scheme in two days and then worked over the basic massing by hand with new ideas. We sketched ideas for character zones, defensible space, public realm, varied facade grids and roof forms directly on top. Then we took this sketch and went straight to a rendering using Stable Diffusion with a feature called ControlNet, which allows the AI to experiment with light and material while being firmly constrained by your input design. The end point is a striking rendered view, produced at sprint speed.
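For anyone wanting to try the same move, here is a minimal sketch-to-render example using Hugging Face’s diffusers library with a scribble-conditioned ControlNet. The checkpoint names, file names and prompt are assumptions; the key idea is that the hand linework is passed in as a conditioning image, so the AI explores light and material without redrawing the design.

```python
# A minimal ControlNet rendering sketch (assumed checkpoints and prompt).
import torch
from diffusers import StableDiffusionControlNetPipeline, ControlNetModel
from diffusers.utils import load_image

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-scribble", torch_dtype=torch.float16)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

sketch = load_image("masterplan_sketch.png")  # the hand-drawn linework

image = pipe(
    prompt="aerial view of a brick housing masterplan, varied roofscape, "
           "warm evening light, landscaped public realm",
    image=sketch,                        # conditioning image from the sketch
    num_inference_steps=30,
    controlnet_conditioning_scale=1.0,   # how firmly the linework constrains
).images[0]

image.save("render_option_01.png")
```

In practice the prompt carries the material and lighting experiments, while the conditioning scale keeps the composition locked to the drawn masterplan.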
After the render, we tested daylight and sunlight potential, wind comfort and embodied carbon, all using technology from this new toolset.
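As a toy illustration of the embodied carbon check, the sketch below simply multiplies material quantities by assumed cradle-to-gate carbon factors; the figures are placeholders rather than verified EPD data, and a real assessment would follow the RICS methodology.

```python
# A toy embodied carbon check (all factors are placeholder assumptions).
FACTORS_KGCO2E = {            # assumed cradle-to-gate factors, per unit
    "concrete_m3": 300.0,
    "steel_kg": 1.5,
    "brick_m2_facade": 85.0,
}

def embodied_carbon(quantities: dict) -> float:
    """Sum quantity x carbon factor across materials, in kgCO2e."""
    return sum(FACTORS_KGCO2E[m] * q for m, q in quantities.items())

block_a1 = {"concrete_m3": 1200, "steel_kg": 45000, "brick_m2_facade": 3100}
total = embodied_carbon(block_a1)
gia_m2 = 5040                 # GIA of block A1 from the massing sketch above
print(f"~{total / 1000:,.0f} tCO2e, {total / gia_m2:,.0f} kgCO2e/m2 GIA")
```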
This single worked example demonstrates a striking new method for strategic design that feels very different. It is ‘hybrid’, in that it combines digital and analogue methods; it is ‘parametric’, in that each click is leveraged to produce many more procedures; and it is made ‘vivid’, because we can take sketchy linework and, in combination with written prompts, explore light and materials immediately.
We came away with powerful conclusions about how to progress the design, what worked and what didn’t. The ideas we discarded were therefore indulged for the minimum amount of time; they “failed fast”. Compare this to a design idea that is kept alive for several months, only for a fundamental technical issue to reveal that it was flawed from the start.
My conclusion from such experiments is that we will soon move away from linear decision-making processes, where ideas can only be validated by passing through a series of traditional gateways (QS, fire engineering, structures, LCA assessor and similar reports) that take months to conclude. Instead, you work out very quickly whether an idea is good and you adapt in a more agile way, designing and testing your ideas at pace. Then, when you come to run the full engineering analysis, you are simply validating the wise decision-making you have already deployed upstream.
We can bring the team into a huddle and run shorter, energetic sprints that are focused on one key learning at a time. This new approach puts so much knowledge and insight in the hands of architects that we should feel empowered by it, and excited about a new mode of practice in the future.
This article was originally published in the Architects’ Journal.