Anas Alghamdi is an AI Development Lead and Architectural Designer at Omrania in Riyadh. He works on fitting AI and automation into architectural workflows: finding where the repetitive, low-judgment tasks can be handed off to software so designers can spend their time on work that actually requires thinking. That means site analysis, process design, and building AI-assisted pipelines that do their job without needing constant supervision.
Which AI tool made the biggest early impact on your concept-phase speed, and what key setup drove that gain?
Four tools reshaped how I work, and they each hit differently.
Nano Banana 2 was the most visible change at the concept phase. Being able to describe a design direction in natural language and get high-quality, editable images back in seconds collapsed the gap between an idea and something a client or team can actually react to. The consistency across multiple outputs mattered a lot for presentation work, where you need visual coherence across a set rather than one impressive standalone image.
Google AI Studio and Claude changed how I handle the thinking work: research, brief analysis, documentation review, drafting technical responses, and processing large amounts of project information quickly. Work that used to sit on a to-do list for days because it was tedious, not difficult, now gets done in the same session it comes up.
n8n is where the compounding happens. It connects everything. Once you start automating the handoffs between tools, between stages, between team members, the individual tools get significantly more powerful because they stop operating in isolation. The setup investment is real, but what you get on the other side is a workflow that runs without you babysitting it.
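The handoff idea can be sketched outside n8n as well. This is a toy Python pipeline, not an n8n export, and the stage names (generation, packaging, notification) are purely illustrative; the point is only that each stage receives the previous stage's output, so nobody manually moves results between tools.

```python
from typing import Callable

# Each "stage" stands in for one tool in the chain. Stage names are
# hypothetical examples, not real n8n nodes or Omrania's actual workflow.
def generate_options(job: dict) -> dict:
    # Stand-in for an image/massing generation tool producing N variants.
    job["options"] = [f"massing-{i}" for i in range(1, job["count"] + 1)]
    return job

def package_for_review(job: dict) -> dict:
    # Stand-in for assembling outputs into a review-ready package.
    job["package"] = {"project": job["project"], "items": job["options"]}
    return job

def notify_team(job: dict) -> dict:
    # Stand-in for the final handoff (email, chat message, task ticket).
    job["status"] = f"sent {len(job['package']['items'])} options to review"
    return job

def run_pipeline(job: dict, stages: list[Callable[[dict], dict]]) -> dict:
    # The automation layer: each stage's output flows straight into the
    # next stage, which is the handoff n8n removes from a human's plate.
    for stage in stages:
        job = stage(job)
    return job

result = run_pipeline({"project": "Cluster-07", "count": 3},
                      [generate_options, package_for_review, notify_team])
print(result["status"])  # -> sent 3 options to review
```

Chaining three trivial stages is not interesting on its own; the compounding he describes comes from the fact that once the handoffs are code, adding a fourth tool to the chain costs one function, not a new manual routine.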
What drove the gain across all of them was the same thing: being specific. Vague inputs produce vague outputs regardless of the tool. The teams and individuals getting the most out of these are the ones who treat prompts and workflow logic the way they would treat a well-written brief.
In mega-projects, where does AI handle iteration best versus where humans still dominate?
AI handles volume. When you have a master plan with forty residential clusters and the client wants to see three massing options for each, AI can generate variation sets faster than any team. It is also strong at processing information: site data, solar studies, zoning overlays, and patterns you might otherwise catch only after weeks of manual review.
Where humans still dominate is judgment under ambiguity. Mega-projects in this region carry layers of cultural expectation, political sensitivity, and client relationship dynamics that do not live in any dataset. When a design decision falls between what is technically optimal and what the client will actually accept, that call requires context AI does not have. It also struggles when the brief itself is unclear, and in large developments, the brief is almost always partially unclear. You need someone who can read what is not being said and design toward it.
The honest answer is that AI is a strong junior. It produces volume, handles repetition, and flags things you might miss. But it needs direction, and it needs someone checking its work.
Facing mid-project code changes, how does AI adapt faster than manual updates?
The value here is less about speed and more about coverage. When a code change comes through mid-project, the manual process is essentially a hunt through documentation, looking for everywhere that change has an implication. People miss things, especially under deadline pressure.
What I have found useful is using AI to cross-reference the updated requirement against the existing documentation set and flag every potential conflict in one pass. It does not resolve the conflict. That still requires an architect to make a call. But it compresses the audit from days to hours and removes the risk of something slipping through because someone was tired or rushed. In large projects with complex documentation, that coverage alone justifies the setup cost.
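As a rough illustration of that one-pass audit (not Omrania's actual tooling), the loop below pairs the updated requirement with every section of the documentation set and collects whatever gets flagged for human review. The `flag_conflict` check here is a deterministic keyword-overlap placeholder standing in for a real LLM call, and names like `DocSection` are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class DocSection:
    doc_id: str   # e.g. a drawing sheet or spec section number (illustrative)
    text: str

def flag_conflict(requirement: str, section: DocSection, min_overlap: int = 2) -> bool:
    """Placeholder for the model call: flag a section for review when it
    shares enough key terms with the updated requirement to plausibly conflict."""
    req_terms = {w.lower() for w in requirement.split() if len(w) > 3}
    sec_terms = {w.lower() for w in section.text.split() if len(w) > 3}
    return len(req_terms & sec_terms) >= min_overlap

def audit(requirement: str, sections: list[DocSection]) -> list[str]:
    """One pass over the whole documentation set; returns the ids that need
    a human look. The AI only flags: resolving each conflict stays with an
    architect, exactly as described above."""
    return [s.doc_id for s in sections if flag_conflict(requirement, s)]

sections = [
    DocSection("A-101", "Stair riser height maximum 180 mm per egress stair code"),
    DocSection("A-205", "Facade glazing schedule, curtain wall mullion spacing"),
    DocSection("S-301", "Egress stair structural slab, riser height coordination"),
]
flagged = audit("Code update: egress stair riser height reduced to 170 mm", sections)
print(flagged)  # -> ['A-101', 'S-301']
```

The structure is the useful part: because every section is checked against the change in the same pass, coverage no longer depends on a tired reviewer remembering which sheets touch stairs.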
What unexpected bottleneck did AI expose in traditional pipelines, and how did you reroute it?
The bottleneck it exposed was not in production. It was in decisions. When you automate the repetitive output tasks, it becomes visible how much time is actually lost waiting for approvals, waiting for briefs to be finalized, waiting for feedback that should have come two stages earlier. AI did not create that problem, but it made it impossible to ignore because the production work was no longer absorbing the slack.
The fix was partly structural: earlier alignment sessions, clearer briefs before production starts, and shorter review loops. The AI tools accelerated output, but the real gain came from fixing the decision-making process around them. If you automate a broken pipeline, you just get to the bottleneck faster.
AI keeps changing fast. How do you decide what to adopt and what to ignore, and what is the actual goal?
My filter is simple: does this remove something I should not be spending time on? There is no shortage of new tools. Something launches every week, and someone in the industry is excited about all of them. I am not interested in adopting technology for its own sake.
What I care about is protecting design time. The work that genuinely requires an architect (understanding a place, reading a client, making a spatial judgment, solving a problem that has no clean precedent) does not scale and should not be rushed. But a significant portion of what fills an architect’s week is not that work. It is documentation, coordination, repetitive drawing production, formatting reports, and processing information that should have been processed already. That is where AI belongs.
When I evaluate a new tool, the question is whether it takes something off the architect’s plate that was never really architecture in the first place. If yes, I pay attention. If it is promising to do the design thinking for me, I am skeptical. Not because it cannot generate something interesting, but because removing that thinking is not a gain. It is the job.
Where do you see AI reshaping the architect’s role most disruptively in high-stakes developments over the next 5 years?
The role that changes most is not the designer. It is the coordinator. A large portion of project management in complex developments is information arbitrage: making sure the right people have the right information at the right time, tracking what has been decided, and flagging what is inconsistent. AI is going to absorb a significant share of that work, and it will do it more reliably than a junior architect working across fifteen workstreams at once.
What that unlocks, if firms respond well, is a rebalancing toward design judgment and client relationship work, the things that actually differentiate a firm. The disruption is not that AI replaces architects. It is that the architects who learn to direct AI effectively will cover ground that previously required three or four people, while the ones who do not will find their role increasingly defined by the tasks AI has not yet replaced. That is a slow shift, but in five years I think its shape will be clear.
The firms that will feel it most are the ones running on the assumption that the scale of the team equals the quality of output. That equation is already weakening.