The act of unfolding, expansion, or development.
This blog explores my thoughts, experiments, and observations on software development—both in process and technique—as well as on the building of development teams in the era of rapidly evolving AI-assisted coding tools.
Like every technological inflection point, the emergence of AI in software engineering has triggered a scramble to exploit new advantages. Equally, it has made it urgent to learn the disciplines and practical constraints necessary to avoid subtle traps and long-term downsides.
Large Language Models (LLMs) fundamentally changed the landscape of computing when transformer architectures escaped the lab and reached the public through services like OpenAI’s ChatGPT, followed swiftly by competing systems. Although the underlying architectures had been the subject of research for years, their leap across the threshold of practical utility came only with massive scaling of model size and unprecedented investment in training.
Within just a few years, these models began demonstrating remarkable aptitude in software generation and testing. We now hear claims that AI will soon manage the entire software lifecycle—yet practical experience suggests we’re not quite at the stage of fully autonomous software genesis and evolution. For now, product managers still need to articulate priorities and requirements in human terms before AI can execute them.
Even so, software creation has been irrevocably altered—both in how software is built and how users interact with it. While natural language may not always be the most efficient interface, it is one humans are innately adapted to. Language serves as a flexible foundation for multimodal interaction, augmented by gestures, sketches, diagrams, and other high-bandwidth visual modes. Historically, our user interfaces have reduced the richness of human intent into narrow, prescriptive metaphors—icons, forms, and control surfaces designed to guide workflows with minimal friction. This “reduction” has shaped software design for decades, from text-based interfaces to GUIs of ever-increasing sophistication (notwithstanding the occasional regressions imposed by prevailing architectures or corporate ecosystems).
Until recently, software systems were almost entirely prescriptive: they defined fixed workflows and constrained users to operate within them. True flexibility—configuring or sequencing functionality to match local needs—was rare and costly. That rigidity is now giving way to agentic computing. We are witnessing the decomposition of software not just internally (for modular development), but externally—in how users can access and compose functionality directly. Functions are becoming unglued from monolithic UIs and workflows. Default experiences still exist, but the underlying operations can increasingly be orchestrated as discrete “atoms,” assembled into bespoke workflows that best fit an individual’s intent. The new glue is agentic AI: LLMs that can take natural language descriptions (plus supporting artifacts) and orchestrate the system’s capabilities toward a concrete goal.
This conversational mode introduces iterative, collaborative problem-solving—users can explore, refine, and learn, even when they lack the expertise to script or program explicitly. In essence, functionality is being turned inside-out: exposed as composable building blocks rather than pre-ordained sequences.
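To make the "composable atoms" idea concrete, here is a minimal sketch in Python: application capabilities registered as discrete, described operations that an agent could assemble into a workflow. The names (`ToolRegistry`, the `plan` structure) are illustrative, not any real framework, and the plan is hard-coded where a production system would have an LLM produce it from the user's stated intent.

```python
from typing import Callable

class ToolRegistry:
    """Holds callable atoms together with natural-language descriptions."""

    def __init__(self):
        self._tools: dict[str, tuple[Callable, str]] = {}

    def register(self, name: str, description: str):
        def decorator(fn: Callable):
            self._tools[name] = (fn, description)
            return fn
        return decorator

    def call(self, name: str, **kwargs):
        fn, _ = self._tools[name]
        return fn(**kwargs)

    def catalog(self) -> dict[str, str]:
        # This is what an LLM planner would read when choosing tools.
        return {name: desc for name, (_, desc) in self._tools.items()}

registry = ToolRegistry()

@registry.register("resize", "Resize an image to the given width and height.")
def resize(image: str, width: int, height: int) -> str:
    return f"{image}@{width}x{height}"

@registry.register("annotate", "Overlay a text caption on an image.")
def annotate(image: str, caption: str) -> str:
    return f"{image}+'{caption}'"

# In practice an LLM would emit this plan from a natural-language request
# ("resize my photo and caption it"); it is hard-coded here to keep the
# sketch self-contained and testable.
plan = [
    ("resize", {"width": 800, "height": 600}),
    ("annotate", {"caption": "Q3 results"}),
]

result = "photo.png"
for name, args in plan:
    result = registry.call(name, image=result, **args)
print(result)  # photo.png@800x600+'Q3 results'
```

The point of the sketch is the separation of concerns: the atoms know nothing about workflows, and the workflow is just data that any planner, human or model, can produce.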
This idea has precedents. AppleScript and similar frameworks long offered programmatic access to application internals, allowing orchestration across multiple tools. The difference today is that users no longer need to be programmers. Expressing intent in ordinary language is sufficient; the LLM handles the orchestration behind the scenes. Similarly, the dream of remote interoperability—connecting software components across the web—has existed since the earliest service-oriented architectures. Progress has been hampered by reliability, inconsistent type systems, and semantic impedance mismatches. LLMs now provide a potential remedy: dynamic “adapters” that can resolve mismatches on the fly, given sufficient descriptions of the components involved and mechanisms for testing and correction.
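The "dynamic adapter" idea can be illustrated with a toy example: two components whose schemas disagree, bridged by a generated field mapping. In practice the mapping and any unit conversions would be proposed by a model from each side's interface description and verified by tests; here the mapping is written out by hand, and `make_adapter` is a hypothetical helper, not a real library.

```python
def make_adapter(field_map, transforms=None):
    """Build a function that reshapes one component's output
    into the form another component expects."""
    transforms = transforms or {}

    def adapt(record: dict) -> dict:
        out = {}
        for src, dst in field_map.items():
            value = record[src]
            # Apply a per-field conversion (e.g. cents to dollars) if one exists.
            if dst in transforms:
                value = transforms[dst](value)
            out[dst] = value
        return out

    return adapt

# Component A emits: {"customer_name": ..., "total_cents": ...}
# Component B expects: {"name": ..., "total": ...} with totals in dollars.
adapter = make_adapter(
    field_map={"customer_name": "name", "total_cents": "total"},
    transforms={"total": lambda cents: cents / 100},
)

order = {"customer_name": "Ada", "total_cents": 1250}
print(adapter(order))  # {'name': 'Ada', 'total': 12.5}
```

The adapter itself is ordinary, deterministic code; what an LLM changes is who writes it, how quickly, and how cheaply it can be regenerated when either side evolves.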
In this sense, LLMs act as dynamic, intelligent glue—assembling components that might remain bound for years or just for moments. The most adaptable software architectures of the future will therefore be those that deliberately decouple concerns, describe interfaces richly, and expose semantic context that an AI can understand and manipulate. One might think of this as LLM-readable documentation—machine-interpretable type and behavior metadata that enables flexible orchestration.
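As a sketch of what "LLM-readable documentation" might look like, the snippet below derives a machine-interpretable description of a function from its signature, type hints, and docstring. The output shape loosely follows the JSON-Schema style used by several tool-calling APIs; the `describe()` helper and the `schedule_meeting` example are assumptions for illustration, not an established standard.

```python
import inspect

# Map Python annotations to JSON-Schema-style type names.
PYTHON_TO_JSON = {int: "integer", float: "number", str: "string", bool: "boolean"}

def describe(fn) -> dict:
    """Produce a description an LLM could read to decide
    when and how to invoke the function."""
    sig = inspect.signature(fn)
    params = {
        name: {"type": PYTHON_TO_JSON.get(p.annotation, "string")}
        for name, p in sig.parameters.items()
    }
    return {
        "name": fn.__name__,
        "description": inspect.getdoc(fn),
        "parameters": params,
    }

def schedule_meeting(topic: str, duration_minutes: int) -> str:
    """Book a meeting room and send invitations."""
    return f"Booked '{topic}' for {duration_minutes} minutes"

spec = describe(schedule_meeting)
print(spec)
# {'name': 'schedule_meeting',
#  'description': 'Book a meeting room and send invitations.',
#  'parameters': {'topic': {'type': 'string'},
#                 'duration_minutes': {'type': 'integer'}}}
```

Richer metadata, such as preconditions, side effects, or units, would go beyond what type hints alone can express, but the principle is the same: the interface describes itself in a form an orchestrating model can consume.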
Commentators already speculate that agentic AI will erode the walled gardens of SaaS, replacing them with on-demand compositions of functionality. Further ahead, some imagine the near-instantaneous creation of almost any software system that can be coherently described. Yet challenges around stability, change management, and invariants will persist. LLMs will need to maintain awareness of product requirements, security, compliance, and backward compatibility—protecting users from both ambiguity and contradiction in their own requests.
Advances in memory, reasoning, and latent-space semantics may push us closer to that horizon, but the road remains long. For now, those eager to harness this technology’s power must tread carefully—balancing ambition with understanding, and enthusiasm with disciplined practice. This blog aims to contribute to that journey: documenting technical truths, lessons learned, and the practical wisdom emerging from experience at the edge of agentic software development.