Here is one depiction of building software: start with a single line of code. Then multiple lines turn into a function. Functions transform into components, and components get assembled into applications. In this view of the world, building software is about taking the primitives of a platform and assembling them into wonderful Rube Goldberg machines.
As a software person, one of the first tasks at the inception of any new project is to break down the problem-to-be-solved into concrete requirements. In this process, assumptions are sussed out, constraints are made explicit, and the solution space gets narrowed down. The constituent parts then get matched with new or existing code. Finally, the software gets built, UI by UI, layer by layer.
This is an oversimplification, of course. That said, most of the expertise and experience in software engineering lies in knowing what to use, what to build, and how to do both in ways that meet the maintainability and business1 goals. As these folks grow in seniority and tenure, they improve their ability to handle larger problem scopes and their speed in resolving them down into details.
That’s the dominant mode of working today – let’s call that working forward.
That’s about to change in a profound way. As it turns out, Large Language Models (LLMs) are amazing at filling in the blanks2. They can take in very vague requests and turn out… something. As LLMs proliferate through our tools and workflows, it’s worth taking a serious look at the downstream effects.
Let’s play out one example. Our protagonist wants to create an app for all the things they have to do — say, a Todo app. They go to an LLM-powered service and type in their intent. After a short wait, the LLM generates a new, functional, unique-but-not-quite Todo app. Our protagonist clicks around this application to start using it, only to quickly realize that they want due dates. They go back to the service, update the prompt, and get a new version in response. Then they realize they want the Todo items to reference each other. Maybe they want the organization to be different. So on and so forth. After multiple iterations — the majority of the time spent using and testing the application, plus a small slice of GPU time — the project is complete. Our protagonist moves on to other concerns.
I call this working backwards. Backwards in relation to the forward direction of building software. Backwards because the process always starts with the finished product, and then the proprietor has to probe around to refine the work until it fulfills their needs. And backwards because this process generates a lot of “completed products” that get discarded until our protagonist finds one that completely fulfills what they thought they wanted.
This is not historically how software is built, but there is no reason why it couldn’t be done this way. The main constraint has been the human effort to create, modify, and maintain the software such that it satisfies the functional and performance requirements placed on software tools. Because of that cost, it makes sense to build it right once as opposed to building it over and over again. With LLMs, the cost gets reduced to a tiny fraction; as a result, we should expect the process to shift to accommodate this new financial reality.
There are obvious advantages and disadvantages. The main advantage in my mind is the ability to defer and even delegate decisions during the building process. It is notoriously difficult to make all the right trade-offs ahead of time without building up an understanding of how each of these decisions affects the final product. With real software being built on the other end, we turn theoretical decisions into tangible outcomes — a real artifact we can probe at with sticks of different concerns.
On the disadvantage end, over time, building software will be divorced from the craft of building it. As with all knowledge gaps, the proprietor may lack the knowledge of how to shape and work with the material of software, and the common case ends up being solutions that are neither here nor there. Put differently, while software often feels like magic, it is eventually bound by physical and cost constraints. If every project is a wish cast without a tangible understanding of the trade-offs, then it is no surprise when things do not work.
There is also no guarantee we will ever get there. The technology feels possible if we extrapolate from the state of the art3. Many companies, such as Cursor, are attempting smaller and more focused versions of these projects, and the amount of human and financial capital invested in this is staggeringly large. Then again, technological progress is never guaranteed to be up and to the right.
The current moment feels like a coin toss in mid air. Heads, we get a world of abundance enabled by cheap cognitive construction contraptions. Tails, the experiment fizzles out and the cranks keep turning.
Regardless, it is clear from the current vantage point that the way software is built will evolve. The tools to bring ideas into implementations quickly and with high fidelity are rapidly improving, while the bar for human expertise–creativity, judgement, and taste–will rise. There will be obvious challenges, such as a potential influx of low-effort solutions. But there is real upside here as well. For those of us who care about exploring difficult problem spaces and rigorously testing solutions, the ability to iterate quickly both vertically and laterally will be a boon. There will be a shift from hypotheticals to real working prototypes; we will be shackled less by cost and ability and more by our imagination; we will learn how to work backwards while sharpening our ability to discern and adjudicate. And there will be a rethinking of how we work. I hope that when we get there, we will all be the better for it.
1. I abhor the use of the word business here, but cannot find another word that encapsulates the calculus between performance, reliability, cost, outcomes, and all the things people pay other people to build software for. ↩
2. Also called hallucination. ↩
3. As of writing, the o1 model from OpenAI is the bleeding edge. ↩