observation: the future of building software
I watched the a16z interview with Peter Steinberger, the creator of OpenClaw. Peter is very enthusiastic about the future of agents and how deeply they will be integrated into our lives. A few main things he shared:
- We will each have an agent that acts on our behalf, and it will talk with other agents. Essentially, bot-to-bot interactions, e.g. our bot talking with a restaurant's bot to order food or book a reservation.
- Agents will aid software engineers not only with programming problems but with personal problems as well.
- UI = complexity. Having a personal agent removes that and lets us interact with software in a more natural way -> "80% of apps will disappear", and maybe the surviving apps will be the ones with sensors that agents don't have. He also prefers a CLI over an MCP interface.
His build approach:
- OpenClaw wouldn't be possible without Codex; it was insightful to learn that it was built entirely with Codex.
- Multiple parallel checkouts > worktree
- He rarely needs to look at the code; he mostly just discusses it with his agent, and only digs in himself in some "gnarly" cases.
> We used to hunt the bug down in the codebase with our bare eyes
>
> — Angie Jones (@techgirl1908) February 9, 2026
Peter is an engineer with many years of experience in the software industry, now coming back from retirement to build software for the community.
As someone with more experience, he has the skills to understand technical trade-offs and architectural design, which lets him think things through and make better decisions. Entry-level software engineers lack the depth that would let them make those calls. This is an inherent problem, because it fosters the idea behind "vibe-coding": that one can engineer and build software by copy-pasting, or by looping feedback back to agents, without thinking about the why. Engineering is about learning systems and design. But how can one learn in a market where shipping to production is expected to take less than a week?
I guess one could measure someone's problem-solving ability by comparing their token usage against the impact of the software they ship.
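A rough sketch of what that comparison could look like, purely as an illustration: the impact score and token counts below are hypothetical placeholders, not anything measured or mentioned in the interview.

```python
# Hypothetical sketch of that metric: impact delivered per million tokens.
# Both "impact_score" and "tokens_used" are made-up placeholders, not anything
# from the interview; real impact is much harder to quantify than a number.

def solving_efficiency(impact_score: float, tokens_used: int) -> float:
    """Return impact per million tokens spent with the agent."""
    if tokens_used <= 0:
        raise ValueError("tokens_used must be positive")
    return impact_score / (tokens_used / 1_000_000)

# Two engineers shipping work of similar impact, one burning 10x the tokens.
print(solving_efficiency(impact_score=8.0, tokens_used=2_000_000))   # 4.0
print(solving_efficiency(impact_score=8.0, tokens_used=20_000_000))  # 0.4
```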
In the video, he also mentioned how our standards rise every time a new model comes out. The model from last month doesn't get less intelligent; it's that we adapt to the new models and raise our expectations. This makes me ponder human nature and its want for something better the moment a better thing appears. When will we more often highlight appreciation for what we have, and the cost of that desire, and what we learned from past models, rather than just replacing them?