$ cat ./blog/mastering-ai-first-applications.md

AI-native Applications: A New Era of Intuitive Software

--ai --software-design --ux --llm

AI-native applications are fundamentally rethinking what software can do when it has the ability to learn, adapt, and interact more naturally with humans.

Rethinking What “Software” Even Is

When I talk about AI-native applications, I’m not talking about slapping an “AI” label on an old product and calling it a day. To me, AI-native applications are designed from the ground up to harness the power of artificial intelligence. They’re not just adding AI as a feature. They’re fundamentally rethinking what software can do when it has the ability to learn, adapt, and interact more naturally with humans.

I think about it a bit like the shift from desktop applications to mobile applications, because that shift is such a useful analogy. The very first mobile applications tried to replicate what we did on desktops, and that did not work very well, because everything was designed for a mouse and a keyboard rather than a tactile touch, swipe, and drag kind of experience. After a while, we learned what designing a good mobile application actually meant, and things got better, because people finally leaned into what phones were good at instead of pretending they were tiny PCs. AI applications are following a similar pattern: we first tried to bolt AI onto existing applications as augmentation, when instead we should be designing new styles of applications around what AI is actually capable of.

So for me, AI-native means you start with the assumption that the system can understand language, that it can learn from patterns, that it can adapt as it goes, and you let that reshape the interface, the workflow, and even the business logic. We’re not just sprinkling intelligence on top of buttons and forms; we are asking different questions about what the software is for, how it should behave, and what kind of relationship it can have with a human being using it. That shift is where the interesting stuff begins, and it’s where so many of the old assumptions about software suddenly feel very small.

Bridging Two Very Different Worlds

One of the key impact areas for AI is tied to a tension that has existed since the beginning of computing: there has always been a disconnect between how computers operate and how human beings see the world. Computers are very black and white, zero and one, yes or no, click on an exact X and Y coordinate. They embody binary, structured thinking, and everything about traditional software reflects that at some level, from menu structures to error messages that pretend ambiguity doesn't exist.

Human beings, on the other hand, are messy. We howl at the moon. We smell pheromones. We have emotions we don't understand. We meditate, we dream, we suffer. We live life as human beings who are faulty and colorful and flexible and indefinable at times, and then we sit down in front of a machine that wants us to compress all of that weirdness into a sequence of perfectly structured clicks and keystrokes. Computers and human beings have such fundamentally different natures that connecting them takes skill, and it imposes real cognitive overhead on the human being trying to interact with the computer, which is why so much software ends up feeling like work even when the task itself isn't that complicated.

Now, LLMs form a large part of a bridge between those two realities, because they can understand the messy nature of human communication. They can translate it into computer actions and reflect computer actions back in human language, which suddenly changes who has to carry the burden of understanding whom. They're bridge builders. They allow completely new interactions we haven't had before, where instead of users having to learn the rigid language of software, the software meets them closer to the language and fuzziness of their actual lives.
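
To make that translation concrete, here's a minimal sketch of the pattern in Python: free-form text goes in, the model is asked to emit one structured action, and ordinary code executes it. The call_llm() helper and the action schema are hypothetical stand-ins I made up for illustration, not any particular product's API.

```python
import json

def call_llm(prompt: str) -> str:
    """Stand-in for a real model call; returns a canned response here
    so the sketch runs end to end without an API key."""
    return '{"action": "reset_password", "account": "dana@example.com"}'

SYSTEM = (
    "Translate the user's request into one JSON action: "
    '{"action": "<reset_password|track_order|escalate>", "account": "<email>"}'
)

def handle(user_message: str) -> str:
    raw = call_llm(f"{SYSTEM}\n\nUser: {user_message}")
    act = json.loads(raw)  # the model speaks "computer" on our behalf
    if act["action"] == "reset_password":
        return f"Password reset link sent to {act['account']}."
    if act["action"] == "track_order":
        return "Looking up your latest order..."
    return "Let me hand this over to a person."

# The user never learns our schema; the LLM carries that burden.
print(handle("hey, I can't get into my account, it's dana@example.com"))
```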

When AI-native applications take that bridge seriously, they don’t just make a chatbot that sits beside the same old interface; they reduce that cognitive friction, making technology feel more intuitive and responsive. Because they learn from patterns and interactions over time, AI-native applications can become more personalized and efficient than traditional software that follows fixed rules, and that gradual adaptation is what makes them feel less like tools and more like collaborators.

From “Press 1 for Frustration” to Conversations That Actually Help

A very concrete example where this shift is already visible is customer support, because almost everyone knows what it feels like when it goes wrong. One of the most obvious impact areas is chat agents that can ask questions, handle follow-ups, and converse with an end customer without needing a human being for most of the basic questions. That is a win for both sides when it's done well, because people can just talk to it like a human being, and companies don't need to put a person on every single "I forgot my password" or "where is my order" interaction.

Previously, you either had an automated system, press one for this option, press two for that option, which is frustrating and limiting and doesn't allow you to express yourself as a human being, or you were stuck in a queue waiting to talk to a human being, and that could take a while. Even then you don't know whether you and that human being will communicate well with each other, which means you could wait twenty minutes only to repeat yourself, get misunderstood, and leave even more annoyed than when you started. A well-trained AI using retrieval-augmented generation (RAG) and similar tools over the corpus of information about your product or company can do a good job of answering most questions, and most questions are pretty simple, so there is a lot of low-hanging fruit there.
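
If you haven't seen RAG up close, here's a deliberately tiny sketch of the idea: retrieve the passages of your own documentation most relevant to the question, then hand them to the model alongside it. Real systems rank passages with embedding models and vector stores; the word-overlap scoring and the helper names below are stand-ins I chose so the example runs on its own.

```python
import re

# A toy retrieval-augmented generation (RAG) loop. Real systems score
# relevance with an embedding model and a vector store; plain word
# overlap is used here only so the sketch runs with no dependencies.

DOCS = [
    "Password resets: users can reset their password from the login page.",
    "Shipping: standard orders arrive within 3 to 5 business days.",
    "Refunds: refunds go back to the original payment method.",
]

def tokens(text: str) -> set[str]:
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def retrieve(question: str, k: int = 2) -> list[str]:
    """Rank docs by how many words they share with the question, keep top k."""
    q = tokens(question)
    return sorted(DOCS, key=lambda d: len(q & tokens(d)), reverse=True)[:k]

def build_prompt(question: str) -> str:
    context = "\n".join(retrieve(question))
    # In a real system this prompt would go to the model: call_llm(prompt).
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

print(build_prompt("how long does shipping take?"))
```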

At the same time, there are limits, and when you reach that limit, people wish they were talking to a human, which is a very human reaction in itself. Having that escape hatch, the system saying, in effect, "hey, I'm not able to satisfy this person's request anymore, let me escalate this to one of our help desk agents," is important to make sure it keeps being a good experience. To me, that balance between automation and human fallback is part of what it means to design AI-native systems responsibly, because it's not about replacing people everywhere; it's about being honest about where the technology is strong and where empathy, nuance, or responsibility still belong with a human being.
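
The mechanics of that escape hatch can be as simple as a routing check wrapped around the model's draft reply. Here's a rough sketch under assumed names (HumanQueue and classify_confidence() are made up); real systems typically combine model uncertainty, sentiment signals, and explicit requests for a person.

```python
from dataclasses import dataclass, field

@dataclass
class HumanQueue:
    """Hypothetical stand-in for a real help-desk ticketing system."""
    tickets: list[str] = field(default_factory=list)

    def escalate(self, transcript: str) -> str:
        self.tickets.append(transcript)
        return "I'm connecting you with one of our help desk agents."

ESCALATION_PHRASES = ("talk to a human", "real person", "agent please")

def classify_confidence(reply: str) -> float:
    """Placeholder: real systems use model logprobs or a grader model."""
    return 0.2 if "i'm not sure" in reply.lower() else 0.9

def route(user_message: str, draft_reply: str, queue: HumanQueue) -> str:
    wants_human = any(p in user_message.lower() for p in ESCALATION_PHRASES)
    if wants_human or classify_confidence(draft_reply) < 0.5:
        return queue.escalate(user_message)  # the escape hatch
    return draft_reply

queue = HumanQueue()
print(route("my invoice is wrong, agent please", "I'm not sure.", queue))
```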

So when people ask if users are more satisfied now that AI can provide more natural and helpful interactions, I think it’s a mixed bag, which is fair. The potential is absolutely there, and in many cases it is better than pressing digits into a void, but it lives or dies on whether the system is genuinely well-trained on relevant information and whether that escape hatch exists when it should.

Language as the Operating System

One of the most mind-blowing aspects of AI-native applications, to me, is how they use language not just as an interface for humans but as connective tissue inside complex systems. I remember Google's robotics research finding that when a robot with a camera and sensors encountered a scene it didn't know how to solve, the best way to find an approach was to express what it was seeing in human language and then have an LLM take that description and use it to construct a solution that could be executed.

So you have a robot looking at the world, effectively saying something like, "I see this object blocked by that obstacle," and instead of trying to crunch that only in matrices and coordinates, it passes the baton in words to another system that reasons over that description. It's absolutely mind-blowing that those kinds of applications exist, and it's the fuzziness of human language, plus the sheer breadth of writing we've indexed and processed, that makes these magical moments possible, because language itself becomes a sort of universal adapter between perception, reasoning, and action.
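
You can see the shape of that handoff in a few lines of code. Both functions below are stubs with canned outputs and made-up names (describe_scene(), plan_from_description()), but the structure, perception emitting prose and a planner consuming it, is the interesting part.

```python
def describe_scene(camera_frame: bytes) -> str:
    """Stub for a vision-language model: pixels in, prose out."""
    return "A red cup is on the table, blocked by a cereal box."

def plan_from_description(description: str) -> list[str]:
    """Stub for a planning LLM: prose in, executable steps out."""
    if "blocked" in description:
        return ["move the obstacle aside", "grasp the cup", "lift the cup"]
    return ["grasp the cup", "lift the cup"]

# Language is the adapter: neither function knows the other's internals,
# they only share a sentence.
frame = b"<raw camera bytes>"
for step in plan_from_description(describe_scene(frame)):
    print("execute:", step)
```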

That flexibility is a hallmark of AI-native design: you accept that human language is messy and ambiguous, but you don't see that as a bug to be eliminated; you see it as a feature to be harnessed. When AI systems start talking to each other in ways that look like how we talk to each other, the boundaries between "user interface," "backend," and "logic" feel less rigid, and entirely new application patterns come into view. We're only beginning to figure out what these kinds of applications look like, and that's exciting, because every time we think we've seen the coolest example, another one pops up that stretches our imagination a little more.

I feel that same thing in smaller, more personal tools too. The application I'm using right now for this interview, which is fully AI-driven and has research agents and agents that look for tensions, gaps, and new trends, is an incredibly well-designed AI application that turns ramblings and thoughts and texts and long speeches into a cohesive, structured narrative, still in my tone of voice, but well-written, well-organized, and flowing. I don't have to be nervous anymore about how to structure sentences, and I can just focus on what I want to say, which is an absolutely wonderful experience, because again it is reducing cognitive overhead rather than adding more.
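
I have no idea how that tool is built internally, but the general pattern it suggests, a pipeline of specialized agents each making one focused editorial pass, sketches out something like this (every name here is hypothetical):

```python
# A hypothetical multi-pass agent pipeline. Each "agent" here is just a
# function; in a real tool each pass would be its own model call.

def research_pass(draft: str) -> str:
    return draft + "\n[research: supporting sources attached]"

def tension_pass(draft: str) -> str:
    return draft + "\n[tensions and gaps: flagged for follow-up]"

def narrative_pass(draft: str) -> str:
    return draft + "\n[narrative: restructured, original voice kept]"

PIPELINE = [research_pass, tension_pass, narrative_pass]

def edit(raw_rambling: str) -> str:
    text = raw_rambling
    for agent in PIPELINE:  # each agent makes one focused editorial pass
        text = agent(text)
    return text

print(edit("long rambling speech about AI-native apps..."))
```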

Misconceptions, Hype, and the Question of “Good”

With all this potential, it's no surprise that AI-native applications are surrounded by a lot of noise, and one of the biggest misconceptions I see is tied to a weird split between overestimation and underestimation. People either tremendously overestimate what AI can do or tremendously underestimate it, and we see that in very concrete ways in how AI is being applied. Y Combinator has looked at the number of successful startups over time, pre- and post-AI, and that number actually went down a little bit, which is not the story people expect when they assume every new founder with an API key will change the world.

Being able to vibe-code things did not make everyone a genius, and it does not mean that every brain fart with an AI label on it is a successful idea. That's one of the places where we definitely have to make some adjustments, because the presence of AI does not magically compensate for a lack of product thinking, understanding of users, or genuine problem-solution fit. Just because something says AI doesn't mean it's good. We still have to figure out exactly what good looks like in AI, and I don't think we should shortcut that process with hype or fear.

At the same time, there are many applications where, if you started with a human-language-first approach, you could build completely new systems that are so much more pleasant to use, and those often get underestimated. The disconnect between how computers operate and how human beings see the world is huge, but LLMs serve as a bridge by understanding the messy nature of human communication and giving us a new medium to explore. We're just scratching the surface of what AI-native applications can be, and I find that incredibly energizing, because the real work now is less about proving the technology "works" and more about shaping it into experiences that respect human complexity instead of flattening it.

To me, AI-native applications are an opportunity to finally stop forcing people to think like machines and to explore what happens when the software, for once, takes on more of the burden of understanding us.