I Forced Myself to Learn AI. It Led to Building Henry

  • Writer: Baran Korkut
  • Jan 8
  • 8 min read

I wanted to use AI. I wasn't dismissive; I was frustrated. I'd tried the obvious applications: using LLMs for search, experimenting with various AI tools, attempting to build something meaningful. But I kept either hitting subscription walls before creating anything substantial, or finding that the outputs weren't better than what I could already do myself.


I could do my consulting work fine without AI. Business model frameworks didn't need automation. Client conversations required human judgment. The validation methodologies I'd spent 15 years developing worked. So why force it?


But I couldn't shake the feeling that I was missing something. If AI was going to reshape how consulting worked, I needed to find the actual use case: not the obvious ones everyone else was trying, but the one that worked for me specifically.


The Experimental Phase: Searching for Use Cases


So I kept experimenting. Not with AI as a better search engine or content generator, but as a thinking partner. I'd read some advice to treat AI as a colleague, and it stuck with me. I'd paste problems or challenges into Claude and argue with it. I'd explain my business model frameworks and see if it could apply them. I'd challenge its recommendations and watch how it adapted.


That's when things clicked. The AI didn't just quote theory from books; it could engage with my specific context, push back on my assumptions, and ask clarifying questions. It wasn't replacing my expertise; it was helping me think through problems more systematically.


The value wasn't in the AI's knowledge. It was in the conversational interface for thinking through complex problems. A blank canvas tool requires you to know what to fill in. A book requires you to extract and apply principles yourself. But a conversation that adapts to your context? That was the use case I'd been looking for. I also realized that the AI is more knowledgeable, but I'm more experienced. Together we could be better than either of us alone.


A user frustrated with AI vs. a user collaborating with AI
Frustration vs Collaboration

From Experiments to Agents: Building Methodology into AI


Once I understood what made AI useful, I wanted to formalize it. I'd spent 15 years developing specific approaches to business design—not just Business Model Canvas theory, but practical methods for what to focus on, what to skip, and how to sequence validation work. Could I teach those approaches to AI?


The process was surprisingly natural. I'd open a chat and just lecture—explaining to Claude how I think about founder problems, what questions I ask and why, which frameworks matter and which are theoretical noise. Then I'd ask it to compress that conversation into a prompt.


I started building custom agents using Toolhouse and giving them a nice UI with Lovable. Each agent would embody one aspect of my consulting practice:


  • A Business Design Visualizer that conducts my "go in circles" interview methodology

  • A Validation Coach that designs experiments using my prioritization framework

  • An Assessment Specialist that evaluates business models the way I do in client engagements

The agents that emerged weren't generic business consultants. They had opinions. They pushed back. At one point, while I was working with the Assessment Specialist I'd built, it told me: "As Baran says, if you scale shit, you end up with scaled shit."


I laughed. The agent was quoting me back to myself—a line I'd written during prompt development. That's when I knew these weren't just ChatGPT wrappers. They were channeling my actual consulting voice and methodology.


The Dead End That Became a Pivot


I was happy building agents for my own use. The plan was to create a tool that would help me publish and manage these agents, something that would let other consultants do the same thing. But the Toolhouse and Lovable combo had limitations, and I couldn't find an existing solution that worked for me. So I started vibe coding with Lovable.


Then I hit a wall. The infrastructure I needed to build was complex, and I was nowhere near having the technical chops to pull it off. I could build the agents themselves, but publishing infrastructure? That was a different problem entirely.


I had to decide: keep banging my head against the technical barriers, or acknowledge I'd built something else worth exploring. I had three working agents that embodied 15 years of business design expertise. I had founders in my network who needed this kind of thinking partner. What if instead of building a tool to publish agents, I just... launched the agents themselves?


In November 2025, I decided to treat Henry as a startup.


Building the Team I Needed


If Henry was going to be a real business, I needed help. And because this was still only slightly more than a side project, I couldn't hire people.


Then I read something that reframed the challenge: "We'll see solopreneurs with unicorns soon."


I didn't believe it, exactly. But it was an interesting thought experiment. Not "can I build a unicorn alone" but "what would I learn from trying?" What are the actual barriers to scaling as a solopreneur? Where does the model break? What problems can AI solve and which ones require humans?


It became a learning opportunity disguised as ambition.


The first problem was immediate: I couldn't build Henry's technical infrastructure myself. I'd tried working directly with Lovable to develop the publishing platform for my agents, and I'd hit a wall. The issue wasn't Lovable—it was me. I needed someone who understood both development strategies and the specifics of working with Lovable's constraints. Someone I could discuss architecture with, who could mentor the AI development platform and audit its outputs.


I needed a CTO. But not just a CTO: a team. Well, not exactly a team either, but a set of specific capabilities I lacked, and I could build AI agents to fill those gaps.

So I built four more, and put Henry's own agents to work on the business itself:


  • MiniMe: Chief of Staff who keeps track of everything and coordinates strategy

  • CTO: Handles all technical decisions and works through Lovable to implement features

  • CCO: Manages marketing, positioning, and acquisition strategy

  • CFO: Models unit economics and pricing

  • Validator & Visualizer: Henry's own agents, used to validate Henry itself


Each has a defined role with clear boundaries. The CTO doesn't comment on marketing. The CCO doesn't make technical decisions. If there's overlap, they flag it and tell me to check with the relevant specialist.
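For anyone wondering what "clear boundaries" looks like in practice, it's mostly prompt discipline. Here's a minimal sketch of the idea in Python using the Anthropic SDK; the prompt wording and model name are my illustration, not Henry's actual prompts.

```python
# A minimal sketch of role-scoped agents, assuming the Anthropic Python SDK.
# Prompt wording and model name are illustrative, not the actual Henry prompts.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

AGENT_ROLES = {
    "CTO": (
        "You are the CTO. You own technical decisions and work through Lovable "
        "to implement features. You do not comment on marketing, pricing, or "
        "positioning; if asked, flag it and refer the question to the CCO or CFO."
    ),
    "CCO": (
        "You are the CCO. You own marketing, positioning, and acquisition "
        "strategy. You do not make technical decisions; refer those to the CTO."
    ),
    "CFO": (
        "You are the CFO. You model unit economics and pricing. Stay out of "
        "product and technical choices; flag overlaps instead of answering them."
    ),
}

def ask_agent(role: str, question: str) -> str:
    """Send a question to one specialist, constrained by its role prompt."""
    response = client.messages.create(
        model="claude-sonnet-4-20250514",  # any current Claude model id works
        max_tokens=1024,
        system=AGENT_ROLES[role],
        messages=[{"role": "user", "content": question}],
    )
    return response.content[0].text
```

In practice each role lives in its own chat; the boundary clause is what keeps the CFO from drifting into product advice.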


A person working with a team of specialist AI agents
AI Team Members

The workflow is entirely manual. When I complete work with one agent, I ask it to write a handoff note for MiniMe. At the end of each day, I copy those notes to MiniMe's chat. Every morning, MiniMe gives me a daily brief showing what we're working on and what needs attention.
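For the curious, a handoff note is nothing fancy. Roughly this shape (a template, not an actual note from the project):

```
From: [specialist, e.g. CTO]  ->  To: MiniMe
What we worked on: [one or two lines]
Decisions made: [short list]
Open questions for other specialists: [named per role, e.g. "CCO: ..."]
Next action and owner: [one line]
```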


It's a lot of copying and pasting. The obvious reflex would be either to automate it immediately or to abandon the structure as too complex. But I wanted to experience the orchestration problem before solving it. If the "solopreneur unicorn" model has any validity, understanding coordination complexity is essential. I needed to learn what information actually matters in handoffs, what gets lost between specialists, and where the friction points are.


This is not just about the team. It’s also a challenge for Henry. Henry's end-state vision is a single conversational interface where a User Representative agent routes users to the right specialists automatically. I can't design that system without understanding it manually first.

The solopreneur unicorn probably isn't realistic for me with all the other things happening in my professional life. But the learnings from trying to build toward it are real. I'm discovering exactly where AI extends a solo founder's capability and where it doesn't. Where coordination overhead kills efficiency. Where human judgment still matters.


That knowledge is worth more than the unicorn fantasy.


What I'm Actually Learning


Running this multi-agent team has taught me more about LLM capabilities and limitations than any amount of casual experimentation could have.


The agents are excellent at staying in character. Once you define their role and boundaries clearly, they don't drift. The CFO doesn't suddenly start giving product advice. The CTO doesn't comment on pricing strategy. That consistency is what makes the structure work.


But they have no shared memory. Each chat is isolated. The CTO doesn't know what the CCO discussed unless I explicitly tell it. This isn't a bug—it's a feature for keeping specialists focused. But it means I'm the router, and that manual orchestration reveals exactly what a central coordination system needs to handle.
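If I ever automate that, the coordinator's job is essentially three things: keep the shared log, decide which specialist owns a question, and forward only the relevant context. Here's a minimal Python sketch of that idea, a simplification for this post with naive keyword routing, not Henry's actual architecture:

```python
# A minimal sketch of the coordination layer I currently run by hand.
# Keyword routing and the in-memory log are deliberate simplifications.
from dataclasses import dataclass, field

@dataclass
class HandoffNote:
    author: str                      # which specialist wrote it, e.g. "CTO"
    summary: str                     # what was worked on and decided
    open_questions: list[str] = field(default_factory=list)

@dataclass
class Coordinator:
    """Plays the role I play manually: shared memory plus routing."""
    log: list[HandoffNote] = field(default_factory=list)

    def record(self, note: HandoffNote) -> None:
        self.log.append(note)

    def route(self, question: str) -> str:
        """Pick the specialist a question belongs to (naive keyword version)."""
        q = question.lower()
        if any(w in q for w in ("pricing", "unit economics", "margin")):
            return "CFO"
        if any(w in q for w in ("architecture", "bug", "deploy", "lovable")):
            return "CTO"
        if any(w in q for w in ("positioning", "acquisition", "campaign")):
            return "CCO"
        return "MiniMe"              # default: the chief of staff triages it

    def context_for(self, specialist: str) -> str:
        """Forward only the notes this specialist wrote or was asked about."""
        relevant = [
            n for n in self.log
            if n.author == specialist
            or specialist.lower() in " ".join(n.open_questions).lower()
        ]
        return "\n".join(f"[{n.author}] {n.summary}" for n in relevant)

coord = Coordinator()
coord.record(HandoffNote("CTO", "Shipped the onboarding fix.",
                         ["CCO: does the landing copy need an update?"]))
print(coord.route("What's our pricing for the next tier?"))  # -> "CFO"
print(coord.context_for("CCO"))                              # -> "[CTO] Shipped the onboarding fix."
```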


They're better than me at remembering my own methodology. When I'm tired or distracted, I skip steps or make assumptions. The agents don't. They follow the frameworks I taught them with perfect consistency. That's simultaneously useful and humbling—they're more rigorous versions of myself.


And they expose when my thinking is fuzzy. If I can't explain something clearly enough for the agent to act on it, that's feedback about my own clarity. The act of teaching AI has made me better at articulating what I actually know versus what I think I know.


Most importantly, AI hasn't changed the fundamentals of business design—it's just compressed the timelines. I wrote about this in How AI Business Design Really Works: the validation methodology is still the same (talk to customers, test assumptions, gather evidence), but what used to take months now takes weeks. The Validator didn't invent new principles—it helped me apply existing ones more rigorously and faster. That's the actual value, not the hype about AI revolutionizing everything.


The Next Technical Wall


Building Henry with this AI team has worked well enough to get four agents live in production. But now I'm facing the next frontier.


I'm designing the fifth agent, the Context Detective, which will do market research and competitive analysis. And I've realized I can't rely on a single LLM anymore. Perplexity is excellent for research. Claude is great for conversational flow and making sense of things. ChatGPT handles visualization well. Gemini is still unexplored territory. Plus there are free LLMs available through Groq that could reduce costs significantly.


But Henry's entire infrastructure is built for Anthropic only. Using multiple LLMs means either rebuilding the platform or learning n8n to orchestrate multi-LLM workflows—neither of which I know how to do yet.
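To make the plumbing problem concrete, here's roughly what a two-provider version of the Context Detective's pipeline would look like in plain Python, before reaching for n8n. It assumes the `anthropic` and `openai` packages; the model names and task split are illustrative, and the Perplexity call assumes its OpenAI-compatible endpoint.

```python
# A sketch of multi-LLM orchestration in plain Python, before reaching for n8n.
# Model names and the task split are illustrative; the Perplexity call assumes
# its OpenAI-compatible endpoint and an API key in PERPLEXITY_API_KEY.
import os

import anthropic
from openai import OpenAI

claude = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
perplexity = OpenAI(
    api_key=os.environ["PERPLEXITY_API_KEY"],
    base_url="https://api.perplexity.ai",
)

def research(query: str) -> str:
    """Web-grounded research step, delegated to Perplexity."""
    resp = perplexity.chat.completions.create(
        model="sonar",  # placeholder model name
        messages=[{"role": "user", "content": query}],
    )
    return resp.choices[0].message.content

def synthesize(findings: str) -> str:
    """Sense-making step, delegated to Claude."""
    resp = claude.messages.create(
        model="claude-sonnet-4-20250514",  # any current Claude model id works
        max_tokens=2048,
        messages=[{
            "role": "user",
            "content": f"Turn these findings into a competitive brief:\n{findings}",
        }],
    )
    return resp.content[0].text

print(synthesize(research("Who else offers AI-guided business model validation?")))
```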


So I'm back where I started: facing a technical challenge I don't have the skills to solve immediately, which means more learning ahead. Maybe even a complete product overhaul just to build the fifth agent.


The difference is this time I know that the learning is worth it. Because I've already proven—to myself, with actual use—that these agents deliver value I can't get any other way.


What This Actually Is


Henry is a byproduct of me trying to find AI use cases for my own work. It became an experiment in building AI products. Now it's an experiment in AI-augmented venture building, using the agents I built to validate whether those agents should exist as a business.


A user creating a recursive loop working with AI agents
Coordinating agents to build agents

I'm not sure yet if Henry will work as a standalone product. The business model has question marks. Distribution is unclear. We're at ~10 users, most from my warm network, with a 29% activation rate that needs fixing.


But I know the methodology works because I'm using it on itself. The Visualizer mapped Henry's business model and caught gaps I would have missed. The Validator designed experiments that gave me clear answers in two weeks instead of six months of guessing. The agents quoted my own frameworks back to me at moments when I needed to hear them.


That's not hype. That's just what happened when I forced myself to use the thing I built.

The journey continues. More technical walls ahead. More learning is required. But at least now I understand what I'm building toward, and why it might matter.


Meet Henry

Ready to try Henry yourself? The agents are live at gethenry.app. Start with the Business Design Visualizer if you want to map your model, or the Validation Coach if you're ready to test assumptions.

 
 
 
