Chatbot Experiments
AI chatbots are my playground. Before they are products, they are experiments -- quick, messy, iterative attempts to understand what conversational AI can do and where its limits are.
The Fascination
The shift from coding everything manually to building with AI opened up a specific fascination: what happens when you make the AI the product itself? Not AI as a tool that helps you code, but AI as the thing the user interacts with.
Chatbots are the simplest expression of that idea. A user types something, the AI responds, and the quality of that response determines everything. No fancy UI can save a bad response, and a great response does not need a fancy UI.
Building with Claude
Claude (Anthropic) is my primary thinking partner and building tool. I use it for code generation, problem-solving, and prototyping ideas. But I have also experimented with building conversational interfaces powered by Claude -- testing how it handles different types of prompts, how it maintains context, and what happens when you push it in unexpected directions.
The experiments range from simple chatbots that answer questions about a specific topic to more complex prototypes that try to maintain personality, remember context across a conversation, and handle edge cases gracefully.
The Prototyping Process
Chatbot experiments follow a fast cycle:
- Start with a prompt -- define what the chatbot should be, how it should respond, what persona it should have
- Test it -- throw real questions at it, try to break it, see where it fails
- Iterate on the prompt -- refine the instructions, add constraints, improve the output
- Decide if it is worth building further -- most experiments stay as experiments, some graduate to real features
This cycle can happen in an hour. That speed is what makes chatbot experiments so addictive -- the feedback loop between idea and working prototype is almost instant with modern AI tools.
What I Have Learned
Prompt Engineering is Real
The difference between a good chatbot and a bad one is almost entirely in the system prompt. The same model can be brilliant or useless depending on how you direct it. This connects directly to the AI + Frnds thesis: the new skill is not coding, it is directing.
Context is Everything
Short conversations are easy. Long conversations -- where the chatbot needs to remember what was said ten messages ago and build on it -- are where things get hard. Context window management is a real engineering challenge, not just a prompt problem.
Personality is Harder Than Knowledge
Making a chatbot that knows things is straightforward. Making a chatbot that has a consistent personality, that feels like talking to a real entity rather than a search engine, is much harder. The uncanny valley of chatbot personality is real.
Connection to the Bigger Picture
These experiments are not random. They feed into my understanding of what AI can do, which feeds into AI + Frnds events, which feeds into the community, which feeds into the network that supports all my ventures. Understanding AI at the experimentation level -- not just the usage level -- makes me a better builder and a better teacher.
The chatbot experiments also connect to a broader interest in emotional tech -- technology that understands and responds to how people feel, not just what they say. That thread runs through several of my projects and is something the co/Build community has explored.