First Impressions of Vibe Coding

June 4, 2025 by Peter

It seems there’s an unwritten rule that every tech article must mention AI (and yes, I’m dutifully complying!). Since ChatGPT’s launch two and a half years ago, its extraordinary capabilities and breakneck pace of development have captured everyone’s imagination. I’ve given up trying to track every new model, API, and AI-enabled tool—I can barely keep up with Simonw’s blog posts!

Perhaps the most annoying part of this AI boom is “vibe coding”, or rather, the disagreement about what the term means and what its implications are for the industry. Is it enabling non-developers to bring their ideas to reality, or is it allowing lazy developers to push buggy, insecure, unmaintainable code into production faster? Is it careless, or is it a powerful tool that handles the drudgery of software development and multiplies the expert judgement of a senior engineer?

Who cares!

My concern about AI in general and especially for coding has always been unpredictability and hallucinations. Most of my effort when writing code goes toward eliminating unpredictability and uncertainty from the programs I write, so introducing unpredictability at scale seemed … risky. I could see the potential in these tools, but I couldn’t reconcile how to fit it into my own coding experience. Enter Harper Reed.

I trust this man implicitly.

Harper is a friend and all-around cool guy, and he has shared some great technical insight and leadership over the years. He recently started writing about his experience using AI tools (starting with My LLM codegen workflow atm). I have a lot more to write about his essays another day, but for now, it is enough to know that he outlines a process that addresses my concerns, and his endorsement did more to convince me to take the plunge than 100,000 blog posts and tweets.

I wanted to see the idea refinement and spec generation in action, so I took an irritating issue from work and fed it into Harper’s process: I asked ChatGPT to help me refine the idea step by step, then turn it into a spec, and then into a prompt plan.

If you want to see it in more detail, here is a repo with the prompts and code that was generated. The progression goes:

  • spec_prompt.md – A combination of Harper’s prompt plus my idea to refine.
  • spec_conversation.md – The text of the conversation with the questions ChatGPT asked me to refine the idea, and my responses. This file isn’t used, but I was fascinated by the process so I wanted to save it.
  • spec.md – This was the spec generated by ChatGPT based on the previous conversation.
  • prompt_plan_conversation.md – Harper’s prompt for turning a spec into a series of actionable prompts and the text from spec.md.
  • prompt_plan.md – The series of prompts generated by the previous conversation.
  • todo.md – I’m not sure exactly how this was used – I thought that executing the prompt plan was going to mark these items as completed, but instead they were marked completed in prompt_plan.md.

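To make the shape of that refinement loop concrete, here is a minimal sketch of the first step. The exact prompts are in the repo above; this version uses the OpenAI Python SDK with a paraphrased refinement prompt and a placeholder idea, so treat it as an illustration rather than the real spec_prompt.md.

```python
# Minimal sketch of the idea-refinement loop, not Harper's actual prompt text.
# Assumes the OpenAI Python SDK and OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

# Paraphrase of the refinement instruction; the real one is in spec_prompt.md.
REFINE_PROMPT = (
    "Ask me one question at a time so we can develop a thorough spec for this idea. "
    "Build on my previous answers. Here's the idea:\n\n"
)

idea = "Clean up inconsistently named files in our shared docs folder."  # placeholder idea
messages = [{"role": "user", "content": REFINE_PROMPT + idea}]

# A few rounds of question-and-answer; the real conversation ran much longer.
for _ in range(3):
    reply = client.chat.completions.create(model="gpt-4o", messages=messages)
    question = reply.choices[0].message.content
    print(question)
    answer = input("> ")  # my answer to ChatGPT's clarifying question
    messages += [
        {"role": "assistant", "content": question},
        {"role": "user", "content": answer},
    ]

# Once the questions dry up, ask for the consolidated spec (spec.md).
messages.append({"role": "user", "content": "Now compile everything into a developer-ready spec."})
spec = client.chat.completions.create(model="gpt-4o", messages=messages)
print(spec.choices[0].message.content)
```
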
That was all I intended to do, but I was so impressed with the output that I kept going, fed the prompt plan into Claude Code, and let it run. Forty-five minutes and $9.56 in Claude API credits later, I had a script that anyone in my company could have written, but that would probably never have been a high enough priority to be worth doing. The code didn’t look at all like how I would have written it, but it is also way more robust and feature-rich than I would have bothered making it – all it required from me was answering some simple questions about requirements and a couple bucks’ worth of tokens.
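
For flavor, here is a crude stand-in for that execution step. Claude Code actually runs interactively against the repo (editing files, running tests, making commits); this sketch only walks a prompt plan through the Anthropic Messages API one step at a time, and the file format, model ID, and "---" separator are all assumptions on my part.

```python
# Crude stand-in for "feed the prompt plan to Claude and let it run".
# Assumes the anthropic Python SDK, ANTHROPIC_API_KEY in the environment, and
# that prompts in prompt_plan.md are separated by "---" (an assumption).
import anthropic

client = anthropic.Anthropic()

with open("prompt_plan.md") as f:
    prompts = [p.strip() for p in f.read().split("---") if p.strip()]

history = []
for i, prompt in enumerate(prompts, start=1):
    history.append({"role": "user", "content": prompt})
    reply = client.messages.create(
        model="claude-sonnet-4-20250514",  # model ID is a guess; use whatever is current
        max_tokens=4096,
        messages=history,
    )
    text = reply.content[0].text
    history.append({"role": "assistant", "content": text})
    print(f"--- milestone {i} ---\n{text}\n")
```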

This was definitely vibe coded – I only skimmed the spec, I didn’t read the prompt plan at all, I only read the code for the first 5 milestones of the code generation, and I only made 2 small interventions (saying that I wanted a file regex to only match filenames in all caps, and asking for the update to the prompt plan to be included in each commit). Other than that, I accepted everything uncritically.

Why the change of heart? First, this project is easily reversible and totally out of band of anything important happening, so it is essentially risk-free. Second, it was really easy to run the generated script and verify that it did what I expected. And finally, I was so impressed by the thoughtfulness of the refining questions and the requirements they drove, and by watching the code and tests Claude Code was writing, that my nervousness about this not-so-risky project was put to rest.

Would I vibe code so haphazardly again? Only if I felt comfortable with the first two points above. Would I use this process again? Absolutely! The thing I like about working this way is that there are many points where I could step in and verify and de-risk. I could review the spec, the generated milestones, every commit, etc, and I could get additional help and opinions from teammates as well as other AI tools (I’m curious what e.g. Claude would think of OpenAI’s generated spec).

I certainly understand why people are so obsessed and hyperbolic about what these AI tools can do. The difference now is that I feel empowered to use these tools without worrying that they will lead me astray down a confusing path.

I’ve had lots of thoughts about AI before and since, but now that I’ve pulled the trigger and started using it rather than just reading about it, there’s a lot more I want to write about, so there will be more to come.

Filed Under: Programming Tagged With: ai, chatgpt, claude, harper, openai, vibecoding
