Peter Christensen


Environmental Impact of AI: Large But Not Significant

July 22, 2025 by Peter

I know a lot of people are concerned about the environmental impact of AI, so I keep an eye out for detailed information about it. I came across two articles today (Slow Boring references Andy Masley):

  • Slow Boring: There’s plenty of water for data centers
  • Andy Masley: Why using ChatGPT is not bad for the environment (this is a very long summary of an even longer post – I have skimmed both of them)

Short version:

  • Water complaints around data centers seem to be mostly about construction-related water quality issues (e.g. well water sediment), which would come with building anything of that scale – distribution centers, warehouses, shopping centers, etc
  • Water use is much less significant than energy use
  • AI computation uses much less water and energy than other data center uses, e.g. Zoom and streaming video
  • Water use by AI is vanishingly small compared to things like raising cattle or leaking pipes
  • ChatGPT queries just don’t add up to much compared to everyday energy use (e.g. 1 query ≈ running a vacuum cleaner for 10 seconds or using a laptop for 3 minutes) or water consumption (200,000 queries ≈ one hamburger)
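To sanity-check those equivalences, here is a quick back-of-envelope calculation. The per-query energy figure, the appliance wattages, and the hamburger water footprint are my own rough assumptions for illustration, not numbers taken from the linked articles; only the 200,000-queries-per-hamburger ratio comes from the comparisons above.

```python
# Back-of-envelope check of the comparisons above.
# All constants are rough assumptions for illustration, not values
# taken from the linked articles.

WH_PER_QUERY = 3.0           # assumed energy per ChatGPT query, in watt-hours

# Energy equivalents
vacuum_watts = 1000          # typical vacuum cleaner draw
laptop_watts = 60            # typical laptop draw

vacuum_seconds = WH_PER_QUERY / vacuum_watts * 3600   # ~10.8 s
laptop_minutes = WH_PER_QUERY / laptop_watts * 60     # ~3 min

# Water equivalents
liters_per_hamburger = 2400  # assumed lifecycle water footprint of one burger
queries_per_burger = 200_000 # ratio quoted above
ml_per_query = liters_per_hamburger / queries_per_burger * 1000  # ~12 mL

print(f"1 query ≈ vacuuming for {vacuum_seconds:.0f} s "
      f"or running a laptop for {laptop_minutes:.0f} min")
print(f"Implied water per query ≈ {ml_per_query:.0f} mL")
```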

My take: AI will be a significant new industrial consumer of energy, but its water and energy use is small compared to other uses, and its impact on capital and labor markets will be orders of magnitude larger than its environmental impact.

Filed Under: AI Tagged With: ai, chatgpt, environment, openai

First Impressions of Vibe Coding

June 4, 2025 by Peter

It seems there’s an unwritten rule that every tech article must mention AI (and yes, I’m dutifully complying!). Since ChatGPT’s launch two and a half years ago, its extraordinary capabilities and breakneck pace of development have captured everyone’s imagination. I’ve given up trying to track every new model, API, and AI-enabled tool—I can barely keep up with Simonw’s blog posts!

Perhaps the most annoying part of this AI boom is “vibe coding”, or rather, the disagreement about what the term means and what its implications are for the industry. Is it enabling non-developers to bring their ideas to reality, or is it allowing lazy developers to push buggy, insecure, unmaintainable code into production faster? Is it careless, or is it a powerful tool that handles the drudgery of software development and multiplies the expert judgement of a senior engineer?

Who cares!

My concern about AI in general, and especially for coding, has always been unpredictability and hallucinations. Most of my effort when writing code goes toward eliminating unpredictability and uncertainty from the programs I write, so introducing unpredictability at scale seemed … risky. I could see the potential in these tools, but I couldn’t see how to fit them into my own coding practice. Enter Harper Reed.

I trust this man implicitly.

Harper is a friend and all-around cool guy, and he has shared some great technical insight and leadership over the years. He recently started writing about his experience using AI tools (starting with My LLM codegen workflow atm). I have a lot more to write about his essays another day, but for now, it is enough to know that he outlines a process that addresses my concerns, and his endorsement did more than 100,000 blog posts and tweets could have to convince me to take the plunge.

I wanted to see the idea refinement and spec generation in action, so I took an irritating issue from work and fed it into Harper’s process: I asked ChatGPT to help me refine the idea step by step, then turn it into a spec, then a prompt plan.

If you want to see it in more detail, here is a repo with the prompts and code that was generated. The progression goes:

  • spec_prompt.md – A combination of Harper’s prompt plus my idea to refine.
  • spec_conversation.md – The text of the conversation with the questions ChatGPT asked me to refine the idea, and my responses. This file isn’t used, but I was fascinated by the process so I wanted to save it.
  • spec.md – This was the spec generated by ChatGPT based on the previous conversation.
  • prompt_plan_conversation.md – Harper’s prompt for turning a spec into a series of actionable prompts and the text from spec.md.
  • prompt_plan.md – The series of prompts generated by the previous conversation.
  • todo.md – I’m not sure exactly how this was used – I thought that executing the prompt plan would mark these items as completed, but instead they were marked completed in prompt_plan.md.
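For a sense of how the spec-to-prompt-plan handoff could be scripted rather than done by hand in the ChatGPT UI (which is what I actually did), here is a minimal sketch using the OpenAI Python SDK. The model name and the standalone plan_prompt.md file (holding just Harper’s planning prompt) are illustrative assumptions; only spec.md and prompt_plan.md correspond to files in the repo above.

```python
# Minimal sketch of scripting the spec -> prompt-plan step.
# In reality this was done interactively in the ChatGPT UI; this only
# shows the shape of the handoff. The model name and plan_prompt.md
# (Harper's planning prompt on its own) are illustrative assumptions.
from pathlib import Path
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

plan_prompt = Path("plan_prompt.md").read_text()  # Harper's planning prompt
spec = Path("spec.md").read_text()                # the spec generated earlier

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": plan_prompt},
        {"role": "user", "content": spec},
    ],
)

# Save the generated series of prompts for the next stage.
Path("prompt_plan.md").write_text(response.choices[0].message.content)
print("Wrote prompt_plan.md")
```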

That was all I intended to do, but I was so impressed with the output that I kept going, fed the prompt plan into Claude Code, and let it run. Forty-five minutes and $9.56 in Claude API credits later, I had a script that anyone in my company could have written, but that would probably never have been a high enough priority to be worth doing. The code didn’t look at all like how I would have written it, but it is also way more robust and feature-rich than I would have bothered making it – all it required from me was answering some simple questions about requirements and a couple bucks’ worth of tokens.
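For the curious, “let it run” was nothing more elaborate than handing Claude Code the plan and answering its permission prompts. A rough sketch of an equivalent non-interactive kickoff is below; it assumes the claude CLI’s print mode (-p) behaves the way I understand it, and in practice I just pasted the plan into an interactive session.

```python
# Rough sketch of kicking off the prompt plan non-interactively.
# Assumption: the claude CLI's print mode (-p) accepts a one-shot prompt;
# in practice the plan was pasted into an interactive Claude Code session.
import subprocess
from pathlib import Path

plan = Path("prompt_plan.md").read_text()

result = subprocess.run(
    ["claude", "-p", f"Execute this prompt plan step by step:\n\n{plan}"],
    capture_output=True,
    text=True,
)
print(result.stdout)
```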

This was definitely vibe coded – I only skimmed the spec, I didn’t read the prompt plan at all, I only read the code for the first 5 milestones of the code generation, and I only made 2 small interventions (saying that I wanted a file regex to only match filenames in all caps, and asking for the update to prompt_plan.md to be included in each commit). Other than that, I accepted everything uncritically.

Why the change of heart? First, this project is easily reversible and totally out of band of anything important, so it is essentially risk-free. Second, it was really easy to run the generated script and verify that it did what I expected. And finally, the thoughtfulness of the refining questions and the requirements they drove out, along with watching what code and tests Claude Code was writing, addressed my nervousness about this not-so-risky project.

Would I vibe code so haphazardly again? Only if I felt comfortable with the first two points above. Would I use this process again? Absolutely! The thing I like about working this way is that there are many points where I could step in and verify and de-risk. I could review the spec, the generated milestones, every commit, etc, and I could get additional help and opinions from teammates as well as other AI tools (I’m curious what e.g. Claude would think of OpenAI’s generated spec).

I certainly understand why people are so obsessed and hyperbolic about what these AI tools can do. The difference now is that I feel empowered to use these tools without worrying that they will lead me astray down a confusing path.

I’ve had lots of thoughts about AI before and since, but now that I’ve pulled the trigger and started using it rather than just reading about it, there’s a lot more I want to write about, so there will be more to come.

Filed Under: AI, Programming Tagged With: ai, chatgpt, claude, harper, openai, vibecoding

