What if super-intelligent AI¹ arrives but it can be run by anyone, basically for free?

To me, this is one of the most interesting questions in the field. I’m not expecting AGI to arrive anytime soon, but I do expect models to keep getting more capable (especially for objective tasks like math and programming) and keep getting more efficient.

I had an interesting meeting last week with a founder who’s instructed his team to never worry about LLM costs – and over the last year it’s been a great bet, as costs have gone down while usage is up. He’s growing his own value, in the form of the apparatus built around the models, while the market value of the models themselves is plummeting.

The tasks models could perform a year ago are 10x cheaper to run today, and sometimes the drop is even steeper. Sam Altman wrote this week:

The cost to use a given level of AI falls about 10x every 12 months, and lower prices lead to much more use. You can see this in the token cost from GPT-4 in early 2023 to GPT-4o in mid-2024, where the price per token dropped about 150x in that time period. Moore’s law changed the world at 2x every 18 months; this is unbelievably stronger.
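To put those two rates side by side, here’s a quick back-of-the-envelope sketch (my own arithmetic, not Sam’s) that compounds each improvement rate over a few years:

```python
# Back-of-the-envelope comparison: AI costs falling 10x every 12 months
# versus Moore's law at 2x every 18 months (both rates from the quote above).

def factor_after(years: float, factor: float, period_years: float) -> float:
    """Total compounded improvement after `years`."""
    return factor ** (years / period_years)

for years in (1, 2, 5):
    ai = factor_after(years, factor=10, period_years=1.0)
    moore = factor_after(years, factor=2, period_years=1.5)
    print(f"{years} yr: AI ~{ai:,.0f}x cheaper, Moore's law ~{moore:.1f}x")
```

After five years that’s roughly 100,000x versus 10x, which is what makes “unbelievably stronger” feel fair.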

But when it comes to the efficacy of these models, it’s not as unified a story.

Sam expects, “the impact of AGI² to be uneven. Although some industries will change very little, scientific progress will likely be much faster than it is today; this impact of AGI may surpass everything else.”

I agree the impact will be uneven, but mostly because of what the best models are capable of, rather than the structure of any given industry. Models will continue to excel at objectively measurable tasks – like math and programming – while lagging behind in qualitative tasks, thanks to the nature of synthetic data. As I wrote in December:

Models will get better at testable skills: Quantitative domains – like programming and math – will continue to improve because we can create more novel, massive, synthetic datasets thanks to unit tests and other validation methods. Qualitative chops and knowledge bank capabilities will be more difficult to address with synthetic data techniques and will suffer from a lack of new organic data.
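To make that concrete, here’s a minimal sketch of the validation loop I’m describing, with hypothetical helper names (`generate_solution` stands in for a model call; this isn’t any lab’s actual pipeline): a generated program joins the dataset only if its unit tests pass.

```python
# Minimal sketch of filtering synthetic code data with unit tests:
# keep a model-generated solution only if it passes the checks.

def generate_solution(prompt: str) -> str:
    # Placeholder: in a real pipeline this would sample from a model.
    return "def add(a, b):\n    return a + b"

def passes_tests(source: str) -> bool:
    namespace: dict = {}
    try:
        exec(source, namespace)              # run the candidate code
        assert namespace["add"](2, 3) == 5   # unit tests act as the validator
        assert namespace["add"](-1, 1) == 0
        return True
    except Exception:
        return False

dataset = []
candidate = generate_solution("Write a function add(a, b).")
if passes_tests(candidate):
    dataset.append(candidate)  # only validated examples enter the training set
```

There’s no equivalent mechanical check for, say, whether a generated essay is insightful, which is why qualitative skills are harder to scale this way.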

So then let’s hone the question further: what if AI puts programmers out of a job, but everyone with a laptop has access to free programmers?

Sure, there are caveats about how skilled or unskilled these roles and models will be, but if quality and efficiency continue to march on hand in hand, the twofold effect of taking away work while simultaneously gifting us labor will be interesting.


  1. I wanted to include a footnote about how I’m using the term “AGI” here, since I generally dislike the way it is used for generating hype, fear, and whatnot. When I say, “AGI,” I mean, “an LLM-powered application which can do a human job, autonomously.” In this case, it might mean watching for GitHub issues and fixing them. I do not use the term “AGI” to suggest that these models will actually be intelligent, sentient, or anything other than programs capable of performing a job (don’t make me tap the sign!)

  2. What’s hilarious is that Sam, too, felt the need to include a footnote about his usage of “AGI,” stating: “By using the term AGI here, we aim to communicate clearly, and we do not intend to alter or interpret the definitions and processes that define our relationship with Microsoft. We fully expect to be partnered with Microsoft for the long term. This footnote seems silly, but on the other hand we know some journalists will try to get clicks by writing something silly so here we are pre-empting the silliness…” While I get the frustration, I think so much of this is due to OpenAI and Sam himself taking advantage of this vagueness when pitching OpenAI or talking about AI in general. Sometimes it’s just computer software, other times it’s super intelligence right around the corner. The language is mushy, but he’s leveraged that mushiness.