Is AI/ChatGPT as exciting as people say?

On this page, I’m collecting some links about AI that I want to refer back to. Those are below, but first I wanted to set out what I find interesting about AI right now.

I’ve been suspicious of the hype around AI for a long time, but I keep reading articles saying that AI models will revolutionise the economy, replacing millions of jobs. I’m even seeing serious articles saying that these new AIs are approaching sentience and are an existential threat to humanity.

One theory I’ve read is that the AI hype is so loud because the crypto grifters have moved on to it: there is always a profit to be made from hyping the next big thing. It was also an amazing marketing move for OpenAI to claim that its early models were too dangerous to release openly.

At the same time, very smart people I know, people I trust, are telling me there is something important here. And I’ve had several dreams about prompting AIs – so I should not be dismissing this too easily.

I’ve been playing a little with ChatGPT recently. Showing a friend how it could generate an email was an enlightening moment: he doesn’t like writing formal messages, and having a detailed text produced from a short prompt seemed revolutionary to him. My own experiments suggest that ChatGPT is good at certain types of output, but its grasp of facts is hazy. When I ask it about things I know well, such as hiking routes, it returns plausible information but misses telling details and gets significant facts wrong. This is beguiling, miraculous technology, but it (currently) has very clear limits.

Links on AI

The following are some good pieces that I have read on AI:

My current view

I find it hard to see how these huge statistical models are related to ‘true intelligence’, even as they raise questions by doing things we once thought relied on intelligence. One notable point is that (as with machine translation) these models are entirely reliant on human-produced work. This has led to ethical questions about models incorporating copyrighted works, and I note (via the Washington Post) that this blog is one of the sources for Google’s C4 data set.

I also wonder how much further these models can go. It won’t be long before the deluge of AI content begins to be absorbed into the models, which may undermine their effectiveness.

I also suspect there is a limit to the effectiveness of LLMs for problem solving. Matt Webb’s Braggoscope is the most compelling experiment I’ve seen: ChatGPT was used to classify the thousands of In Our Time episodes into the Dewey Decimal system. It’s a task where small inaccuracies cause little harm, and Webb estimates that automating it was 108x faster than doing it manually.

But for tasks like programming, much of the art lies not in producing code but in figuring out what code needs to be written. A new paradigm of programming may yet emerge from AI, but for programming as we currently understand it, the trick is defining what code we want written and making sure it achieves our aims.

The difficulty with AI lies in producing very specific text. Producing remarkable sonnets about odd subjects is breathtaking, but getting an AI to write Allen Ginsberg’s Howl or Pierre Menard’s Selections from Quixote would be a different matter.

If you want to follow what I'm up to, sign up to my mailing list
