On generative AI, or The future is now

Chris Reads
5 min read · Apr 11, 2024


Generative artificial intelligence has reached its peak. I’ve been saying this for a few months now, but I wanted to have it in writing, either for posterity or to eat crow. I do not work in, or adjacent to, the industry, but wanted to comment as an outsider. Despite the well-deserved excitement about its human-like ability to answer questions, create stunning visuals, and even write computer code, I wholly believe we are hitting the peak of the hype cycle. Though it will change the way we work and live, in some ways we don’t yet foresee, I see it as a much weaker form of AI than it passes itself off as: it will make only gradual ripples after its big splash, and it is only a small part of the pathway to artificial general intelligence.

When powerful chatbots came into the public eye in late 2022 and subsequently improved in early 2023, they were heralded as gods from the machine. Here was a program that could talk to us like a person. More than that, it could do things we couldn’t do, and more quickly. Not only had it passed the Turing Test, it seemed smarter than us. Students were using it for their homework assignments, it was helping people write programs, and one of my friends even rewrote one of my blogs by feeding it the first paragraph. Beyond that, it could generate images and videos, so everyone was imagining what else it could do. There were lawsuits and strikes, fears of job loss and mass redundancies.

But the shine is starting to wear off. Educators and workers are raising alarms about it less and less. After siphoning funding from all other tech startups, investment in AI is also starting to dry up. To top it off, there was recently a poorly planned Willy Wonka Experience in Glasgow whose shortcomings were blamed on AI. Of course, the failures of one con artist aren’t representative of an entire technology, but they are certainly an example of its current limits. Like any other algorithm, generative AI is a tool to enable human productivity. It’s one of the most powerful tools we’ve seen in a while and will undoubtedly require some portion of the workforce to retrain, but so has every other historical technology.

AutoCAD didn’t eliminate the field of technical drawing, but made the drafting process more efficient and less painstaking. Calculators didn’t eliminate basic arithmetic from elementary math curriculums, but shifted the focus from arbitrary calculations and rote memorization to real math. Sure, entire industries disappeared after the invention of various physical technologies, but not overnight and not entirely: scribes, candlemakers, and horse-drawn carriages took several generations to be replaced, and still exist to this day. Plato believed that all knowledge was recollection, and thus that those with the greatest intelligence were those with the best memories.

With the advent of generative AI, entry-level jobs will eventually trickle out of knowledge and creative industries. Its current use cases are largely research and ideation, both tasks traditionally punted to analysts. But as far as I can see, that’s the extent of it. It’s fitting that there are reports of Apple poaching AI researchers: generative AI is like a better Siri, the realization of Tony Stark’s J.A.R.V.I.S. Unfortunately, that’s easily mistaken for Kubrick’s HAL, which leads to confusion and fears about its abilities and intentions. Like students armed with calculators approaching a secondary school math test, generative AI will simply improve productivity.

The question is who captures this productivity yield. Historically, nothing has been evenly distributed: the kings and feudal lords reaped the labours of the peasants, the robber barons those of the Industrial Revolution, and the tech monopolists may stand to gain everything from these AI platforms, licensing them for personal and professional use and allowing corporations to profit at the worker’s expense. However, technological improvements have raised the living standards of even the workers: I’d rather be me today than Rockefeller or JP Morgan of a hundred years ago. I have tasted, seen, heard, and done things their puny twentieth-century brains wouldn’t believe.

As with all new technologies, I predict generative AI, and AI in general, will not cause the bottom to fall out of industries, but will instead constitute a new bottom, causing the middle to disappear. Just as Sears, J. Crew, and Zellers have struggled to find a spot among low-cost competitors with increasingly high-quality output (Amazon, Uniqlo, and Zara), lower- and middle-end service industry jobs will be automated, but the personal touch is still required for anything that justifies the price tag. Likewise, SEO copywriters and illustrators will be out of a job, but Super Bowl ads and vanity blogs still require human creativity. There will be a small economic blip as the fear of jobs in certain fields disappearing reduces the number of people training in those fields, thereby granting more job security to those already in them.

But for those who fear that artificial general intelligence is taking over our lives, I say there is no cause for concern. The Turing test is no longer a gold standard for AI ability, having been comfortably passed years ago. Though it sounds like a human and talks like a human, it’s not a human. It’s also not an artificial general intelligence. The Chinese room, a thought experiment developed in response to the Turing test, supposes a room with a person who doesn’t understand Chinese. However, they have a large set of instructions that tell them how to respond in Chinese when they receive a prompt in Chinese through the window. This is what generative AI is: something that passes the Turing test but has no idea what it’s doing, only running statistical analyses on what the next word should be.
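To make the “statistical analyses on what the next word should be” concrete, here is a deliberately tiny sketch, not any real model: it counts which word follows which in a toy corpus and then “predicts” by picking the most frequent follower. The corpus and function names are invented for illustration; real systems use neural networks over vastly more data, but the Chinese-room point stands: it is lookup and statistics, not understanding.

```python
from collections import Counter, defaultdict

# Invented toy corpus for illustration only.
corpus = "the future is now the future is now the machine is learning".split()

# Bigram frequency table: for each word, count the words that follow it.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def next_word(word):
    """Return the statistically most likely follower of `word`, or None."""
    followers = bigrams[word]
    return followers.most_common(1)[0][0] if followers else None

print(next_word("the"))  # "future" — it followed "the" most often
print(next_word("is"))   # "now"
```

The “room” here never knows what “future” means; it only knows the word appeared after “the” more often than any alternative.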

Will generative AI be able to create something truly novel if it can only regurgitate? News recently broke of a generative AI company focused on writing code, and everyone’s minds immediately turned to The Matrix and The Terminator. I have no doubt it will be of great assistance to software engineers, but in the same way that ChatGPT can only look up existing repositories and suggest sections of code, software built on statistical models will not create artificial general intelligence; it will only turn requests in natural language into code. Software engineers will still exist: they might not need to know programming languages, but they will still need to solve problems using limited resources.

Don’t get me wrong: I’m only emphasizing that generative AI isn’t the artificial general intelligence everyone is making it out to be. I have no doubt that at some point, with efficient enough algorithms or enough computing power, humanity will have the artificial general intelligence of movies and books. We will live happily in a post-scarcity universe, cruising through galaxies millions of light years away. But despite my disenchantment with generative AI, until we have Samantha from Her, I’m happy that we have Iron Man’s J.A.R.V.I.S. The future is now.
