The Coffee Industry Will Regret Embracing Generative AI

Coffee professionals and brands are increasingly using generative AI for product development, marketing, and more. But should an industry that prides itself on authenticity and sustainability really be embracing such a destructive tool?

Close up of a hand holding a cup of coffee on a table, seen from above. The coffee cup has Matrix raining code inside.
Composite: Cup of Couple via Pexels and Jahobr, CC0, via Wikimedia Commons

Coffee companies love to focus on authenticity and craftsmanship, often marketing their approach as “artisanal” and imbued with generational wisdom.

An Italian roaster sells itself as “family-owned for three generations”; many others promote their “hand-roasted, small-batch coffee”. Even the biggest brands reach for similar rhetoric: “The Nespresso story is steeped in Italian heritage and culture”, the company’s website brags.

Sustainability is another key principle of the contemporary coffee industry. While some brands’ attempts to push their eco credentials can veer into coffeewashing, the industry as a whole portrays itself as environmentally and socially responsible—just see the “sustainability” page on pretty much every company’s website. Which makes coffee’s embrace of generative AI all the more concerning.

Over the past few years, as the tech giants have forced generative AI on the world at large, the coffee industry has also begun to implement it. Roasters are using it to design blends; cafes are using it to create HR documentation; multinationals are using it in their R&D processes and to “support” baristas; and social media has become a morass of generative AI slop.

Technology fads tend to wax and wane. Coffee was big into the blockchain a few years ago, and Starbucks made an ill-fated foray into NFTs and the metaverse in 2022, something it quietly phased out last year. But generative AI feels more permanent, not least because it is now entwined with so much of the global economy.

And yet, the coffee industry doesn’t have to adopt it. Sure, generative AI might save companies money, but is it worth it? The downsides of the technology for businesses have been well-documented: Generative AI chatbots are known to regularly get things wrong, which can lead to trouble for the companies that use them. 

Meanwhile, the resource-thirsty data centres that power each prompt are already having profoundly negative environmental consequences, and setting back companies’ sustainability pledges. And for all the hype, several years into the AI boom, 95% of businesses are still seeing no return on their sizeable investments.

That’s without even considering the programs’ vast copyright infringement, or their impact on the learning capabilities and critical thinking skills of the people who use them frequently.

And of course, the technology can expose users to grievous harms: Just this week, the New York Times reported on a teenager who turned to OpenAI’s ChatGPT to deal with his suicidal feelings. The chatbot encouraged him to end his life. (Be warned: the Times article is extremely distressing.) His parents are suing OpenAI, the latest lawsuit against generative AI companies from parents who believe them to be responsible for their children’s deaths.

Considering the flaws and dangers of generative AI—many of which explicitly contradict the coffee industry’s values—saving some money on marketing copy or social media images seems like a pretty poor justification for its continued use.

If you value independent coverage of the coffee industry, please consider becoming a paid subscriber to The Pourover:

Upgrade here!

Reactive vs. Proactive

It’s important to separate generative AI from artificial intelligence more broadly. The field of artificial intelligence research has been around for decades, and the concept itself goes back thousands of years. The works of Homer, for example, feature automatic bellows used by the god of smithing and ships that respond to their captain’s thoughts.

The study of artificial intelligence began in earnest in the 1950s, with scientists like Alan Turing exploring the capabilities of computers to “think” like humans. As a paper on the history of AI from Washington University puts it, “the main advances over the past sixty years have been advances in search algorithms, machine learning algorithms, and integrating statistical analysis into understanding the world at large”.

In the 21st century, big data and advances in computing power led to increased AI capabilities. Deep learning, what IBM describes as “a subset of machine learning that uses multilayered neural networks [...] to simulate the complex decision-making power of the human brain”, supercharged things still further, and a gold rush was born.

Today, artificial intelligence is all around us. It can be incredibly useful in fields such as medicine, assisting with cancer diagnoses and data analysis (although there are ethical concerns around privacy and racial bias in how these tools are utilised).

But there is a difference between machine learning or traditional artificial intelligence and generative AI based on large language models (LLMs). “Traditional AI is reactive—focused on processing and analyzing data to provide predictions or insights”, as an article from the Massachusetts Institute of Technology describes it. “In contrast, generative AI is proactive—capable of creating something new using learned data patterns”.

Traditional AI has been used in the coffee industry for years. Cropster, the coffee software company, utilises artificial intelligence for data analysis as well as in its Roasting Intelligence roast profiling software. Machine learning has also been used in coffee grading as well as for disease identification: A 2020 article in the Tea & Coffee Trade Journal describes how Brazilian computer scientists were training a robot that “would use image processing and machine learning to identify which leaves are contaminated with coffee leaf rust, and in turn, isolate which plants need to be treated”.

These examples are genuinely good, proactive uses of technology. However, when it comes to LLMs and generative AI, the coffee industry is following other sectors by mindlessly chasing trends—and shoehorning it into operations where it isn’t needed.

If you know someone else who might enjoy this article on coffee and generative AI, why not share it with them via email?

Share The Pourover!

Fancy Coffee Autocomplete

In May, Don Howat, vice president and global category lead for Nescafé, gave an interview to Global Coffee Report in which he enthused about the company’s use of AI for product development.

Howat said that Nescafé uses AI to “scour social media and other sources to identify global trends”, as well as to “write concepts. Previously, we would all stand around a flip chart to try and craft the wording”, he said. “Now, we simply need to input the key words we want to include and then refine the results. It’s much faster and more effective”.

While Howat stressed the importance of having people around to input the prompts, he gushed over the ability of AI to speed up the product development process. “It used to take a relatively long time to launch a new product, while now an idea can be turned into a product in a matter of months”, he said.

There’s no indication yet that Nestlé has cut R&D staff in favour of a chatbot, but that is definitely one of the consequences of the recent generative AI boom—or, if you listen to executives, one of the benefits. According to a July article by Chip Cutter in the Wall Street Journal, CEOs are boasting about layoffs resulting from AI advancements. “Gone are the days when trimming head count signaled retrenchment or trouble”, Cutter writes. “Bosses are showing off to Wall Street that they are embracing artificial intelligence and serious about becoming lean”.

It’s not just coffee multinationals that are using AI to replace the human aspect of coffee. In 2024, a roastery in Finland announced that it had designed a blend using generative AI. As Jari Tanner reported for the Associated Press, Kaffa Roastery worked with Elev, a local AI consultancy, on “a trial in which it’s hoped that technology can ease the workload in a sector that traditionally prides itself on manual work”.

“Leveraging models akin to ChatGPT and Copilot, the AI was tasked with crafting a blend that would ideally suit coffee enthusiasts’ tastes, pushing the boundaries of conventional flavor combinations”, according to Elev.

Creating a good blend is a tricky and nuanced task usually undertaken by skilled and experienced coffee professionals. Punching a few prompts into a computer is not the same thing. And anyway, “coming up with coffee blends” is hardly a significant pain point—has the coffee industry really experienced stalled progress or drops in revenue because of a backlog in coffee blend developments? Why is this a problem that a resource-intensive piece of software needs to solve instead of the humans who are already trained and capable of doing so?

Moreover, an LLM or generative AI programme can’t actually “create” anything—it doesn’t “think”. Instead, as Emily Bender, the director of the University of Washington’s Computational Linguistics Laboratory, and colleagues put it in a 2021 paper, it is a “stochastic parrot”, “haphazardly stitching together sequences of linguistic forms it has observed in its vast training data, according to probabilistic information about how they combine, but without any reference to meaning”. Others have called these models “fancy autocomplete” or “plagiarism machines”.

Ah yes, the plagiarism. In order to train their LLMs, companies like OpenAI and Anthropic (which makes its own chatbot, Claude) scrape the internet for data that the programmes can “learn” from. They do this pretty much indiscriminately, which is why they have faced several lawsuits from writers and artists—and even Disney!—over copyright claims.

Also, LLMs get things wrong—a lot. Often referred to as “hallucinations”, chatbots like ChatGPT and Google’s Gemini will regularly, and confidently, make things up. One study from Stanford found that “hallucination rates range from 69% to 88% in response to specific legal queries for state-of-the-art language models”, a range the authors called “pervasive and disturbing”. Google’s AI search overview has told people to eat rocks, pour glue on pizza, and given other “odd, inaccurate or unhelpful” tips, as the company put it. (Here’s a way to bypass the AI search overview, if you’re so inclined.)

Widespread, Worrisome, Dangerous

But the real problem with generative AI is that it can be truly dangerous, for both people and the planet.

There have been several examples of ChatGPT causing psychosis, driven by the model’s sycophancy and tendency to reinforce users’ beliefs. Some users have been jailed or involuntarily committed as a result; one man had a mental breakdown and charged at police with a knife, and officers shot him dead.

Google’s Gemini chatbot told a college student “You are a stain on the universe. Please die. Please”. LLMs have been linked to several suicides, and just this week a Wall Street Journal article reported on a case in which ChatGPT “fueled a 56-year-old tech industry veteran’s paranoia”, leading him to kill his mother and himself.

These incidents are, so far, outliers. But the more widespread impacts, while less extreme, are still worrisome. Using these products makes people lonelier, and can negatively affect students’ brains. Meanwhile, there is an entire subreddit of people who post about their “relationships” with chatbots, and Elon Musk’s Grok chatbot, which has 20 million monthly users, has gone full Nazi.

Then there are the environmental impacts. Because of their vast computing needs—one ChatGPT prompt requires 10 times the energy of a traditional Google search, and that’s not even counting the cost of training the models—the data centres that power generative AI tools consume vast quantities of electricity and water.

Data centres currently consume 2% of all electricity generated globally; in Ireland, it’s 20%. That global number is predicted to double by the end of the decade. While some of these enormous projects use renewable energy, many in the U.S. are powered by coal and gas.

One such facility in Memphis, owned by Musk’s xAI, operates 33 methane-powered gas turbines to run its supercomputer. As Ren Brabenec reports for Tennessee Lookout, the facility “is located in a poor, predominantly Black Memphis community with historically high rates of pollution-related illness and disproportionate rates of industrial pollutants”. The turbines—Musk’s company only has permits for 15 of the 33—“likely make xAI the largest industrial source of smog-forming pollutant in Memphis”, according to the Southern Environmental Law Center.

AI data centres’ water use is also an issue, especially at a time when clean drinking water is becoming ever more scarce. The hardware that drives generative AI requires millions of gallons of clean water to keep it cool. According to one estimate, by 2027 global AI infrastructure will require six times as much water as the entire country of Denmark.

These factors make it particularly hard to square the coffee industry’s increasing embrace of generative AI with its supposed commitment to sustainability. Nescafé, for example, boasts about how it is reducing its greenhouse gas emissions, and says that it is “committed to reducing water consumption”. The Finnish roaster, Kaffa, says on its website that it is committed to supporting the United Nations Sustainable Development Goals, one of which is “climate action”.

There is an obvious contradiction here, but much like the hypocrisy of “sustainable” airline coffee, it isn’t being acknowledged by those involved. Do they not know about the negative aspects of generative AI—or do they just not care?

You can also support my independent coffee coverage with a one-time tip:

Send The Pourover a tip here!

‘Are We Really Doing This? Who Thought This Was a Good Idea?’

I’ve been thinking about generative AI, and how it relates to coffee, for a long time. I’ve watched its use grow among colleagues and others in the industry, and am coming across ever more examples on social media, in coffee publications, and even out in the real world.

I saw a coffee consultant recommend that her followers use ChatGPT to simplify the (admittedly very complex) coffee commodity market, despite a 2024 Purdue University study finding that it gives incorrect information 52% of the time. I’m seeing more and more uncanny, unreal images that make me feel queasy, and stumbling across bland blogs and corny, bullet-point-heavy, ChatGPT-“written” LinkedIn posts.

Increasingly, coffee professionals and brands are using chatbots and image generators like it’s the most natural thing in the world. I started to think that maybe I was the outlier after all. But then I read this great piece by Charlie Warzel in The Atlantic called “AI Is a Mass-Delusion Event”, and it clarified a lot of things for me.

Warzel describes the strangeness of watching an interview that former CNN anchor Jim Acosta conducted with an AI-generated persona of Joaquin Oliver, a teenager who was killed during the mass shooting at Marjory Stoneman Douglas High School in Parkland, Florida, in 2018.

“The interview triggered a feeling that has become exceedingly familiar over the past three years”, Warzel writes. “It is the sinking feeling of a societal race toward a future that feels bloodless, hastily conceived, and shruggingly accepted. Are we really doing this? Who thought this was a good idea?”

Coffee is, at its core, about connection. It’s a beverage, sure, but it’s also a way to express kindness and hospitality and to engage positively with the world. Making a really good cup of coffee takes dedication and creativity and skill. Sure, you can get a latte made by a robot arm, and it might be novel in the moment, but it will always lack something. A barista offers that something—human connection, and the dedication to a worthy craft.

This is especially true in specialty coffee, which prides itself on its commitment to ideals like authenticity and sustainability. If we cede the parts of the coffee process that involve expertise, dedication, and imagination—if we let plagiarism machines create blends and come up with products—then we might as well give up and allow the few multinationals that already dominate the industry to win outright.

Ultimately, none of us have to use generative AI. It was only a few years ago that we all did without it quite happily. Much as the tech giants want us to feel incapable of doing things ourselves—to sweep us along on an irrevocable tide of “progress” that just happens to massively enrich them—it’s worth remembering that we already have everything we need to create meaningful coffee experiences, and to address the industry’s challenges.

In the end, it’s worth asking whether a technology that explicitly opposes coffee’s values has anything to offer the industry besides harm and extraction.

Thanks for reading! If you'd like to get even more of The Pourover, become a paid subscriber. Bonus articles, first access to new interviews, and more:

Upgrade here! ✨
