
I have to admit, I do find it a bit amusing to watch someone climb aboard a hype train at the exact moment that said train is hurtling off a cliff. It does spoil the fun, though, when that someone is an apex organization for the national cooperative movement. Unfortunately, that's what it felt like last week when I happened upon a video of NCBA CLUSA's most recent Co-op Circle Happy Hour. The topic for the event was "Cooperatives on AI, Data, and Democracy"; and the first 10 minutes consisted of a presentation by NCBA's Senior Director of Information Technology about ways that co-ops can use "AI."
I put "AI" in scare-quotes because that term covers a broad range of tools that have very little, if anything, in common with each other. Most often, when people refer to AI, what they mean is a Large Language Model (LLM) chatbot - which is a computer application for producing statistically probable text in response to user input. Algorithms for selecting chess moves are also called "AI" in the common vernacular, though these types of machine learning applications have essentially nothing in common with LLMs.The NCBA event was about the first type of AI — LLMs and applications built on top of them — which for ease of discission I will hereafter refer to as Generative AI (or just Gen AI).
So as not to bury the lede too much, I'll just put it bluntly: the current Gen AI craze is entirely hype, and is being driven not by some world-changing technological breakthrough, but by the perverse incentives of corporate executives and venture capitalists. While many of us have been pointing this out since the beginning of the fad¹, now even advocates are being forced to admit that, despite all the breathless news stories and cocksure proclamations, there is, in fact, no there there when it comes to Gen AI.
To wit, a recent report from the group NANDA, which surveyed 300 Gen AI initiatives and 156 executives, found that 95% of Generative AI projects produce zero return for the companies adopting them. NANDA is an MIT project to build "AI agents," and as such they have a vested interest in making the technology look as good as possible, which makes their report something that lawyers refer to as "an admission against interest." In fact, the paper is a litany of evidence damning to Gen AI boosterism. Here's how they begin their Executive Summary:
Despite $30–40 billion in enterprise investment into GenAI, this report uncovers a surprising result in that 95% of organizations are getting zero return...Just 5% of integrated AI pilots are extracting millions in value, while the vast majority remain stuck with no measurable P&L impact.
I shouldn't really have to say anything else about why co-ops should not, under any circumstances, be adopting Gen AI anything: tens of billions of dollars have been spent by traditional businesses on Gen AI in order to extract mere millions in revenue; i.e., Gen AI has been a massive money loser for the companies adopting it. Just like crypto and DAOs before it, Gen AI is a solution in search of a problem. Unfortunately, there aren't actually that many problems that can be solved by a computer application that generates statistically probable text in response to user input, which, again, is all that an LLM actually is.
The NANDA report authors spend many pages delving into the reasons for such poor outcomes from this supposedly game-changing tech, most of which boil down to some combination of "it doesn't work well," "it's insecure," and "it's too inflexible to be useful." Again, not at all surprising if we keep in mind what "Gen AI" is in reality. Here are a few quotes from executives that get to the meat of the problems:
"It's useful the first week, but then it just repeats the same mistakes. Why would I use that?"
"I can't risk client data mixing with someone else's model, even if the vendor says it's fine."
"Our process evolves every quarter. If the AI can't adapt, we're back to spreadsheets."
These glaring and omnipresent problems with Gen AI are probably the reason why, during the NCBA presentation, there was a notable lack of any concrete examples of Gen AI actually being used. NCBA apparently "looked at" Grantable, a Gen AI grant-writing tool, but there was no mention of adoption. There were a few vague encouragements for co-ops to "mess around" with the free LLM chatbots, but nothing really actionable was recommended. I don't know of any other piece of software that anybody would take seriously if you, the customer, were expected to figure out how to make it useful.
And I don't know what other piece of software would be taken seriously if its developer said publicly that it shouldn't be used for anything important, which is exactly what Microsoft did when they recently launched their newest Gen AI dumpster fire, the COPILOT function in Excel. As reported by PC Gamer,
The new "COPILOT" function allows you to skip writing Excel formulas yourself by telling Copilot what you want to do and the cells you want to use...However, Microsoft specifically warns not to use it for "any task requiring accuracy or reproducibility," like numerical calculations. Microsoft also advises against using the feature for "financial reporting, legal documents, or other high-stakes scenarios,"...
Got that? One of the biggest tech companies in the world has put their considerable resources into building a Gen AI tool, and it can only be trusted with tasks that don't require accuracy or doing math.
There were similar warnings from the hosts of the NCBA webinar, specifically that any documentation fed into an LLM should be thoroughly anonymized to protect confidentiality, and that anything output by Gen AI requires thorough vetting by a human (with actual intelligence) to ensure accuracy — a combination of requirements that I would think greatly reduces any value the statistically plausible text generators might provide.
To his credit, NCBA's IT director does point out that "AI" is mostly just a buzzword used to add perceived value to tech products, and that in many instances there is no reason to use any kind of Gen AI, as current methods work just fine. Unfortunately, that frankness only came out after push-back from a couple of the attendees; the overall tone of the event was "mess around with Gen AI, maybe it'll be useful!" But we have abundant evidence at this point in the life cycle of this no-longer-new technology to say definitively that there is no real use for it — which is good because, as one of the AI critics on the call pointed out, the environmental footprint of Gen AI is massive, and certainly isn't justified by the unreliable output it generates.
This post has gone on long enough, and if you haven't gotten the point by now, another 10,000 words probably isn't going to make any difference. However, if you want to dive deeper and are interested in understanding the reality (as compared to the hype) of the Gen AI fad, I highly suggest following Ed Zitron's blog, Where's Your Ed At?, and David Gerard's Pivot to AI blog. The good news is that it looks like the AI bubble may be deflating, and the majority of the data centers that we've been told will be needed to facilitate our sci-fi AI future will never actually be built. That's good for the planet, and good for us. Now just do your best to keep your co-op away from Gen AI until this fad dies a much-needed death.
¹ Anyone familiar with the ELIZA effect saw right through it immediately.