I haven't even used "foundation models" (still refuse) and now that seems out of fashion and we're at "frontier models."
The lack of uncritical uptake of all these terms pushing #AIHype is incredible.
“…language models can fundamentally be described as supercharged autocomplete tools, prone to returning incorrect information because they are skilled at creating a facsimile of a human-written sentence—something that looks like an acceptable response—but chatbots are not doing any critical ‘thinking.’”
—Thomas Maxwell, Microsoft's Satya Nadella Pumps the Brakes on AI Hype
#llm #llms #hallucinations #ai #aihype
“Klarna…garnered a lot of attention and investor excitement after its CEO said last year it would replace many of its employees with AI… The CEO recently walked that claim back… Klarna had replaced a basic phone tree system with AI, which…may have resulted in customers quitting chats out of frustration.”
—Thomas Maxwell, Microsoft's Satya Nadella Pumps the Brakes on AI Hype
#klarna #aichatbots #ai #aihype
“Microsoft is the primary backer of OpenAI, whose CEO Sam Altman has long stoked nebulous fears about AI taking over the world. Critics often say Altman’s fear mongering is primarily intended to place himself at the centers of power…”
—Thomas Maxwell, Microsoft's Satya Nadella Pumps the Brakes on AI Hype
#microsoft #openai #altman #ai #aihype
“…stop spreading hype about a so-called “artificial general intelligence” that could replace humans in most tasks. Nadella said essentially that it will not happen, and either way is an unnecessary distraction when the industry needs to get practical…”
—Thomas Maxwell, Microsoft's Satya Nadella Pumps the Brakes on AI Hype
#ai #agi #aihype
“The world has yet to turn any of today’s AI hype and spending into a meaningful lift in the actual economy…”
—Thomas Maxwell (20250220)
https://gizmodo.com/microsofts-satya-nadella-pumps-the-breaks-on-ai-hype-2000566483
#ai #aihype
Watching this *AFTER* Humane shut down gives new meaning to the phrase "we told you so." https://www.youtube.com/watch?v=TitZV6k8zfA #AI #AIHype #Humane
"If Silicon Valley is going to stave off another devastating AI winter — let alone usher us into an AI utopia — then it's going to need more than just the brute power of compute, data, and cash. It'll also need to exploit the power of belief.
We have plenty of recent examples that show even investing billions of dollars into a speculative tech may not be enough to guarantee the realization of a dream if people lose interest and stop feeding it with their attention. Metaverse? A distant memory. Web3? Sorry, wrong number. Google Glass? Never heard of it.
AI depends on vital support from people hard at work in the futurism factory. These are the executives, consultants, journalists, and other thought leaders whose job is the selling of things to come. They craft visions of a specific future — such as ones where AI models built by companies like OpenAI or Microsoft are undeniable forces of progress — and they build expectations in the public about the inevitable capabilities and irresistible outcomes of these tech products.
By flooding the zone with an endless stream of new partnerships, new products, new promises, the tech industry makes us feel disoriented and overwhelmed by a future rushing at us faster than we can handle. The desire to not be left behind — or taken advantage of — is a powerful motivator that keeps us engaged in the AI sales pitch. The breathless hype surrounding AI is more than just a side-effect of over-eager entrepreneurs; it’s a load-bearing column for the tech sector. If people believe hard enough in the future manufactured by Silicon Valley, then they start acting like it already exists before it happens. Thus the impacts of technologies like AI become a self-fulfilling prophecy."
Okay, one more time for the people in the back.
The "AI" craze of the past few years is all about Large Language Models. This immediately tells us that the only thing these systems "know" is trends/patterns in the ways that people write, to the extent that those patterns are expressed in the text that was used to train the model. Even the common term, "hallucination," gives these things far too much credit: a hallucination is a departure from reality, but an LLM has no concept of reality to depart from!
An LLM does exactly one thing: you give it a chunk of text, and it predicts which word will come next after the end of the chunk. That's it. An LLM-powered chatbot will then stick that word onto the end of the chunk and feed the resulting, slightly longer chunk back into the model to predict the next word, and then do it again for the next, etc. Such a chatbot's output is unreliable by design, because there are many linguistically valid continuations to any chunk of text, and the model usually reflects that by having an output that means, "There is a 63% chance that the next word is X, a 14% chance that it's Y, etc." The text produced by these chatbots is often not even correlated with factual correctness, because the models are trained on works of fiction and non-fiction alike.
For example, when you ask a chatbot what 2 + 2 is, it will usually say it's 4, but not because the model knows anything about math. It's because when people write about asking that question, the text that they write next is usually a statement that the answer is 4. But if the model's training data includes Orwell's Nineteen Eighty-Four (or certain texts that discuss the book or its ideas), then the chatbot will very rarely say that the answer is 5 instead, because convincing people that that is the answer is a plot point in the book.
If you're still having trouble, you can think of it this way: when you ask one of these chatbots a question, it does not give you the answer; it gives you an example of what—linguistically speaking—an answer might look like. Or, to put it even more succinctly: these things are not the Star Trek ship's computer; they are very impressive autocomplete.
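The loop described above can be sketched with a toy "model." This is a deliberately simplified illustration: a bigram table built from word counts over a made-up corpus, not a neural network over subword tokens like a real LLM. The corpus, function names, and probabilities here are all invented for the example; only the shape of the process (predict a distribution over next words, pick one, append, repeat) matches what's described above.

```python
# Toy sketch of the autoregressive loop: count which word follows
# which in a tiny made-up corpus, then repeatedly predict the most
# likely next word and append it. Real LLMs use neural networks over
# subword tokens; this only illustrates the loop's shape.
from collections import Counter, defaultdict

corpus = "two plus two is four . two plus two is four . two plus two is five ."

# model[w] counts how often each word follows w in the corpus.
model = defaultdict(Counter)
words = corpus.split()
for prev, nxt in zip(words, words[1:]):
    model[prev][nxt] += 1

def next_word_distribution(word):
    """Return {word: probability} for what follows `word`."""
    counts = model[word]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

def generate(prompt, n_words):
    """Greedy decoding: always append the most probable next word."""
    out = prompt.split()
    for _ in range(n_words):
        dist = next_word_distribution(out[-1])
        if not dist:
            break
        out.append(max(dist, key=dist.get))
    return " ".join(out)

# The model doesn't "know" arithmetic; it only reflects the corpus.
# Because one training sentence says "two plus two is five" (shades
# of Nineteen Eighty-Four), "five" gets nonzero probability.
print(next_word_distribution("is"))   # "four" at 2/3, "five" at 1/3
print(generate("plus two is", 4))
```

Note that the output distribution after "is" assigns real probability mass to the wrong answer purely because the wrong answer appears in the training text, which is exactly the failure mode described above.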
So LLMs are fundamentally a poor fit for any task that boils down to "producing factually correct information." But if you really wanted to force it and damn the torpedoes, then I'd say you basically have two options. I'll tell you what they are in a reply.
Study Finds Consumers Are Actively Turned Off by Products That Use AI
—@Futurism
"When AI is mentioned, it tends to lower emotional trust, which in turn decreases purchase intentions," said lead author and Washington State University clinical assistant professor of marketing Mesut Cicek in a statement. "We found emotional trust plays a critical role in how consumers perceive AI-powered products."
https://futurism.com/the-byte/study-consumers-turned-off-products-ai
I’ve been a #DuckDuckGo user for years, but imo their search results have been slowly deteriorating, with the top results having been mostly ads for a while. With the introduction of «AI summaries» I’ve had it, and I’m looking for alternatives. What are your recommendations? What do other #vivaldi users use for search? Besides good and relevant search results, the wish list includes:
- strong #Privacy orientation/stance
- no «AI»
- no ads
- not dependent on google/bing index
- EU/Europe hosted
@emilymbender Just in case anyone is running into this thread now, this excellent essay about #AIHype has now moved here:
https://ninelives.karawynnlong.com/language-is-a-poor-heuristic-for-intelligence/
Great story from OPB:
"AI slop is already invading Oregon’s local journalism"
https://www.opb.org/article/2024/12/09/artificial-intelligence-local-news-oregon-ashland/
"Silicon Valley’s latest hot technology is being used to further degrade the news available to Oregonians"
I'm already an OPB member but I might send them another donation in support!