tutoteket.no is one of the many independent Mastodon servers you can use to participate in the fediverse.
Tutoteket is a small server with little room, but we have reading material and good drinks, so we manage.


#aihype

Replied in thread

“Eventually, push will come to shove and the tech industry will have to prove that the world is willing to put down real money to use all these [AI] tools they are building. Right now, the use cases…are marginal.”
—Thomas Maxwell, Microsoft's Satya Nadella Pumps the Brakes on AI Hype
#ai #aihype

Replied in thread

“…language models can fundamentally be described as supercharged autocomplete tools, prone to returning incorrect information because they are skilled at creating a facsimile of a human-written sentence—something that looks like an acceptable response—but chatbots are not doing any critical “thinking.””
—Thomas Maxwell, Microsoft's Satya Nadella Pumps the Brakes on AI Hype
#llm #llms #hallucinations #ai #aihype

Replied in thread

“Klarna…garnered a lot of attention and investor excitement after its CEO said last year it would replace many of its employees with AI… The CEO recently walked that claim back… Klarna had replaced a basic phone tree system with AI, which…may have resulted in customers quitting chats out of frustration.”
—Thomas Maxwell, Microsoft's Satya Nadella Pumps the Brakes on AI Hype
#klarna #aichatbots #ai #aihype

Replied in thread

“…eventually there needs to be actual demand on the other side for the products being built or else these companies will crash and burn.”
—Thomas Maxwell, Microsoft's Satya Nadella Pumps the Brakes on AI Hype
#ai #aihype

Replied in thread

“Microsoft is the primary backer of OpenAI, whose CEO Sam Altman has long stoked nebulous fears about AI taking over the world. Critics often say Altman’s fear mongering is primarily intended to place himself at the centers of power…”
—Thomas Maxwell, Microsoft's Satya Nadella Pumps the Brakes on AI Hype
#microsoft #openai #altman #ai #aihype

Continued thread

“…stop spreading hype about a so-called “artificial general intelligence” that could replace humans in most tasks. Nadella said essentially that it will not happen, and either way is an unnecessary distraction when the industry needs to get practical…”
—Thomas Maxwell, Microsoft's Satya Nadella Pumps the Brakes on AI Hype
#ai #agi #aihype

"If Silicon Valley is going to stave off another devastating AI winter — let alone usher us into an AI utopia — then it's going to need more than just the brute power of compute, data, and cash. It'll also need to exploit the power of belief.

We have plenty of recent examples that show even investing billions of dollars into a speculative tech may not be enough to guarantee the realization of a dream if people lose interest and stop feeding it with their attention. Metaverse? A distant memory. Web3? Sorry, wrong number. Google Glass? Never heard of it.

AI depends on vital support from people hard at work in the futurism factory. These are the executives, consultants, journalists, and other thought leaders whose job is the selling of things to come. They craft visions of a specific future — such as ones where AI models built by companies like OpenAI or Microsoft are undeniable forces of progress — and they build expectations in the public about the inevitable capabilities and irresistible outcomes of these tech products.

By flooding the zone with an endless stream of new partnerships, new products, new promises, the tech industry makes us feel disoriented and overwhelmed by a future rushing at us faster than we can handle. The desire to not be left behind — or taken advantage of — is a powerful motivator that keeps us engaged in the AI sales pitch. The breathless hype surrounding AI is more than just a side-effect of over-eager entrepreneurs; it’s a load-bearing column for the tech sector. If people believe hard enough in the future manufactured by Silicon Valley, then they start acting like it already exists before it happens. Thus the impacts of technologies like AI become a self-fulfilling prophecy."

futurism.com/ai-tinkerbell

Futurism · AI Is Like Tinkerbell: It Only Works If We Keep Clapping So It Doesn't Die · By Jathan Sadowski

Okay, one more time for the people in the back.

The "AI" (🤮) craze of the past few years is all about Large Language Models. This immediately tells us that the only thing these systems "know" is trends/patterns in the ways that people write, to the extent that those patterns are expressed in the text that was used to train the model. Even the common term, "hallucination," gives these things far too much credit: a hallucination is a departure from reality, but an LLM has no concept of reality to depart from!

An LLM does exactly one thing: you give it a chunk of text, and it predicts which word will come next after the end of the chunk. That's it. An LLM-powered chatbot will then stick that word onto the end of the chunk and feed the resulting, slightly longer chunk back into the model to predict the next word, and then do it again for the next, etc. Such a chatbot's output is unreliable by design, because there are many linguistically valid continuations to any chunk of text, and the model usually reflects that by having an output that means, "There is a 63% chance that the next word is X, a 14% chance that it's Y, etc." The text produced by these chatbots is often not even correlated with factual correctness, because the models are trained on works of fiction and non-fiction alike.

For example, when you ask a chatbot what 2 + 2 is, it will usually say it's 4, but not because the model knows anything about math. It's because when people write about asking that question, the text that they write next is usually a statement that the answer is 4. But if the model's training data includes Orwell's Nineteen Eighty-Four (or certain texts that discuss the book or its ideas), then the chatbot will very rarely say that the answer is 5 instead, because convincing people that that is the answer is a plot point in the book.
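The predict-append-repeat loop described above can be sketched in a few lines of Python. To be clear about assumptions: the `NEXT_WORD` table and its probabilities are invented stand-ins for illustration; a real LLM computes the distribution with a neural network over a huge vocabulary. Only the table is fake here; the loop is the real shape of the algorithm.

```python
import random

# Invented toy "model": maps recent context to a next-word probability
# distribution. A real LLM computes these probabilities with a neural
# network; this lookup table is a hypothetical stand-in.
NEXT_WORD = {
    ("2", "+", "2", "="): {"4": 0.97, "5": 0.03},  # the "5" leaks in from Orwell
}

def generate(words, n_words, rng):
    """Predict the next word, append it, feed the longer chunk back in."""
    for _ in range(n_words):
        dist = NEXT_WORD.get(tuple(words[-4:]), {"…": 1.0})
        choices, weights = zip(*dist.items())
        # Sample in proportion to probability -- this is the source of the
        # "unreliable by design" behavior described above.
        words.append(rng.choices(choices, weights=weights)[0])
    return words

# Run it many times: the chatbot "usually" says 4, but occasionally 5.
counts = {"4": 0, "5": 0}
for seed in range(1000):
    counts[generate(["2", "+", "2", "="], 1, random.Random(seed))[-1]] += 1
```

Note that nothing in the loop checks the answer against arithmetic; the output distribution simply mirrors whatever the training text contained.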

If you're still having trouble, you can think of it this way: when you ask one of these chatbots a question, it does not give you the answer; it gives you an example of what—linguistically speaking—an answer might look like. Or, to put it even more succinctly: these things are not the Star Trek ship's computer; they are very impressive autocomplete.

So LLMs are fundamentally a poor fit for any task that is some form of "producing factually correct information." But if you really wanted to try to force it, and damn the torpedoes, then I'd say you basically have two options. I'll tell you what they are in a reply. 🧵

👀 Study Finds Consumers Are Actively Turned Off by Products That Use AI
@Futurism

"When AI is mentioned, it tends to lower emotional trust, which in turn decreases purchase intentions," said lead author and Washington State University clinical assistant professor of marketing Mesut Cicek in a statement. "We found emotional trust plays a critical role in how consumers perceive AI-powered products."

futurism.com/the-byte/study-co

Futurism · Study Finds Consumers Are Actively Turned Off by Products That Use AI · By Victor Tangermann

I’ve been a #DuckDuckGo user for years, but imo their search results have been slowly deteriorating, with the top results having been mostly ads for a while. With the introduction of «AI summaries» I’ve had it, and I’m looking for alternatives. What are your recommendations? What do other #vivaldi users use for search? Besides good and relevant search results, the wish list includes:
- strong #Privacy orientation/stance
- no «AI»
- no ads
- not dependent on google/bing index
- EU/Europe hosted

#SearchEngine #AIHype @Vivaldi

Great story from OPB:

"AI slop is already invading Oregon’s local journalism"

opb.org/article/2024/12/09/art

"Silicon Valley’s latest hot technology is being used to further degrade the news available to Oregonians"

I'm already an OPB member but I might send them another donation in support!

OPB · AI slop is already invading Oregon's local journalism · By Ryan Haas
WIRED has a fairly terrible AI booster story today:

Yes, That Viral LinkedIn Post You Read Was Probably AI-Generated
A new analysis estimates that over half of longer English-language posts on LinkedIn are AI-generated, indicating the platform’s embrace of AI tools has been a success.

I'm not going to link it. It should be easy enough to find if you must.

"indicating the platform's embrace of AI tools has largely polluted the information ecosystem there" is how it ought to read. "Success" is a wholly inappropriate word to use.

#AI #GenAI #GenerativeAI #WIRED #LinkedIn #AIHype