
Re: Do AI Models Think? 

By: De_Composed in GRITZ
Mon, 23 Jun 25 9:11 PM
Msg. 10265 of 10360
(This msg. is a reply to 10262 by Fiz)


fizzy:

Re: “Short answer, per the article, "No".”
I am just six chapters into Mustafa Suleyman's book, "The Coming Wave," but I've already learned enough to say that the author of your article is an idiot. He doesn't seem to know ANYTHING about AI. Here are some of the things he's missing:

When a computer beat the world chess champion, it did so with brute force. There's nothing intelligent about that. But when a computer - one of Suleyman's DeepMind machines - beat the world Go champion, it used strategies no human had ever seen. That's because it had been practicing against another DeepMind computer, playing hundreds of millions of games. The two trained themselves in strategies humans hadn't employed, or even conceived of, in thousands of years of play. In its second game against the champion, the DeepMind A.I. made a "mistake" that had some in the audience snickering. Then it destroyed the world champ (for the second time).

The A.I. didn't defeat the old champ by using brute force. It had figured some things out.
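If "trained themselves" sounds mysterious, here's the self-play idea in miniature. This is only a sketch: a toy game (Nim, not Go) and a lookup table standing in for DeepMind's deep networks and tree search - my illustration, not their method.

```python
# Self-play in miniature: two copies of the same learner play Nim
# (take 1-3 stones from 21; whoever takes the last stone wins) and
# update a shared value table from the results of their own games.
import random
from collections import defaultdict

Q = defaultdict(float)      # value of (stones_left, move) for the player to act
EPS, ALPHA = 0.1, 0.5       # exploration rate, learning rate

def choose(stones):
    moves = [m for m in (1, 2, 3) if m <= stones]
    if random.random() < EPS:
        return random.choice(moves)                   # explore
    return max(moves, key=lambda m: Q[(stones, m)])   # exploit

def play_one_game():
    stones, history = 21, []
    while stones > 0:
        m = choose(stones)
        history.append((stones, m))
        stones -= m
    reward = 1.0                             # whoever moved last just won
    for state, move in reversed(history):
        Q[(state, move)] += ALPHA * (reward - Q[(state, move)])
        reward = -reward                     # players alternate, so flip sign

for _ in range(50_000):
    play_one_game()

# With enough games the table tends to rediscover the classic strategy
# on its own: leave your opponent a multiple of 4 (from 21, take 1).
print(max((1, 2, 3), key=lambda m: Q[(21, m)]))
```

No human strategy is fed in; the winning pattern falls out of the games themselves, which is the point Suleyman makes about AlphaGo.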

A second thing your article's author doesn't get is that there are many A.I.s out there, and they have many specialties. An A.I. that is strong on robotic movement is not going to be good at the river crossing puzzle. Claude's strength is text generation, not logic puzzles. But a "general" A.I. is on the way. Give it 3-4 years.

Third, A.I.'s sophistication is growing at great speed thanks to a technique Suleyman's company pioneered. He acknowledges that some say A.I. will hit roadblocks before long - that it can't keep advancing at this rate. He finds that funny, though, because his technique is just the FIRST self-learning technique that caught on. He is certain there will be others as soon as the need arises. Right now, A.I. is advancing so quickly - at thousands of times the rate of Moore's law - that everyone is content.
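That "thousands of times Moore's law" line is easy to sanity-check. Assuming Moore's law doubles roughly every 24 months and frontier training compute has been doubling roughly every 6 months (commonly published estimates - my numbers, not Suleyman's), the gap after a decade looks like this:

```python
# Back-of-the-envelope: compound growth of Moore's law vs. AI training compute.
# Both doubling periods are assumptions, not figures from the post.
MOORE_DOUBLING_MONTHS = 24   # classic Moore's law pace
AI_DOUBLING_MONTHS = 6       # rough published estimate for frontier training runs
YEARS = 10

moore = 2 ** (YEARS * 12 / MOORE_DOUBLING_MONTHS)  # ~32x in a decade
ai = 2 ** (YEARS * 12 / AI_DOUBLING_MONTHS)        # ~1,000,000x in a decade
print(f"Moore: {moore:,.0f}x  AI compute: {ai:,.0f}x  ratio: {ai / moore:,.0f}x")
```

On those assumptions the ratio comes out around 33,000x, so "thousands of times" is, if anything, modest.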

Now I've thought of a fourth one.

A.I. is not the only up-and-coming technology we need to think about. Biotech is just as important and advancing just as fast. The FUSION of A.I. with biotech spawns new technologies, and that's happening already.

Your author claims nothing good comes from A.I. But does he know that prior to 2022, only about 0.1% of known protein structures had been mapped - roughly 190,000 of them? That year, DeepMind uploaded some 200 million predicted structures in one go, covering almost every protein known to science. I'd say that's pretty good.
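The arithmetic checks out, by the way (the two counts are the ones cited above):

```python
# The post's own figures: ~190,000 structures solved before 2022 vs.
# ~200 million released by DeepMind (AlphaFold) that year.
solved_before_2022 = 190_000
alphafold_release = 200_000_000
print(f"{solved_before_2022 / alphafold_release:.3%}")   # 0.095% - the "about 0.1%"
```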

And that's what I mean by the fusion of A.I. with other technologies.

When your author says A.I. models "can't solve problems they haven't been trained on," it makes me scratch my head. A.I. is in its infancy, but it's improving at an exponential rate. We don't know what it WILL do, but we already know it's done a lot and is getting better at a blistering pace.

Does it actually think? No. Not yet. But I'm not as sure as I once was that most people do either. Ultimately, who cares?






- - - - -
The above is a reply to the following message:
Do AI Models Think?
By: Fiz in GRITZ
Mon, 23 Jun 25 7:55 PM
Msg. 10262 of 10360

Short answer, per the article, "No".
What good will come from them, per the article, "None". (I'm presuming he means net of aggregate costs, because, obviously, if an AI makes you rich or saves your life, his answer can be "disputed")

http://www.zerohedge.com/ai/do-ai-models-think

Authored by Thomas Neuburger via the "God's Spies" Substack,

AI can’t solve a problem that hasn’t been previously solved by a human.

- Arnaud Bertrand

A lot can be said about AI, but there are few bottom lines. Consider these my last words on the subject itself. (About its misuse by the national security state, I’ll say more later.)

The Monster AI
AI will bring nothing but harm. As I said earlier, AI is not just a disaster for our political health, though yes, it will be that (look for Cadwalladr's line "building a techno-authoritarian surveillance state"). But AI is also a disaster for the climate. It will hasten the collapse by decades as usage expands.

(See the video below for why AI models are massive energy hogs. See this video to understand "neural networks" themselves.)

Why won’t AI be stopped? Because the race for AI is not really a race for tech. It's a greed-driven race for money, a lot of it. Our lives are already run by those who seek money, especially those who already have too much. They've now found a way to feed themselves even faster: by convincing people to do simple searches with AI, a gas-guzzling death machine.

For both of these reasons — mass surveillance and climate disaster — no good will come from AI. Not one ounce.

An Orphan Robot, Abandoned to Raise Itself
Why does AI persist in making mistakes? I offer one answer below.

AI doesn’t think. It does something else instead. For a full explanation, read on.

Arnaud Bertrand on AI
Arnaud Bertrand has the best explanation of what AI is at its core. It's not a thinking machine, and its output's not thought. It's actually the opposite of thought: it's what you get from a freshman who hasn't studied but has learned a few impressive words and uses them to sound smart. If the student succeeds, you don't call it thought, just a good emulation.

Since Bertrand has put the following text on Twitter, I’ll print it in full. The expanded version is a paid post at his Substack site. Bottom line: He’s exactly right. (In the title below, AGI means Artificial General Intelligence, the next step up from AI.)

Apple just killed the AGI myth
The hidden costs of humanity's most expensive delusion
by Arnaud Bertrand

About 2 months ago I was having an argument on Twitter with someone telling me they were "really disappointed with my take" and that I was "completely wrong" for saying that AI was "just an extremely gifted parrot that repeats what it's been trained on" and that this wasn't remotely intelligence.

Fast forward to today and the argument is now authoritatively settled: I was right, yeah! 🎉

How so? It was settled by none other than Apple, specifically their Machine Learning Research department, in a seminal research paper entitled "The Illusion of Thinking: Understanding the Strengths and Limitations of Reasoning Models via the Lens of Problem Complexity" that you can find here (https://ml-site.cdn-apple.com/papers/the-illusion-of-thinking.pdf).

“Can ‘reasoning’ models reason? Can they solve problems they haven’t been trained on? No.”

What does the paper say? Exactly what I was arguing: AI models, even the most cutting-edge Large Reasoning Models (LRMs), are no more than very gifted parrots with basically no actual reasoning capability.

They’re not “intelligent” in the slightest, at least not if you understand intelligence as involving genuine problem-solving instead of simply parroting what you’ve been told before without comprehending it.

That's exactly what the Apple paper was trying to understand: can "reasoning" models actually reason? Can they solve problems that they haven't been trained on but would normally be easily solvable with their "knowledge"? The answer, it turns out, is an unequivocal "no".

A particularly damning example from the paper was this river crossing puzzle: imagine 3 people and their 3 agents need to cross a river using a small boat that can only carry 2 people at a time. The catch? A person can never be left alone with someone else's agent, and the boat can't cross empty - someone always has to row it back.

This is the kind of logic puzzle you might find in a children's brain-teaser book - figure out the right sequence of trips to get everyone across the river. The solution only requires 11 steps.
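To see how small this puzzle really is, here is a plain breadth-first search over it. This is my illustration, not anything from the Apple paper, and it checks the safety rule on the banks after each crossing (the usual simplification):

```python
# Brute-force solver for the river crossing puzzle described above:
# 3 actors (A1-A3) and their agents (G1-G3), a boat that holds 1 or 2,
# and the rule that an actor may never be with another actor's agent
# unless their own agent is present.
from collections import deque
from itertools import combinations

PEOPLE = ["A1", "A2", "A3", "G1", "G2", "G3"]

def safe(banks):
    """banks maps person -> bank (0 = left, 1 = right)."""
    for i in "123":
        side = banks["A" + i]
        for j in "123":
            if j != i and banks["G" + j] == side and banks["G" + i] != side:
                return False        # actor i is with a rival agent, unprotected
    return True

def solve():
    start = ((0,) * len(PEOPLE), 0)     # everyone (and the boat) on the left bank
    goal = (1,) * len(PEOPLE)
    queue, seen = deque([(start, [])]), {start}
    while queue:
        (banks, boat), path = queue.popleft()
        if banks == goal:
            return path
        here = [p for p, b in zip(PEOPLE, banks) if b == boat]
        for size in (1, 2):             # the boat never crosses empty
            for crew in combinations(here, size):
                new_banks = list(banks)
                for p in crew:
                    new_banks[PEOPLE.index(p)] = 1 - boat
                state = (tuple(new_banks), 1 - boat)
                if state not in seen and safe(dict(zip(PEOPLE, new_banks))):
                    seen.add(state)
                    queue.append((state, path + [crew]))
    return None

solution = solve()
print(len(solution), "crossings")       # should print 11, the figure cited above
for n, crew in enumerate(solution, 1):
    print(f"{n:2}. {'-> ' if n % 2 else '<- '}{' + '.join(crew)}")
```

A blind, exhaustive search dispatches in milliseconds what a frontier "reasoning" model couldn't manage, which is rather the point.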

Turns out this simple brain teaser was impossible for Claude 3.7 Sonnet, one of the most advanced "reasoning" AIs, to solve. It couldn't even get past the 4th move before making illegal moves and breaking the rules.

Yet the exact same AI could flawlessly solve the Tower of Hanoi puzzle with 5 disks - a much more complex challenge requiring 31 perfect moves in sequence.
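Tower of Hanoi, by contrast, is probably the most-published recursion example in computer science, which is exactly why the training data is saturated with it. The whole solution is a few lines:

```python
# Textbook Tower of Hanoi: move n disks from src to dst via spare.
# The optimal solution is 2**n - 1 moves; n=5 gives the
# "31 perfect moves" mentioned above.
def hanoi(n, src, dst, spare, moves):
    if n == 0:
        return
    hanoi(n - 1, src, spare, dst, moves)   # clear the n-1 smaller disks
    moves.append((src, dst))               # move the largest remaining disk
    hanoi(n - 1, spare, dst, src, moves)   # restack the smaller disks on top

moves = []
hanoi(5, "A", "C", "B", moves)
print(len(moves))   # 31
```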

Why the massive difference? The Apple researchers figured it out: Tower of Hanoi is a classic computer science puzzle that appears all over the internet, so the AI had memorized thousands of examples during training. But a river crossing puzzle with 3 people? Apparently too rare online for the AI to have memorized the patterns.

This is all evidence that these models aren't reasoning at all. A truly reasoning system would recognize that both puzzles involve the same type of logical thinking (following rules and constraints), just with different scenarios. But since the AI never learned the river crossing pattern by heart, it was completely lost.

This wasn’t a question of compute either: the researchers gave the AI models unlimited token budgets to work with. But the really bizarre part is that for puzzles or questions they couldn’t solve - like the river crossing puzzle - the models actually started thinking less, not more; they used fewer tokens and gave up faster.

A human facing a tougher puzzle would typically spend more time thinking it through, but these 'reasoning' models did the opposite: they basically “understood” they had nothing to parrot so they just gave up - the opposite of what you'd expect from genuine reasoning.

Conclusion: they’re indeed just gifted parrots, or incredibly sophisticated copy-paste machines, if you will.

This has profound implications for the AI future we’re all sold. Some good, some more worrying.

The first one being: no, AGI isn’t around the corner. This is all hype. In truth we’re still light-years away.

The good news about that is that we don’t need to be worried about having "AI overlords" anytime soon.

The bad news is that we might potentially have trillions in misallocated capital.

