
Re: Do AI Models Think?

By: Fiz in GRITZ | Recommend this post (0)
Mon, 23 Jun 25 10:20 PM | 11 view(s)
Grits Breakfast of Champeens!
Msg. 10268 of 10352
(This msg. is a reply to 10265 by De_Composed)


De: Have you seen this? I don't know. It is only an article. I am quite sure we are not going to stop the continued roll-over of humanity as we know it. And I'm not sure we *should*; humanity as we know it, on the aggregate, is pretty boring ... when it isn't busy being disgusting.

I AM hoping there is some way to augment the best of us with the best of AI, without really destroying either. For example, I would love to have a vastly better and more reliable memory. I just don't want it taking over my sense of agency in the process.

http://www.msn.com/en-us/technology/artificial-intelligence/chatgpt-confidently-loses-chess-match-to-1979-atari-game/ar-AA1HgDRN

During a conversation with ChatGPT about artificial intelligence and chess, ChatGPT touted its skill in the game. The chatbot claimed it could easily beat Video Chess, a 4KB chess game for the 1970s-era Atari VCS. It went… poorly. As described in a LinkedIn post, software engineer Robert Caruso set up an Atari emulator and spent ninety minutes watching as ChatGPT "confused rooks for bishops, missed pawn forks, and repeatedly lost track of where pieces were."

Caruso switched to standard chess notation when ChatGPT complained about the Atari icons, but it continued to have difficulty with board awareness. The chatbot kept asking to start over and try again before eventually giving up. Atari Video Chess was released in 1979 (not 1977, as the post states) and is, by most accounts, a decent chess engine.
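As an aside, the board-awareness failure is easy to picture in code. A dedicated chess program keeps an explicit, authoritative board state and can mechanically refuse impossible moves, while a chatbot has to re-infer the position from the conversation text on every turn. Here's a toy sketch to make the point (not a real engine; the board is just a dictionary and the move rules are drastically simplified):

```python
# Toy illustration: a chess program maintains explicit board state, so it
# cannot "lose track of where pieces are" the way a language model
# reconstructing the position from text can. Deliberately minimal; only the
# squares we touch are modeled, and no real chess rules are enforced.

def new_board():
    # Map square -> piece for a toy subset of the starting position.
    return {"e2": "P", "g1": "N", "e7": "p"}

def move(board, src, dst):
    # Refuse moves from empty squares instead of hallucinating a piece.
    if src not in board:
        raise ValueError(f"no piece on {src}")
    board[dst] = board.pop(src)

board = new_board()
move(board, "e2", "e4")      # 1. e4
move(board, "e7", "e5")      # 1... e5
move(board, "g1", "f3")      # 2. Nf3

print(board["f3"])           # the knight really is on f3 -> prints "N"

try:
    move(board, "g1", "f3")  # g1 is now empty; the program knows that
except ValueError as err:
    print("rejected:", err)  # -> rejected: no piece on g1
```

The chatbot has no such structure to consult, which is why it "repeatedly lost track of where pieces were."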

It should be no surprise that even a decades-old chess engine would beat a large language model in chess, despite Caruso declaring it a "stunning victory." A large language model like ChatGPT is not true artificial intelligence; it is more like superpowered autocomplete. Even the most sophisticated models constantly spout nonsense with pure confidence ...(continued)
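The "superpowered autocomplete" point can be made concrete with a toy model. A real LLM is vastly more sophisticated, but the shape of the computation is the same: context in, most-likely next token out. A minimal bigram sketch:

```python
# Toy illustration of "superpowered autocomplete": a language model
# ultimately just scores possible next tokens given the text so far.
# This is a tiny bigram model, nowhere near a real LLM, but it predicts
# with no understanding of chess, knights, or anything else.
from collections import Counter, defaultdict

corpus = "the knight takes the pawn and the rook takes the knight".split()

# Count which word follows which (the bigram "model").
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def autocomplete(word):
    # Emit the most frequent continuation seen in training.
    return following[word].most_common(1)[0][0]

print(autocomplete("the"))    # -> "knight" (seen twice after "the")
print(autocomplete("takes"))  # -> "the"
```

Scale the counts up to billions of parameters and the output gets eerily fluent, but the model is still pattern-matching, not reasoning about a board.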


- - - - -
- - - - -
The above is a reply to the following message:
Re: Do AI Models Think?
By: De_Composed
in GRITZ
Mon, 23 Jun 25 9:11 PM
Msg. 10265 of 10352

fizzy:

Re: “Short answer, per the article, "No".”
I am just six chapters into Mustafa Suleyman's book, "The Coming Wave," but I've already learned enough to say that the author of your article is an idiot. He doesn't seem to know ANYTHING about AI. Here are some things that he doesn't seem to know:

When a computer beat the world chess champion, it did so with brute force. There's nothing intelligent about that. But when a computer - one of Suleyman's computers - became the world's Go champion, it used strategies that no human had ever seen. That's because it had been practicing against another DeepMind computer and had played hundreds of millions of games. The two trained themselves in all sorts of strategies humans hadn't employed or even conceived of in thousands of years. In its second game against the champ, the DeepMind A.I. made a "mistake" that had some in the audience snickering. Then it destroyed the world champ (for the second time).

The A.I. didn't defeat the old champ by using brute force. It had figured some things out.

A second thing your article's author doesn't get is that there are many A.I.s out there, and they have many specialties. An A.I. that is strong on robotic movement is not going to be good at the river crossing puzzle. Claude's strength is text generation, not logic puzzles. But a "general" A.I. is on the way. Give it 3-4 years.

Third, A.I.'s sophistication is growing at great speed by using a technique the author's company invented. He acknowledges that some say A.I. will hit roadblocks before long and will not continue advancing at this rate. He finds that funny, though, because his technique is just the FIRST self-learning technique that caught on. He is certain that there will be others as soon as the need arises. Right now, A.I. is advancing so quickly - at thousands of times the rate of Moore's law - that everyone is content.

Now I've thought of a fourth one.

A.I. is not the only up-and-coming technology we need to think about. Biotech is just as important and advancing just as fast. The FUSION of A.I. with biotech spawns new technologies, and that's happening already.

Your author claims nothing good comes from A.I. But does he know that prior to 2022, about 0.1% of protein structures in existence had been mapped? About 190,000. That year, DeepMind uploaded some 200 million structures in one go, representing almost all known proteins. I'd say that's pretty good.
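For what it's worth, the numbers above hang together: roughly 190,000 mapped structures out of some 200 million known proteins is indeed about 0.1%. A one-line check:

```python
# Sanity check on the figures quoted above: ~190,000 mapped protein
# structures out of ~200 million known proteins is roughly 0.1%.
mapped_before_2022 = 190_000
known_proteins = 200_000_000

share = mapped_before_2022 / known_proteins
print(f"{share:.3%}")   # -> 0.095%, i.e. about 0.1%
```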

And that's what I mean by the fusion of A.I. with other technologies.

When your author says A.I. "can't solve problems they haven’t been trained on," it makes me scratch my head. A.I. is in its infancy but it's improving at an exponential rate. We don't know what it WILL do, but we already know that it's done a lot and is getting better at a blistering pace.

Does it actually think? No. Not yet. But I'm not as sure as I once was that most people do either. Ultimately, who cares?





