Discussion about this post

Herbert Roitblat

As you note, these results do not mean what the headlines would suggest.

I want to talk about one of your points a bit more. Affirming the consequent is a logical fallacy: comparing a model's performance against a benchmark provides no evidence about the means by which that performance was achieved. The cookies being missing from the cookie jar does not mean that Joey took them. Playing championship chess does not mean that the computer has achieved artificial general intelligence. Seeing a shape that resembles a face on Mars does not mean that the face was drawn by Martians.
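To make the schema explicit: let P stand for "the system reasons the way the headlines imply" and Q for "the system scores well on the benchmark" (my glosses, chosen for illustration). The benchmark argument observes Q and concludes P, which is the invalid form in this minimal LaTeX sketch:

```latex
\documentclass{article}
\usepackage{amsmath,amssymb}
\begin{document}
% P = "the system reasons as claimed", Q = "the system scores well":
% illustrative glosses, not part of the original argument.
\[
\frac{P \rightarrow Q \qquad P}{Q}\ \text{(modus ponens: valid)}
\qquad
\frac{P \rightarrow Q \qquad Q}{P}\ \text{(affirming the consequent: invalid)}
\]
% Countermodel for the invalid form: P false, Q true makes both
% premises true and the conclusion false.
\[
P \rightarrow Q,\; Q \;\nvdash\; P
\]
\end{document}
```

Both premises can be true while the conclusion is false: a system can score well (Q) without reasoning the way we hoped (P), which is precisely the missing-cookies situation.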

As you said: “A system that gets the right answer for the wrong reason is not a system you can trust.”

I believe that AGI is achievable, but language modeling is not sufficient to achieve it. It is language modeling, not intelligence modeling. For example, I could learn to recite a speech in a foreign language that I do not understand. That speech may, in fact, be brilliant, but that does not indicate that I am brilliant. Simulating intelligence is easier than implementing intelligence.
