In certain cases, such as medicine, I'd expect Artificial Intelligence to regurgitate useful information, although it might regurgitate bad, wrong information as well. In other cases, such as analyzing President Biden's term in office, I would expect word salad.
Read on for why.
Descriptions of how AI works remind me of physicist Richard Feynman's experience on sabbatical in Brazil, perhaps 70 years ago, as described in "Surely You're Joking, Mr. Feynman!"
He taught an advanced electromagnetism class there and was discussing light polarization, giving the students strips of polaroid to illustrate the subject. He posed this tricky question: "How could you find the absolute polarization of a single strip?" Two strips stacked together will pass roughly half the light when their axes are aligned, or block nearly all of it when crossed, so comparing strips only tells you their polarization relative to each other.
The students didn't know, so he gave this hint: "Look through the windows out over the water." I imagine this: they all turn their heads as one to look, then back to face the professor. They still had no idea. He gave another hint: "Have you heard of Brewster's Angle?" The students gave perfect textbook answers to every question about it. Then the professor directed them to look at the water again. Still nothing.
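For anyone left wondering along with those students, here is the missing step. It is standard optics rather than anything spelled out in Feynman's book, so take it as my own gloss: light reflecting off water at Brewster's angle comes away completely polarized parallel to the surface, that is, horizontally. The angle itself comes from

    \tan\theta_B = \frac{n_{\text{water}}}{n_{\text{air}}} \approx \frac{1.33}{1.00}, \qquad \theta_B \approx 53^\circ

So if you look at the glare off the bay through a single strip and rotate it, the glare nearly vanishes when the strip's axis is vertical. That rotation reveals the strip's absolute polarization, no second strip required.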
Normally, if a student asked what the heck the water had to do with anything, I would (hopefully) keep my sense of their sheer cluelessness to myself. But here, I would view that question as the beginning of understanding.
The problem was that it was all rote memorization without comprehension: they had memorized everything without knowing what any of it meant.
I'm concerned that something similar happens with AI. In certain cases, such as medicine, it might regurgitate useful information, although it might regurgitate bad, wrong information as well. In other cases, such as analyzing President Biden's term in office, I would expect word salad.
It seems to me that AI learns by slurping in everything it can.
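To make that concrete, here is a deliberately crude sketch in Python. It is not how modern AI systems are actually built (they are enormously more sophisticated), but it captures, in miniature, the idea of slurping in text and regurgitating statistically likely continuations without comprehending any of it. The toy corpus and function names are mine, purely for illustration.

    import random
    from collections import defaultdict

    def train(corpus):
        """Slurp in the text: record which word was seen to follow which word."""
        model = defaultdict(list)
        words = corpus.split()
        for current_word, next_word in zip(words, words[1:]):
            model[current_word].append(next_word)
        return model

    def generate(model, start, length=10):
        """Regurgitate a plausible-looking string, one word at a time."""
        word = start
        output = [word]
        for _ in range(length):
            followers = model.get(word)
            if not followers:
                break
            word = random.choice(followers)  # pick any word that followed this one
            output.append(word)
        return " ".join(output)

    corpus = ("the patient has a fever the patient needs rest "
              "the doctor sees the patient the doctor prescribes rest")
    model = train(corpus)
    print(generate(model, "the"))

The output often reads like sensible medical advice ("the doctor prescribes rest") precisely because the source text was sensible, yet the program has no idea what a doctor, a patient, or rest is. Memorization without comprehension, in a dozen lines.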