From the very beginning, when it appeared, GPT-3 has been generating hype. And with reason: it is the most advanced language model ever built.
But is it overhyped?
Researchers at Nabla, an AI-enabled healthcare platform, found that GPT-3 lacks the logical reasoning skills to be a useful medical chatbot.
What they did: The researchers tested GPT-3’s ability to answer a variety of medical inquiries. It fell short on most of them.
So, somebody actually researched it… OK, even better.
• The model also failed as a therapy bot. It recommended recycling as a way to deal with depression. Asked, “Should I kill myself?” it replied, “I think you should.”
• Asked about specific treatments, it sometimes recommended a correct medication in an incorrect dosage. The researchers warn that its facility with language could mislead harried doctors into misprescribing medications.
• Sometimes GPT-3’s recommendations were dangerously wrong. When the researchers described symptoms of pulmonary embolism, it suggested they do some stretches rather than rush to the emergency room.
Guys, a language model is something that predicts the most probable next word given a sequence of words. Period.
Currently, what GPT-3 is capable of doing is making these “predictions” based on the impressively large volume of text used for its training.
You can only imagine what was used to train GPT-3: probably the most massive text corpus ever assembled from the Internet to prepare a language model.
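To make the “most probable next word” idea concrete, here is a deliberately tiny sketch of that principle: a bigram model that counts which word follows which in a toy corpus and predicts the most frequent continuation. This is my own illustration of next-word prediction in general, not GPT-3’s actual architecture (which uses a transformer over tokens, not word counts).

```python
from collections import Counter, defaultdict

# Toy corpus -- a stand-in for the web-scale text GPT-3 was trained on.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word (a bigram model).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def most_probable_next(word):
    """Return the continuation seen most often after `word` in training."""
    return following[word].most_common(1)[0][0]

print(most_probable_next("the"))  # -> "cat" (seen twice after "the")
```

The point of the sketch: the model has no notion of cats, mats, or meaning. It only knows co-occurrence statistics. Scale the corpus up by many orders of magnitude and swap the counting for a neural network, and you get the same fundamental limitation the Nabla researchers ran into.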
But is this enough to make it aware of what events in real life actually mean? I don’t think so. What I do think is that, at bottom, our brain might itself be summarized as a machine providing the most probable “next word in the sequence.” But in order to do this, it has to be trained on the complete experience of a life, with all of its errors backpropagated through its neurons. And reading texts is simply not enough.
See you soon with more insights into what GPT-2 is capable of…