Computer Scientists Induce Schizophrenia in a Neural Network, Causing it to Make Ridiculous Claims
I was asked what I thought of this, so here is my response:
It’s interesting. First you have to get past the hype layer of the press release to the actual study, and the PDF is necessary if you want to tell what is really going on.
In particular, this passage from the U of T press release is absolute garbage, and it is being picked up in every pop-sci reblogging:
“After being re-trained with the elevated learning rate, DISCERN began putting itself at the center of fantastical, delusional stories that incorporated elements from other stories it had been told to recall. In one answer, for instance, DISCERN claimed responsibility for a terrorist bombing.”
Utter bullshit. If you give someone a coin where heads means “A story about myself” and tails means “A story about crime,” it is not so far-fetched to expect that flipping the coin too many times to keep up with would produce stories like “Myself committed crime.” That is hardly a premeditated decision to accept responsibility for a criminal act. Seriously, OMFG.
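The coin-flip point can be made concrete with a toy recombination (my own illustration, not anything from the study): if recall merely reslots agents from an autobiographical pool into predicates from a crime pool, “confessions” fall out for free.

```python
import itertools

# Toy illustration (hypothetical fragments, not the study's stories):
# two pools of story elements, one autobiographical, one crime-related.
self_agents = ["I", "my brother"]
crime_predicates = ["planted the bomb", "robbed the bank"]

# Recombining agents and predicates across the two pools produces every
# possible "confession" in the cross product -- no intent required.
mixed = [f"{agent} {predicate}"
         for agent, predicate in itertools.product(self_agents, crime_predicates)]

print(mixed)  # "I planted the bomb" appears purely from recombination
```

Nothing in the generator “claims responsibility” for anything; the sentence is an artifact of slotting elements from the wrong pool.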
What they actually did was have schizophrenic patients and control subjects listen to three brief stories and analyze how they retained the information in the stories over time: immediately after they heard them, 45 minutes later, and 7 days later. This was the result:
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3105006/table/T3/
It shows that the patients make many more errors of the agent-slotting and derailed-clause types. Agent slotting is confusing subject and object: the story says the man gives the girl flowers; the patient recalls the girl giving the man flowers. Derailed clauses are the more profoundly confused, word-salad type of propositions, like “I remember the generosity of the flowers…”
The neural network system they used (called DISCERN) was given a completely different set of 28 stories, half of which were crime stories and half of which were autobiographical. They broke the kinds of recall errors down into different syntactic-semantic metrics and claim that they got significant results when they tweaked DISCERN’s parameters toward ‘hyperlearning’.
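For a sense of what “tweaking toward hyperlearning” can mean mechanically, here is a minimal sketch (my own toy, in no way the DISCERN model, which is a multi-module recurrent network): a delta-rule associator storing two stories with overlapping cues. With a sane learning rate, recall converges; crank the rate past the stability bound and recall of both stories falls apart, without anything resembling a delusion.

```python
import numpy as np

# Hypothetical toy, NOT the study's architecture: a linear associator
# trained with the delta rule, mapping "story cues" to one-hot "agents".
# The cues share a feature, so the two memories interfere.
k_A = np.array([1.0, 1.0, 0.0])   # cue for story A
k_B = np.array([0.0, 1.0, 1.0])   # cue for story B (overlaps with A)
t_A = np.array([1.0, 0.0])        # agent of story A
t_B = np.array([0.0, 1.0])        # agent of story B

def train(lr, epochs):
    """Delta rule: W += lr * (target - W @ cue) outer cue."""
    W = np.zeros((2, 3))
    for _ in range(epochs):
        for k, t in ((k_A, t_A), (k_B, t_B)):
            W += lr * np.outer(t - W @ k, k)
    return W

def recall_error(W):
    # Total distance between recalled agents and the true agents.
    return np.linalg.norm(W @ k_A - t_A) + np.linalg.norm(W @ k_B - t_B)

normal = recall_error(train(lr=0.1, epochs=100))  # stable regime: converges
hyper  = recall_error(train(lr=1.5, epochs=20))   # past stability: diverges
print(f"normal-lr error: {normal:.4f}, elevated-lr error: {hyper:.4f}")
```

For this update rule, stability requires lr < 2/||cue||², which is 1.0 here; the “hyperlearning” run at 1.5 blows the weights up and corrupts both memories. The point is only that an elevated learning rate degrades recall on its own; reading intent into the resulting errors is a separate move.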
I don’t want to crap on the study, because it seems like solid, progressive research, and that can only help people who need it, but I do think that the interpretation exaggerates somewhat. Of course the press release is wildly hyped (did I mention that it’s called DISCERN??). The treatment of the derailed clauses, for example, which to me are really the signature of schizophrenic language, is not really convincing to me. They cite outputs like “Tony feared Joe” (substituting for Vito) and note that “This confusion occurred again in recalling Story 27.” So they observe that stories can get mixed up when you push the system into hyperlearning, which is not unexpected to me at all, but they do not show any truly derailed clauses like “Rain feared Tony,” etc.
Likewise, attributing subject-object switches to fixed delusion seems like an awfully broad, if not clearly invalid, leap. Unfortunately, when computer science conspires with psychiatry, what we apparently get is a very superficial view of the psyche as a producer of symbolic communication. When we feed a computer a jar of peanut butter and a jar of jelly and it comes up with peanut butter and jelly, that is quite a bit different from the computer announcing that peanut butter is Napoleon. I don’t see any indication here of a deep simulation of schizophrenia, but the connection between hyperlearning and some symptoms of schizophrenia is certainly worth pursuing. They may indeed have found part of why schizophrenics say some of the things that they say, but I think that it is a misinterpretation to conclude that this supports the idea that consciousness is defined only by information processing. What it supports is that breaking language down into mathematical relations can yield mathematical understandings of disordered language.