The fallacy of accidental knowledge in AI

Ok, so I want to propose a new fallacy in the way people judge artificially intelligent agents: the fallacy of accidental knowledge. This fallacy is about misjudging the nature of knowledge: assuming some kind of knowledge to be fundamental to cognition when, in reality, it is just learned knowledge, acquired through the accidents of a human's autobiography.

This fallacy is an error in evaluating the strengths and weaknesses of an AI. It happens when an AI system models a domain that is all too familiar to the human evaluating it. The AI makes a mistake the human can easily detect, and the human judge then draws general conclusions about how AI systems work, which usually include statements about how AIs will never learn to perform commonsense reasoning.

The fallacy here rests on the fact that much of the commonsense knowledge used by humans has been acquired in anecdotal form: through stories, or through real-world situations amounting to anecdotes.

The mistake made by the AI means only that it has not yet been presented with the appropriate anecdotes; it says nothing about its reasoning powers. The problem with the fallacy of accidental knowledge is that it pushes AI developers to look for deep, systemic solutions, instead of simply providing the AI with the missing anecdotal knowledge.

My recent personal experience with Xapagy: the paper presented at AGI-14 has several examples of the agent reasoning about the outcome of the fight between Achilles and Hector, based on previous fights it had witnessed. And indeed, the agent predicts that Achilles will kill Hector.

Ok, so at this point I was wondering what Achilles would do next, and I decided to run the continuations beyond the death of Hector. Well, the next event predicted by the agent was that Hector would strike Achilles with his sword.

Stupid system! Didn't it say, just in the previous sentence, that Hector had been killed? Well, yes, but with the given autobiography, the agent had no way to know that dead people don't continue to fight. This is not a trivial thing: children take a long time to learn what death properly means, and it is not quite clear which personal experiences are sufficient for correct inference in this case.
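
To make this concrete, here is a toy sketch in Python of prediction by analogy with witnessed anecdotes. This is emphatically not Xapagy's actual continuation machinery; the event representation, the verb classes, and all the helper names are hypothetical, chosen only to illustrate the point. The predictor treats "kills" as just another attack-like verb, because nothing in its witnessed episodes distinguishes it, so it happily predicts that the freshly killed Hector strikes back.

# Toy sketch of anecdote-driven next-event prediction (not Xapagy's
# actual mechanism). Episodes are lists of (actor, verb, patient) events;
# the predictor recalls a witnessed episode whose prefix aligns with the
# current story and copies what happened next, with characters renamed.

from typing import Dict, List, Optional, Tuple

Event = Tuple[str, str, str]   # (actor, verb, patient)

# The only "semantics" the agent has: which verbs count as attack-like.
ATTACK_VERBS = {"strikes", "kills"}

def same_kind(v1: str, v2: str) -> bool:
    """Coarse verb similarity: 'kills' is just another attack, nothing more."""
    return v1 == v2 or (v1 in ATTACK_VERBS and v2 in ATTACK_VERBS)

def align(context: List[Event], prefix: List[Event]) -> Optional[Dict[str, str]]:
    """Map the characters of a recalled prefix onto the current context,
    event by event from the end; fail if verbs or roles do not line up."""
    mapping: Dict[str, str] = {}
    for (a1, v1, p1), (a2, v2, p2) in zip(reversed(context), reversed(prefix)):
        if not same_kind(v1, v2):
            return None
        for old, new in ((a2, a1), (p2, p1)):
            if mapping.get(old, new) != new:
                return None            # inconsistent character mapping
            mapping[old] = new
    return mapping

def predict_next(story: List[Event], memory: List[List[Event]]) -> Optional[Event]:
    """Recall an episode whose prefix aligns with the story and copy what
    happened next there, renamed to the current protagonists."""
    for episode in memory:
        for i in range(len(story), len(episode)):
            mapping = align(story, episode[:i])
            if mapping is not None:
                a, v, p = episode[i]
                return (mapping.get(a, a), v, mapping.get(p, p))
    return None

# A fight witnessed earlier: blows simply alternate, nobody stops fighting.
memory = [[("Hector", "strikes", "Patroclus"),
           ("Patroclus", "strikes", "Hector"),
           ("Hector", "strikes", "Patroclus"),
           ("Patroclus", "strikes", "Hector")]]

story = [("Achilles", "strikes", "Hector"),
         ("Hector", "strikes", "Achilles"),
         ("Achilles", "kills", "Hector")]

# Prints ('Hector', 'strikes', 'Achilles'): nothing in the witnessed
# anecdotes says that a killed character stops acting.
print(predict_next(story, memory))

In a sketch like this, the cure is not a deeper inference engine but another anecdote: an episode in which a killed character simply stops appearing as an actor.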