Coincidence? I think not. Admissible heuristics in A* search and human cognitive biases

I always wondered whether anybody had drawn this parallel before. I am sure someone has, but I couldn’t find anything on the web, so I might as well write it up.

Part 1: A* search (for non-technical people)

A* is a search algorithm used in artificial intelligence and robotics. It is a way to search for solutions to a problem. One can, of course, find solutions by randomly trying things (1) or by methodically trying everything (2). What A* does is use some knowledge about how close we are to a solution – this is called a heuristic. Basically, imagine that the heuristic is playing a hot-cold game: as you search, it tells you “freezing”, “cold”, “getting warmer”, “hot!”.

Now, of course, if you genuinely knew the exact distance to the solution, you wouldn’t need to search at all – you could just walk there directly. So the heuristic is normally just an approximation. You would assume that the closer the heuristic is to reality, the better for the search, but it turns out that things are more bizarre than that. It is provable that the good heuristics are the ones that never overestimate the distance to the solution, that is, they are optimistic (3). These kinds of heuristics will tell you “warm” when it is merely “cold”, and “hot!” when it is merely “warm”. Even a heuristic that always yells “hot!” (4) is still better (5) than one that approximates the true distance more closely, but from the pessimistic side. Note that this is a formally provable result.

How do we create such heuristics? Most of the time, what we do is take the original problem and either (a) ignore some of its difficulties – for example, assume that there are no traffic jams (6) – or (b) attribute superpowers to ourselves.
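To make this concrete, here is a minimal sketch of A* in Python on a small grid with a wall. The grid, wall positions, and function names are mine, purely for illustration. The Manhattan-distance heuristic comes from exactly the relaxation in (a): it measures the distance to the goal while pretending the wall does not exist, so it can never overestimate the true distance – it is admissible.

```python
import heapq

def a_star(start, goal, neighbors, h):
    # Frontier is a min-heap ordered by f = g + h(n): the cost paid so far
    # plus the heuristic's optimistic guess of the remaining distance.
    frontier = [(h(start), 0, start, [start])]
    best_g = {start: 0}  # cheapest known cost to reach each node
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        for nxt, step_cost in neighbors(node):
            g2 = g + step_cost
            if g2 < best_g.get(nxt, float("inf")):
                best_g[nxt] = g2
                heapq.heappush(frontier, (g2 + h(nxt), g2, nxt, path + [nxt]))
    return None  # no path exists

# A 5x5 grid with a wall blocking the direct route from (0, 0) to (4, 0).
WALLS = {(2, 0), (2, 1), (2, 2), (2, 3)}
GOAL = (4, 0)

def neighbors(cell):
    x, y = cell
    for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
        if 0 <= nx < 5 and 0 <= ny < 5 and (nx, ny) not in WALLS:
            yield (nx, ny), 1  # every step costs 1

def manhattan(cell):
    # Relaxed problem: pretend the wall does not exist ("no traffic jams").
    # This never overestimates the true distance, hence it is admissible.
    return abs(cell[0] - GOAL[0]) + abs(cell[1] - GOAL[1])

path = a_star((0, 0), GOAL, neighbors, manhattan)
```

The heuristic of footnote (4), h(x) = 0, is also admissible – calling `a_star((0, 0), GOAL, neighbors, lambda c: 0)` still returns a shortest path, it just degenerates into uniform cost search and explores more of the grid before finding it.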

Part 2: Some cognitive biases

Ok, here I will rely mostly on our good friend Wikipedia. Basically, a cognitive bias is a human reasoning pattern that psychologists believe to be “irrational” or “illogical”. Here are some examples:

  • The planning fallacy, first proposed by Daniel Kahneman and Amos Tversky in 1979, is a phenomenon in which predictions about how much time will be needed to complete a future task display an optimism bias: they underestimate the time needed.
  • The optimism bias (also known as unrealistic or comparative optimism) is a cognitive bias that causes a person to believe that they are less at risk of experiencing a negative event compared to others.
  • The illusion of control is the tendency for people to overestimate their ability to control events; for example, it occurs when someone feels a sense of control over outcomes that they demonstrably do not influence.
  • Illusory superiority is a cognitive bias whereby individuals overestimate their own qualities and abilities relative to others. This is evident in a variety of areas, including performance on tasks or tests and the possession of desirable characteristics or personality traits.


So, is this a coincidence or not? Well, it hinges on whether the human problem-solving style is anything like A* search. We are certainly very bad at systematically searching for something, we are bad at backtracking, and everybody loves the hot-cold game.

(1) stochastic search
(2) uniform cost search, for instance
(3) admissible heuristics
(4) h(x) = 0
(5) What “better” means in this context is a bit more complicated. Let us say that if the heuristic is pessimistic, you will probably not find the best solution.
(6) Problem relaxation