Archive for the ‘Artificial intelligence’ Category

A modest proposal to change the notation of Boolean algebra

Saturday, October 8th, 2016

It is always fun to explain to people what “and” and “or” mean in Boolean algebra. Or how cool it is that they don’t mean the same thing as in English. Trying to pretend that while their meaning in English is unclear (it is not), in Boolean algebra they are well defined. Trying to imply that the world would be a better place if only people used “and” and “or” in their daily life with their Boolean algebra semantics.

Well, ok. Maybe we could propose changing English to suit Boolean algebra. Or, here is a more modest proposal: let us change the Boolean algebra notation to match English better:


A or B ---> A and/or B
A and B ---> both A and B
A xor B ---> A or B
A -> B ---> if A then surely B (but it can also be B even if not A)
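
To pin down the truth tables under the proposed names, here is a minimal sketch in Python; the function names are hypothetical, invented just for this post:

def and_or(a, b):      # Boolean "or": true if at least one of the two holds
    return a or b

def both_and(a, b):    # Boolean "and": true only if both hold
    return a and b

def english_or(a, b):  # English "or": exactly one of the two holds (xor)
    return a != b

for a in (False, True):
    for b in (False, True):
        print(a, b, and_or(a, b), both_and(a, b), english_or(a, b))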

Coincidence? I think not. Admissible heuristics in A* search and human cognitive biases

Friday, October 7th, 2016

I have always wondered whether anybody has made this parallel before. I am sure that some people have, but I couldn’t find anything on the web, so I might as well write it up.

Part 1: A* search (for non-technical people)

A* is a search algorithm used in artificial intelligence and robotics. It is a way to search for solutions to a problem. One can, of course, find solutions by randomly trying out stuff (1) or by methodically trying out everything (2). What A* does is try to use some knowledge about how close we are to a solution; this is called a heuristic. Basically, try to imagine that the heuristic is playing a hot-cold game: as you search, it tells you “freezing”, “cold”, “getting warmer”, “hot!”.

Now, of course, if we genuinely knew the exact distance to the solution, we wouldn’t even need to search; we could just walk there directly. So the heuristic is normally just an approximation. We would assume that the closer the heuristic is to reality, the better for the search, but it turns out that things are more bizarre than that. It is provable that the good heuristics are the ones that underestimate the distance to the solution, that is, they are optimistic (3). These kinds of heuristics will tell you “warm” when it is merely “cold”, and “hot!” when it is merely “warm”. Even a heuristic which always yells “hot!” (4) is still better (5) than one that approximates the distance better, but from the pessimistic side. Note that this is a formally provable result.

How do we create such heuristics? Most of the time what we do is take the original problem and (a) ignore some of its difficulties, such as assuming that there are no traffic jams (6), or (b) attribute superpowers to ourselves.
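
To make this concrete, here is a minimal sketch in Python, on a toy grid of my own choosing (not something from the post): A* guided by the Manhattan distance, which is admissible precisely because it is obtained by relaxing the problem, i.e., by ignoring the walls.

import heapq

def manhattan(a, b):
    # Taxicab estimate; it ignores the walls, so it never overestimates
    # the true cost: an admissible, "optimistic" heuristic.
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def a_star(start, goal, walls, width, height):
    # Each frontier entry: (f = g + h, g = cost so far, node, path to node)
    frontier = [(manhattan(start, goal), 0, start, [start])]
    best_g = {start: 0}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (node[0] + dx, node[1] + dy)
            if not (0 <= nxt[0] < width and 0 <= nxt[1] < height) or nxt in walls:
                continue
            new_g = g + 1
            if new_g < best_g.get(nxt, float("inf")):
                best_g[nxt] = new_g
                heapq.heappush(frontier,
                               (new_g + manhattan(nxt, goal), new_g, nxt, path + [nxt]))
    return None  # no path exists

# A 4x4 grid with a wall: the heuristic cheerfully ignores the "traffic jam".
walls = {(1, 0), (1, 1), (1, 2)}
print(a_star((0, 0), (3, 0), walls, width=4, height=4))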

Part 2: Some cognitive biases

Ok, here I will need to rely mostly on our good friend Wikipedia. Basically, a cognitive bias is a human reasoning pattern which psychologists believe to be “irrational” or “illogical”. Here are some examples:

  • The planning fallacy, first proposed by Daniel Kahneman and Amos Tversky in 1979, is a phenomenon in which predictions about how much time will be needed to complete a future task display an optimism bias (underestimate the time needed).
  • The optimism bias (also known as unrealistic or comparative optimism) is a cognitive bias that causes a person to believe that they are less at risk of experiencing a negative event compared to others.
  • The illusion of control is the tendency for people to overestimate their ability to control events; for example, it occurs when someone feels a sense of control over outcomes that they demonstrably do not influence.
  • Illusory superiority is a cognitive bias whereby individuals overestimate their own qualities and abilities relative to others. This is evident in a variety of areas, including performance on tasks or tests and the possession of desirable characteristics or personality traits.

Discussion

So, is this a coincidence or not? Well, it hinges on whether the human problem-solving style is anything like A* search. We are certainly very bad at systematically searching for something, we are bad at backtracking, and everybody loves the hot-cold game.


Notes:
(1) stochastic search
(2) uniform cost search, for instance
(3) admissible heuristics
(4) h(x) = 0
(5) what “better” means in this context is a bit more complicated. Let us say that if the heuristic is pessimistic, you will probably not find the best solution.
(6) Problem relaxation

The fallacy of accidental knowledge in AI

Sunday, August 17th, 2014

Ok, so I want to propose a new fallacy in the way people judge artificially intelligent agents: the fallacy of accidental knowledge. This fallacy is basically about misjudging the nature of knowledge: assuming some kind of knowledge to be fundamental to cognition when, in reality, it is just learned knowledge, acquired through the accidents of the autobiography of a human.

This fallacy is an error in evaluating the strengths and weaknesses of an AI. It happens when an AI system models a domain which is too familiar to the human who is evaluating it. The AI makes a mistake easily detectable by the human. The human judge then draws general conclusions about the ways in which AI systems in general work, which usually include statements about how AIs will never learn to perform commonsense reasoning.

The fallacy here is based on the fact that much of the commonsense knowledge used by humans has been acquired in anecdotal form, or through real-world situations amounting to anecdotes.

The mistake made by the AI means only that it has not yet been presented with the appropriate anecdotes; it does not say anything about its reasoning powers. The problem with the fallacy of accidental knowledge is that it forces AI developers to look for deep, systemic solutions, instead of simply providing the AI with the missing anecdotal knowledge.

My recent personal experience with Xapagy: the paper presented at AGI-14 has several examples of reasoning about the outcome of the fight between Achilles and Hector, based on the agent’s experience with previous fights it had witnessed. And indeed, the agent predicts that Achilles will kill Hector.

Ok, so at this point I was wondering what Achilles would do next. So I decided to run the continuations beyond the death of Hector. Well, the next event predicted by the agent was that Hector would strike Achilles with his sword.

Stupid system! Didn’t it say, just in the previous sentence, that Hector was killed? Well, yes, but with the given autobiography the agent had no way to know that dead people don’t continue to fight. This is not a trivial thing: children take a long time to learn what death properly means, and it is not quite clear what personal experiences are sufficient for correct inference in this case.


New Scientist article about Xapagy

Tuesday, December 11th, 2012

New Scientist has published a short article about Xapagy, focusing mostly on the story generation aspect:

http://www.newscientist.com/article/mg21628945.700-storytelling-software-learns-how-to-tell-a-good-tale.html

It is a good article for general reading, and I am quite comfortable with it. It was based on last year’s crop of tech reports I had uploaded to arXiv. Since then, the Xapagy work has been more focused on the representation of tricky sentences and story segments, such as “If Clinton was the Titanic, the iceberg would have sunk” and the like.

No, I still don’t have synthetic autobiographies of sufficient size to start doing really interesting stuff, like creating whole stories from scratch. But slowly, slowly, it is getting to the point where one can translate almost anything to Xapi.

The importance of semantics in natural language understanding

Tuesday, April 13th, 2010

’Twas brillig, and the slithy toves
Did gyre and gimble in the wabe;
All mimsy were the borogoves,
And the mome raths outgrabe.

Q: What was the time of the day?

A: It was brillig.

Q: What were the toves doing?

A: They were gyring and gimbling.

Q: Were the borogoves slithy?

A: I don’t know. They were certainly mimsy.

Take this, Cyc.

Pick your science fiction idea here: Simulation

Sunday, August 9th, 2009

Some notes I had written previously about William Gibson’s book Idoru: how come in so many books and, especially, movies people assume that the computers of the future will have three-dimensional interfaces which we will try to manipulate the way we currently manipulate our physical environment?

As it happens, every time we try to implement a three-dimensional interface, we fail in a most miserable way. At the same time, our user interfaces have standardized on the overlapping windows, menus, and buttons approach, and this will not change in the foreseeable future.

Idea for science fiction authors: we are a simulation on somebody’s computer. Our attempts to build computers are just an incremental attempt to simulate the computer on which we ourselves are simulated. The fact that we are converging towards a windowing system only shows that our underlying OS is also windowing based. We are simulated in a future version of Windows! Bugs introduced today might still be present in that future version. Apocalyptic scenarios involving time travel and applying patches to future operating systems which simulate their own past ensue.

For the film version, this idea can be developed with the appropriate amount of romantic complications, car chases, expensive computer graphics, etc.

User modeling (was: stupid word processors)

Monday, March 23rd, 2009

I was editing an exam in OpenOffice, and I had to make a table with headings showing resources: r1, r2, r3… As I was typing them in, OpenOffice Writer was happily capitalizing them behind me: R1, R2, R3… As this was incorrect, I had to go back and change them back to r1, r2, r3… And OO was capitalizing them again: R1, R2, R3… I had to go through some significant acrobatics to make it leave them as they were (exiting the cell downwards rather than leftwards, and weird stuff like that).

Now, two issues:

  • Apparently the OpenOffice background processor cannot figure out that a word like r1 is probably not a regular lexical word subject to capitalization. Because, of course, English words regularly come with numbers in them. But this is the lesser problem.
  • It seems that the OpenOffice system does not have even a minimal model of the user. It only knows about the document (BTW, Microsoft Word is just like that). Well, if you are automatically doing things to the documents, like these programs do, then you probably see document editing as a task where you are trying to help the user achieve what they want. If that is what you really want, then probably the first rule is: “If you have made a change, and the user has gone back and reverted that change right away, then the user probably wants it like that, so do not change it back again”. What this means, though, is that you need a user model as well as a document model, and in this case the document model is overridden by the user model. Now, by the way, implementing this particular thing would be an afternoon’s work, if somebody wants to do it right: e.g., after I have fixed the first R1 -> r1, the system might guess that I don’t want it to mess with r2 in the next cell either (a minimal sketch of the rule is at the end of this post).

Now, I know that the OpenOffice guys have limited resources, but Microsoft???
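
Here is that minimal sketch of the “do not change it back again” rule, in Python; the class and method names are hypothetical, purely for illustration, and have nothing to do with the actual OpenOffice or Word code. It only remembers the exact words the user has reverted; guessing that r2 should be left alone after r1 was reverted would be the next, slightly harder step.

class AutoCorrector:
    def __init__(self):
        self.suppressed = set()   # corrections the user has rejected
        self.applied = {}         # corrected word -> original, to detect reverts

    def correct(self, word):
        # The "document model": capitalize the first letter.
        corrected = word[:1].upper() + word[1:]
        if corrected == word or word in self.suppressed:
            return word           # the user model overrides the document model
        self.applied[corrected] = word
        return corrected

    def user_edited(self, old_word, new_word):
        # The user changed a word we had just corrected: that is a revert.
        if self.applied.get(old_word) == new_word:
            self.suppressed.add(new_word)

corrector = AutoCorrector()
print(corrector.correct("r1"))      # "R1": the document model at work
corrector.user_edited("R1", "r1")   # the user puts it back
print(corrector.correct("r2"))      # still "R2": only exact words are remembered so far
corrector.user_edited("R2", "r2")
print(corrector.correct("r2"))      # "r2": the user model now wins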

Pinocchio and Thomas Aquinas

Wednesday, March 18th, 2009

It came to pass that in 1270 the students of the University of Paris gathered to hear a lecture on Aristotle by the famous Doctor Angelicus. How surprised they were when they were told that they would be presented with a virtual lecture: the lecturer would be represented by an avatar (a wooden doll with a long nose), and the text delivered by a ventriloquist.

When asked in an exit survey whether they thought that the delivery mode improved on the instruction (yes / somewhat / limited) and whether they could understand the ventriloquist (yes / somewhat / limited), many of them sneaked out through alternative exits.

The ones who showed up for the next lecture had a real interest in puppetry. The ones interested in Aristotle went somewhere else.

Consciousness, qualia, and a creature with an exploding brain

Thursday, August 14th, 2008

I don’t understand all this mystery talk surrounding consciousness and qualia (which appears to go on forever in certain artificial intelligence circles). I think that there are very satisfactory technical definitions for both of them.

Consciousness: the instantaneous state of the mind, including qualia (internal and external perceptions) and reflections. We assume that these are encoded in neural firing patterns, but we shouldn’t forget about the nerve input from various body parts and sensors, as well as the brain configuration and neurotransmitter levels which make a certain firing pattern possible.

Qualia: the part of the conscious state dealing with an external perception. I assume that qualia has fuzzy borders, and the remainder of the conscious state can be, to some degree, part of it.

Reflections: any part of the consciousness which does not deal with direct perception. It can deal with past qualia, past reflections, or future plans, and it can try to envision qualia or perceptions which it has not encountered yet, etc.

Why does qualia appear mysterious: because we are not able to reproduce it completely. A reflection about a past qualia cannot bring the complete qualia to mind (for obvious reasons). Examples of obvious reasons: we would need to reproduce the external sensory inputs, the neurotransmitter levels, etc.

Why does consciousness appear mysterious: because we are not able to reproduce it completely.

Another issue here is that when we say that we want to “understand” qualia / consciousness, we do not really mean that we want to reproduce a past qualia / conscious state. In fact, reproduction, with some level of approximation, is possible. E.g., I can experience the qualia of eating an apple when hungry. The experience of seeing the Grand Canyon, on the other hand, might be difficult to reproduce the second time, as the reflections about past visits will be part of the conscious state.

So the statement that qualia and consciousness are mysterious simply reflects the (correct) fact that a reflection about a qualia is not the same brain state as the original qualia. There is also the practical obstacle of the fixed wiring of the human brain: for instance, neural patterns in the first levels of sensory data processing (e.g., the primary visual cortex) are part of the visual qualia, and probably cannot be “borrowed” for a reflection.

I think that the only kind of being who can successfully reflect on its own consciousness is one which (a) has a dynamically reconfigurable brain and (b) has an exponentially expanding brain which at any given moment of time contains its complete previous conscious state as a reflection in the new conscious state.

Now, we can try to visualize for ourselves this creature with the exploding brain, and decide whether we want to be like him/her.

What is it like to be a bat (or Britney Spears, or me, yesterday)?

Sunday, August 10th, 2008

I was re-reading the classic Thomas Nagel paper “What is it like to be a bat?”. First, of course, one needs to accept the premise that there is something like “subjective consciousness”. But let us take this premise and run with it.

What Nagel is arguing is that there is no way for me, a human, to know what it is like to be a bat, because we cannot recreate the experiences of a bat. We have a different brain and body structure, we do not have wings, we do not have a sonar, and so on.

I think that he is right, but he is missing the real gap.

What about trying to understand what it is like to be Britney Spears? I don’t have her gender, age, or experiences. One might claim that structurally I am closer to Britney than to a bat, so maybe my understanding of what it is like to be her might be “closer” (provided I can define a distance measure on such a thing).

But now an easy one (for me). What was it like to be me, this morning? The facts are there: the sun was shining, and I had a headache. My perceptions of the external world and the internal world created something which Nagel would call “experience”. Right now, the headache is gone and it is nighttime. I can describe my feelings from this morning verbally, but I cannot trick my mind into feeling a headache or my eyes into seeing sunlight.

I do not know what it was like to be me this morning. The gap between the actual moment of experience and the attempt to reproduce it later is much larger than the gap between my experiences, Britney’s experiences, or the bat’s.

PS: Of course, I know what it is like for me to finish writing this blog entry. But wait… it is gone.