Abduction is a form of reasoning where assumptions are made to explain observations. For example, if an agent observes that some light is not working, it hypothesizes what is happening in the world to explain why the light is not working. A tutoring agent could try to explain why a student gives some answer in terms of what the student understands and does not understand.
The term abduction was coined by Peirce (1839–1914) to differentiate this type of reasoning from deduction, which involves determining what logically follows from a set of axioms, and induction, which involves inferring general relationships from examples.
In abduction, an agent hypothesizes what may be true about an observed case. An agent determines what implies its observations – what could be true to make the observations true. Observations are trivially implied by contradictions (as a contradiction logically implies everything), so we want to exclude contradictions from our explanation of the observations.
To formalize abduction, we use the language of Horn clauses and assumables. The system is given:
a knowledge base, KB, which is a set of Horn clauses
a set A of atoms, called the assumables, which are the building blocks of hypotheses.
Instead of adding observations to the knowledge base, observations must be explained.
A scenario of ⟨KB, A⟩ is a subset H of A such that KB ∪ H is satisfiable. KB ∪ H is satisfiable if a model exists in which every element of KB and every element of H is true. This happens if no subset of H is a conflict of KB.
An explanation of proposition g from ⟨KB, A⟩ is a scenario that, together with KB, implies g.

That is, an explanation of proposition g is a set H, H ⊆ A, such that

KB ∪ H ⊨ g and
KB ∪ H ⊭ false.
A minimal explanation of g from ⟨KB, A⟩ is an explanation H of g from ⟨KB, A⟩ such that no strict subset of H is also an explanation of g from ⟨KB, A⟩.
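For small propositional knowledge bases, these definitions can be checked directly. The following Python sketch is one illustrative implementation; the representation of a clause as a (head, body) pair, the use of the atom 'false' as the head of integrity constraints, and the function names are our own assumptions, not code from this book:

from itertools import combinations

def consequences(kb, facts):
    """Forward chaining: all atoms implied by the Horn clauses in kb
    together with the given collection of atoms."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for head, body in kb:
            if head not in derived and all(b in derived for b in body):
                derived.add(head)
                changed = True
    return derived

def is_explanation(kb, hypothesis, observations):
    """hypothesis explains observations if KB plus hypothesis implies
    every observed atom and does not imply false (it is a scenario)."""
    derived = consequences(kb, hypothesis)
    return observations <= derived and 'false' not in derived

def minimal_explanations(kb, assumables, observations):
    """Enumerate subsets of the assumables by increasing size, skipping
    supersets of explanations already found, so only minimal ones remain."""
    minimal = []
    for size in range(len(assumables) + 1):
        for subset in combinations(sorted(assumables), size):
            h = set(subset)
            if any(m <= h for m in minimal):
                continue  # h contains a smaller explanation already found
            if is_explanation(kb, h, observations):
                minimal.append(h)
    return minimal

Enumerating subsets by increasing size, and skipping supersets of explanations already found, guarantees that only minimal explanations are returned, though at a cost exponential in the number of assumables.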
Consider the following simplistic knowledge base and assumables for a diagnostic assistant:

bronchitis ← influenza.
bronchitis ← smokes.
coughing ← bronchitis.
wheezing ← bronchitis.
fever ← influenza.
fever ← infection.
sore_throat ← influenza.
false ← smokes ∧ nonsmoker.

assumable smokes, nonsmoker, influenza, infection.

If the agent observes wheezing, there are two minimal explanations:

{influenza} and {smokes}.

These explanations imply bronchitis and coughing.

If wheezing ∧ fever is observed, the minimal explanations are

{influenza} and {smokes, infection}.

If wheezing ∧ nonsmoker was observed, there is one minimal explanation:

{influenza, nonsmoker}.

The other explanation of wheezing is inconsistent with being a non-smoker.
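Encoding this knowledge base as data for the sketch above reproduces the three cases (the element order of the printed sets may vary):

kb = [
    ('bronchitis', ['influenza']),
    ('bronchitis', ['smokes']),
    ('coughing', ['bronchitis']),
    ('wheezing', ['bronchitis']),
    ('fever', ['influenza']),
    ('fever', ['infection']),
    ('sore_throat', ['influenza']),
    ('false', ['smokes', 'nonsmoker']),  # smokes and nonsmoker conflict
]
assumables = {'smokes', 'nonsmoker', 'influenza', 'infection'}

print(minimal_explanations(kb, assumables, {'wheezing'}))
# [{'influenza'}, {'smokes'}]
print(minimal_explanations(kb, assumables, {'wheezing', 'fever'}))
# [{'influenza'}, {'infection', 'smokes'}]
print(minimal_explanations(kb, assumables, {'wheezing', 'nonsmoker'}))
# [{'influenza', 'nonsmoker'}]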
Consider the knowledge base

alarm ← tampering.
alarm ← fire.
smoke ← fire.

assumable tampering, fire.

If alarm is observed, there are two minimal explanations:

{tampering} and {fire}.

If alarm ∧ smoke is observed, there is one minimal explanation:

{fire}.

Notice how, when smoke is observed, there is no need to hypothesize tampering to explain alarm; it has been explained away by fire.
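The same sketch reproduces this explaining-away behavior, again under our illustrative encoding:

kb_alarm = [
    ('alarm', ['tampering']),
    ('alarm', ['fire']),
    ('smoke', ['fire']),
]
print(minimal_explanations(kb_alarm, {'tampering', 'fire'}, {'alarm'}))
# [{'fire'}, {'tampering'}]
print(minimal_explanations(kb_alarm, {'tampering', 'fire'}, {'alarm', 'smoke'}))
# [{'fire'}]   tampering no longer appears; fire explains both atoms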
Determining what is going on inside a system based on observations about the behavior is the problem of diagnosis or recognition. In abductive diagnosis, the agent hypothesizes diseases or malfunctions, as well as that some parts are working normally, to explain the observed symptoms.
This differs from consistency-based diagnosis (CBD) in the following ways:
In CBD, only normal behavior needs to be represented, and the hypotheses are assumptions of normal behavior. In abductive diagnosis, faulty behavior as well as normal behavior needs to be represented, and the assumables need to be for normal behavior and for each fault (or different behavior).
In abductive diagnosis, observations need to be explained. In CBD, observations are added to the knowledge base, and false is proved.
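The contrast can be made concrete with the forward-chaining helper from the earlier sketch: a consistency-based diagnoser adds the observations to the knowledge base as facts and asks which assumption sets derive false, rather than which sets derive the observations. A minimal sketch under the same assumed encoding:

def is_conflict(kb, observations, assumptions):
    """In CBD, observations become facts in the knowledge base; a set of
    (normality) assumptions is a conflict if it then derives false."""
    kb_with_obs = kb + [(o, []) for o in observations]
    return 'false' in consequences(kb_with_obs, assumptions)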
Abductive diagnosis requires more detailed modeling and gives more detailed diagnoses, because the knowledge base has to be able to actually prove the observations from the knowledge base and the assumptions. Abductive diagnosis is also used to diagnose systems in which there is no normal behavior. For example, in a tutoring agent, by observing what a student does, the agent can hypothesize what the student understands and does not understand, which can guide the tutoring agent’s actions.
Abduction can also be used for design, in which the query to be explained is a design goal and the assumables are the building blocks of the designs. The explanation is the design. Consistency means that the design is possible. The implication of the design goal means that the design provably achieves the design goal.
Consider the electrical domain of Figure 5.2. Similar to the representation of the example for consistency-based diagnosis in Example 5.21, we axiomatize what follows from the assumptions of what may be happening in the system. In abductive diagnosis, we must axiomatize what follows both from faults and from normality assumptions. For each atom that could be observed, we axiomatize how it could be produced.
A user could observe that l1 is lit or is dark. We must write rules that axiomatize how the system must be to make these true. Light l1 is lit if it is ok and there is power coming in. The light is dark if it is broken or there is no power. The system can assume l1 is ok or broken, but not both:

lit_l1 ← live_w0 ∧ ok_l1.
dark_l1 ← broken_l1.
dark_l1 ← dead_w0.
false ← ok_l1 ∧ broken_l1.

assumable ok_l1, broken_l1.
You can then write rules for how live_w0 and dead_w0 depend on switch positions, the input to w0, and assumptions of the status of the wire. Observing that some of the lights are lit gives explanations that can account for the observation.
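In the illustrative encoding used earlier, these rules become data. Purely for illustration, the snippet below also treats the wire-status atoms live_w0 and dead_w0 as assumable, whereas the text instead derives them from rules about switches and wires:

kb_elec = [
    ('lit_l1', ['live_w0', 'ok_l1']),
    ('dark_l1', ['broken_l1']),
    ('dark_l1', ['dead_w0']),
    ('false', ['ok_l1', 'broken_l1']),  # cannot be both ok and broken
]
assum_elec = {'ok_l1', 'broken_l1', 'live_w0', 'dead_w0'}

print(minimal_explanations(kb_elec, assum_elec, {'lit_l1'}))
# [{'live_w0', 'ok_l1'}]
print(minimal_explanations(kb_elec, assum_elec, {'dark_l1'}))
# [{'broken_l1'}, {'dead_w0'}]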
Both the bottom-up and top-down implementations for assumption-based reasoning with Horn clauses can be used for abduction. The bottom-up algorithm of Figure 5.9 computes the minimal explanations for each atom; at the end of the repeat loop, C contains the minimal explanations of each atom (as well as potentially some non-minimal explanations). The refinement of pruning dominated explanations can also be used. The top-down algorithm can be used to find the explanations of any g by first generating the conflicts and, using the same code and knowledge base, proving g instead of false. The minimal explanations of g are the minimal sets of assumables collected to prove g such that no subset is a conflict.
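The following Python sketch shows the bottom-up idea under the same assumed encoding; it is our re-implementation of the idea, not the code of Figure 5.9. C holds pairs of an atom and a set of assumables that, together with the knowledge base, imply that atom:

from itertools import product

def bottom_up(kb, assumables):
    """Close C under the clauses, starting from (a, {a}) for each
    assumable a. A new support set for an atom is skipped when some
    recorded set for that atom is a subset of it (dominance pruning)."""
    C = {(a, frozenset([a])) for a in assumables}
    added = True
    while added:
        added = False
        for head, body in kb:
            # every way of picking one recorded support set per body atom
            choices = [[s for atom, s in C if atom == b] for b in body]
            for combo in product(*choices):
                support = frozenset().union(*combo)
                if any(atom == head and s <= support for atom, s in C):
                    continue  # dominated by an existing set for head
                C.add((head, support))
                added = True
    return C

def explanations_of(C, g):
    """Minimal explanations of g: recorded sets for g that contain no
    conflict (no recorded set for false) and no smaller set for g."""
    conflicts = [s for atom, s in C if atom == 'false']
    cands = [s for atom, s in C if atom == g
             and not any(c <= s for c in conflicts)]
    return [s for s in cands if not any(t < s for t in cands)]

As the text notes, the closure may record some non-minimal support sets (a larger set can be added before a smaller one is discovered), so explanations_of filters out dominated sets and sets that contain a conflict afterwards.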