The explicit use of semantics allows explanation and debugging at the knowledge level. To make a system usable by people, the system cannot just give an answer and expect the user to believe it. Consider the case of a system advising doctors who are legally responsible for the treatment that they carry out based on the diagnosis. The doctors must be convinced that the diagnosis is appropriate. The system must be able to justify that its answer is correct. The same mechanism can be used to explain how the system found a result and to debug the knowledge base; a good explanation should convince someone there are no bugs.
Knowledge-level debugging is the process of finding errors in knowledge bases with reference only to what the symbols mean and what is true in the world, not the reasoning steps.
Three types of non-syntactic errors arise in rule-based systems:
An incorrect answer is produced; that is, some atom that is false in the intended interpretation was derived.
An answer is not produced; that is, the proof of some atom that is true in the intended interpretation fails when it should succeed.
The program gets into an infinite loop. For the top-down proof procedure, these can be handled in a similar way to cycle pruning, except that only the selected atom needs to be checked for a cycle, not the whole answer clause, as sketched below. The bottom-up proof procedure never gets into an infinite loop.
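For concreteness, the following is a minimal sketch of a top-down prover for propositional definite clauses with this kind of cycle pruning. The Python encoding is an assumption made here, not something taken from the book's figures: a knowledge base is represented as a dict mapping each atom to the list of bodies of its clauses, where each body is a list of atoms and a fact has an empty body.

def prove(kb, goal, ancestors=frozenset()):
    """Top-down proof of a propositional atom from a definite-clause KB.
    Cycle pruning: a selected atom that already occurs among its ancestors
    on the current branch is pruned; the rest of the answer clause is not checked."""
    if goal in ancestors:
        return False          # a cycle: this branch cannot yield a proof
    return any(all(prove(kb, a, ancestors | {goal}) for a in body)
               for body in kb.get(goal, []))

For example, prove({'a': [['b']], 'b': [['a']]}, 'a') returns False instead of looping.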
Ways to debug the first two of these types of error are examined below.
An incorrect answer, or false-positive error, is an answer that has been proved yet is false in the intended interpretation. An incorrect answer is only produced by a sound proof procedure if a false clause was used in the proof. The aim is to find a false clause from an incorrect answer.
Suppose atom g was proved yet is false in the intended interpretation. There must be a clause g ← a1 ∧ … ∧ ak in the knowledge base that was used to prove g. Either all of the ai are true, in which case the buggy clause has been found, or one of the ai is false. This ai can be debugged in the same way.
This leads to an algorithm, presented in Figure 5.6, to debug false positives. It can find a false clause in a knowledge base when an atom that is false in the intended interpretation is derived. It only requires the person debugging the knowledge base to be able to answer true–false questions.
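As an illustration only, and not Figure 5.6 itself, here is a propositional Python sketch of this procedure, under the same assumed knowledge-base representation as above (a dict mapping each atom to the list of bodies of its clauses). The oracle is_true stands in for the person answering true–false questions about the intended interpretation; all names here are ours.

def consequences(kb):
    """All atoms derivable from a propositional definite-clause KB, bottom-up."""
    derived, changed = set(), True
    while changed:
        changed = False
        for head, bodies in kb.items():
            if head not in derived and \
               any(all(a in derived for a in body) for body in bodies):
                derived.add(head)
                changed = True
    return derived

def debug_false(g, kb, is_true):
    """g was derived from kb but is false in the intended interpretation.
    Returns a clause (head, body) of kb that is false in the intended interpretation."""
    derived = consequences(kb)
    # a clause that could have been used to prove g: its whole body is derivable
    head, body = next((g, b) for b in kb[g] if all(a in derived for a in b))
    for a in body:
        if not is_true(a):                    # a false body atom: debug it instead
            return debug_false(a, kb, is_true)
    return head, body                         # body all true, head false: buggy clause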
Consider Example 5.8, involving the electrical domain, but assume there is a bug in the knowledge base. Suppose that the domain expert or user had inadvertently said that whether w1 is connected to w3 depends on the status of s3 instead of s1 (see Figure 5.2). Thus, the knowledge includes the following incorrect rule:

live_w1 ← live_w3 ∧ up_s3.

instead of the rule with up_s1. All of the other axioms are the same as in Example 5.8. The atom lit_l1 can be derived, which is false in the intended interpretation.
The atom lit_l1 was derived using the following rule:

lit_l1 ← light_l1 ∧ live_l1 ∧ ok_l1.

The atoms light_l1 and ok_l1 are true in the intended interpretation, but live_l1 is false in the intended interpretation. The rule used to derive this atom is

live_l1 ← live_w0.

The atom live_w0 is false in the intended interpretation. It was proved using the clause

live_w0 ← live_w1 ∧ up_s2.

The atom live_w1 is false in the intended interpretation, and was proved using the clause

live_w1 ← live_w3 ∧ up_s3.
Both elements of the body are true in the intended interpretation, so this is a buggy rule.
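The trace above can be reproduced with the debug_false sketch, assuming the following encoding of the fragment of Example 5.8 relevant to lit_l1, together with the buggy rule and the intended interpretation of Figure 5.2. The atom names follow the book; the encoding itself is our assumption.

buggy_kb = {
    # facts (empty bodies)
    'light_l1': [[]], 'ok_l1': [[]], 'ok_cb1': [[]], 'live_outside': [[]],
    'down_s1': [[]], 'up_s2': [[]], 'up_s3': [[]],
    # rules
    'lit_l1':  [['light_l1', 'live_l1', 'ok_l1']],
    'live_l1': [['live_w0']],
    'live_w0': [['live_w1', 'up_s2'], ['live_w2', 'down_s2']],
    'live_w1': [['live_w3', 'up_s3']],      # buggy: should use up_s1
    'live_w2': [['live_w3', 'down_s1']],
    'live_w3': [['live_w5', 'ok_cb1']],
    'live_w5': [['live_outside']],
}

# atoms true in the intended interpretation (s1 down, s2 up, s3 up)
true_atoms = {'light_l1', 'ok_l1', 'ok_cb1', 'live_outside',
              'down_s1', 'up_s2', 'up_s3', 'live_w5', 'live_w3', 'live_w2'}

debug_false('lit_l1', buggy_kb, lambda a: a in true_atoms)
# returns ('live_w1', ['live_w3', 'up_s3']), the buggy clause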
The second type of error occurs when an expected answer is not produced. This manifests itself by a failure when an answer is expected. An atom that is true in the domain, but is not a consequence of the knowledge base, is a false-negative error. The preceding algorithm does not work in this case; there is no proof.
An appropriate answer is not produced only if a definite clause or clauses are missing from the knowledge base. By knowing the intended interpretation of the symbols and by knowing what queries should succeed (i.e., what is true in the intended interpretation), a domain expert can debug a missing answer. Figure 5.7 shows how to debug false negatives. Given g, a true atom for which there is no proof, it returns an atom for which there is a missing clause (or clauses).
It searches the space of plausible proofs until it finds an atom where there is no appropriate clause in the knowledge base.
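As an illustration only, and not Figure 5.7 itself, here is a propositional Python sketch of this procedure. It reuses the knowledge-base representation and the consequences helper from the false-positive sketch above; the names are ours.

def debug_missing(g, kb, is_true):
    """g is true in the intended interpretation but cannot be proved from kb.
    Returns an atom for which a clause is missing."""
    derived = consequences(kb)                  # helper defined in the earlier sketch
    for body in kb.get(g, []):
        if all(is_true(a) for a in body):       # a plausible proof of g
            # g is not derivable, so some atom of this true body is not derivable
            unproved = next(a for a in body if a not in derived)
            return debug_missing(unproved, kb, is_true)
    return g        # no clause for g has a true body: a clause for g is missing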
Suppose that, for the axiomatization of the electrical domain in Example 5.8, the world of Figure 5.2 actually had s2 down. Thus, the knowledge base is missing the definite clause specifying that s2 is down. The axiomatization of Example 5.8 fails to prove lit_l1 when it should succeed. Consider how to find the bug.

There is one clause with lit_l1 in the head:

lit_l1 ← light_l1 ∧ live_l1 ∧ ok_l1.

All of the elements of the body are true. The atoms light_l1 and ok_l1 can both be proved, but live_l1 fails, so the algorithm recursively debugs this atom. There is one rule with live_l1 in the head:

live_l1 ← live_w0.

The atom live_w0 is true in the intended interpretation and cannot be proved. The clauses for live_w0 are

live_w0 ← live_w1 ∧ up_s2.
live_w0 ← live_w2 ∧ down_s2.

The user can determine that the body of the second rule is true. There is a proof for live_w2. There are no clauses for down_s2, so this atom is returned. The correction is to add an appropriate clause for down_s2, by stating it as a fact or providing a rule for it.
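Under the same assumptions, this trace can be reproduced by encoding the corresponding fragment of Example 5.8 (now with the correct rule for live_w1, and with no clause for down_s2) and using the intended interpretation in which s2 is down as the oracle; again, the encoding is ours.

kb = {
    # facts as written for Figure 5.2
    'light_l1': [[]], 'ok_l1': [[]], 'ok_cb1': [[]], 'live_outside': [[]],
    'down_s1': [[]], 'up_s2': [[]], 'up_s3': [[]],
    # rules
    'lit_l1':  [['light_l1', 'live_l1', 'ok_l1']],
    'live_l1': [['live_w0']],
    'live_w0': [['live_w1', 'up_s2'], ['live_w2', 'down_s2']],
    'live_w1': [['live_w3', 'up_s1']],
    'live_w2': [['live_w3', 'down_s1']],
    'live_w3': [['live_w5', 'ok_cb1']],
    'live_w5': [['live_outside']],
}

# atoms true in the intended interpretation in which s2 is actually down
true_atoms = {'light_l1', 'ok_l1', 'ok_cb1', 'live_outside',
              'down_s1', 'down_s2', 'up_s3', 'live_w5', 'live_w3',
              'live_w2', 'live_w0', 'live_l1', 'lit_l1'}

debug_missing('lit_l1', kb, lambda a: a in true_atoms)
# returns 'down_s2': there are no clauses for it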