
The Use of Inference Engines in Artificial Intelligence Applications

We are living in an exciting era for computing. New techniques in software design enable intelligent systems to deal with problems they have not been specifically programmed to handle.

Advances in artificial intelligence (AI) are increasing the presence of computational cognition in practical areas such as business and education. Systems built from AI techniques, ontologies and inference engines learn what to do as they are exposed to new problems.

In traditional approaches to programming computer systems, software applications are designed to deal with a specific, abstracted problem bounded by a set of known constraints. When new situations that the software must support are discovered, the software must be modified. An example would be a program that identifies Rolex watches. Such a system would fail to identify, and would likely mis-categorize, a counterfeit model that it had not been explicitly programmed to recognize.

Intelligent computer-based applications that drive decision-making processes generally fall into two primary categories: forward chaining and backward chaining.

Forward chaining starts with an initial set of facts and applies rules to work toward a goal. Sample applications of this technique include finding a way through an unfamiliar maze or playing a game.
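As a rough sketch, forward chaining can be illustrated in a few lines of Python. The rules and facts here (animal attributes such as `has_fur`) are hypothetical, chosen only to show the fire-until-fixed-point loop:

```python
# Hypothetical rule base: each rule maps a set of premise facts
# to a single new fact it can derive.
RULES = [
    ({"has_fur", "says_woof"}, "is_dog"),
    ({"is_dog"}, "is_mammal"),
    ({"is_mammal"}, "is_animal"),
]

def forward_chain(facts):
    """Repeatedly fire any rule whose premises are all known,
    adding its conclusion, until no new facts appear."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in RULES:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

# Starting from two observed facts, the engine derives
# is_dog, is_mammal and is_animal in turn.
derived = forward_chain({"has_fur", "says_woof"})
```

Real inference engines add conflict-resolution strategies and efficient matching (e.g. the Rete algorithm), but the core idea is this same data-driven loop.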

In backward chaining, the goal is known but the path to it from the starting point is not, and the system must discover and describe the route to the end point. These systems can answer “how do I ...” types of questions, such as finding the best route to a known destination or confirming that an object has been, or would be, classified correctly in a group by testing the object’s attributes against those of the group.
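A minimal backward-chaining sketch, again with hypothetical rules: instead of deriving everything reachable from the facts, the engine starts at the goal and recursively tries to prove each premise that would establish it.

```python
# Hypothetical rule base: each goal maps to a list of alternative
# premise lists, any one of which is sufficient to prove it.
RULES = {
    "is_animal": [["is_mammal"]],
    "is_mammal": [["is_dog"], ["is_dolphin"]],
    "is_dog": [["has_fur", "says_woof"]],
}

def backward_chain(goal, facts):
    """Return True if `goal` is a known fact or can be proved by
    recursively proving all premises of some rule for it."""
    if goal in facts:
        return True
    for premises in RULES.get(goal, []):
        if all(backward_chain(p, facts) for p in premises):
            return True
    return False

# Proving the goal top-down: is_animal <- is_mammal <- is_dog <- facts.
proved = backward_chain("is_animal", {"has_fur", "says_woof"})
```

This goal-directed search is the strategy behind Prolog-style query resolution; production systems typically add memoization and cycle detection that this sketch omits.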

Both approaches make use of an inference engine acting upon a knowledge base, also called an ontology. The engine is the component of the system that applies the rules to test unconfirmed hypotheses against the knowledge in the ontology.

Inference engines are prone to shortcomings stemming from ambiguities and other descriptive imperfections that even humans struggle with. Assigning objects to categories that contain exceptions and contradictions can cause problems. For example, defining a fish as having fins and living in the water works fine until the system is presented with a dolphin to categorize.
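The dolphin example can be made concrete. The attribute names below (`has_fins`, `breathes_air`) are invented for illustration; the point is that a category rule works until an exceptional case arrives, and handling it requires explicitly encoding the exception:

```python
def is_fish_naive(animal):
    """Naive category rule: fins + lives in water => fish."""
    return animal["has_fins"] and animal["lives_in_water"]

def is_fish(animal):
    """Refined rule with an explicit exception for
    air-breathing aquatic mammals."""
    return (animal["has_fins"]
            and animal["lives_in_water"]
            and not animal["breathes_air"])

dolphin = {"has_fins": True, "lives_in_water": True, "breathes_air": True}
trout = {"has_fins": True, "lives_in_water": True, "breathes_air": False}

# The naive rule mis-categorizes the dolphin as a fish;
# the refined rule does not.
```

Each such exception must be anticipated and coded by hand, which is exactly the brittleness the paragraph above describes.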

Other issues with inference methods arise from drawing logical, derivative assumptions from a set of facts. For instance, “the wind is blowing over the meadow; therefore the grass blades, dust and tree leaves are being blown” naturally follows in human common-sense reasoning, but such generalizations create difficulties for computer models that have not been explicitly programmed to make them.

Enterra’s Rule-Based Inference System has advanced technology that addresses these challenges. As a result, it more closely mimics human reasoning than traditional inference-based approaches.

Michel Smith is the author of this article. For further detail about inference engines, please visit the website.