For computer scientists, concepts are expressed as labels for data. Historically, the need for ontology alignment arose out of the need to integrate heterogeneous databases, developed independently and thus each having their own data vocabulary. In the Semantic Web context, where many actors provide their own ontologies, ontology matching has taken a critical place in helping heterogeneous resources interoperate. Ontology alignment tools find classes of data that are semantically equivalent, for example, "truck" and "lorry". The classes are not necessarily logically identical. According to Euzenat and Shvaiko (2007), there are three major dimensions for similarity: syntactic, external, and semantic. Coincidentally, these roughly correspond to the dimensions identified by cognitive scientists below. A number of tools and frameworks have been developed for aligning ontologies, some inspired by cognitive science and some developed independently.

Ontology alignment tools have generally been developed to operate on database schemas, XML schemas, taxonomies, formal languages, entity-relationship models, dictionaries, and other label frameworks. These are usually converted to a graph representation before being matched. Since the emergence of the Semantic Web, such graphs can be represented in the Resource Description Framework family of languages by triples of the form ⟨subject, predicate, object⟩, as illustrated in the Notation 3 syntax. In this context, aligning ontologies is sometimes referred to as "ontology matching".

The problem of ontology alignment has recently been tackled by trying to compute the matching first and then the mapping (based on the matching) automatically. Systems such as DSSim, X-SOM, and COMA++ currently achieve very high precision and recall. The Ontology Alignment Evaluation Initiative aims to evaluate, compare, and improve the different approaches.
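As a toy illustration of how the syntactic and external similarity dimensions can surface a correspondence such as "truck"/"lorry", the following Python sketch matches labels from two small vocabularies. The hard-coded synonym table is an illustrative stand-in for a real external resource (e.g., a thesaurus), and the threshold is arbitrary; real alignment tools are far more sophisticated:

```python
from difflib import SequenceMatcher

# Two toy "ontologies": class labels from independently developed vocabularies.
ontology_a = ["Truck", "PassengerCar", "Driver"]
ontology_b = ["Lorry", "Car", "MotorVehicle"]

# Illustrative stand-in for an external resource such as a thesaurus;
# pairs are stored in sorted order so lookups are symmetric.
SYNONYMS = {("lorry", "truck"), ("automobile", "car")}

def syntactic_similarity(a: str, b: str) -> float:
    """String-based (syntactic) similarity in [0, 1]."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def external_similarity(a: str, b: str) -> float:
    """External similarity: 1.0 when a shared resource lists the labels as synonyms."""
    pair = tuple(sorted((a.lower(), b.lower())))
    return 1.0 if pair in SYNONYMS else 0.0

def match(terms_a, terms_b, threshold=0.6):
    """Return candidate correspondences whose best similarity clears the threshold."""
    results = []
    for a in terms_a:
        for b in terms_b:
            s = max(syntactic_similarity(a, b), external_similarity(a, b))
            if s >= threshold:
                results.append((a, b, round(s, 2)))
    return results

print(match(ontology_a, ontology_b))  # → [('Truck', 'Lorry', 1.0)]
```

Note that the syntactic dimension alone would miss "Truck"/"Lorry" entirely (the strings share almost no characters); it is the external resource that recovers the correspondence, which is why practical matchers combine several similarity measures.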
==Formal definition==

Given two ontologies i = \langle C_{i}, R_{i}, I_{i}, T_{i}, V_{i}\rangle and j = \langle C_{j}, R_{j}, I_{j}, T_{j}, V_{j}\rangle, where C is the set of classes, R is the set of relations, I is the set of individuals, T is the set of data types, and V is the set of values, we can define different types of (inter-ontology) relationships. Such relationships are collectively called alignments and can be categorized along different dimensions:

• similarity vs. logic: the difference between matchings (which predicate about the similarity of ontology terms) and mappings (logical axioms, typically expressing logical equivalence or inclusion among ontology terms)
• atomic vs. complex: whether the alignments considered are one-to-one or may involve more terms in a query-like formulation (e.g., LAV/GAV mapping)
• homogeneous vs. heterogeneous: whether alignments relate only terms of the same type (e.g., classes only to classes, individuals only to individuals) or heterogeneous relationships are allowed
• type of alignment: the semantics associated with an alignment; it can be subsumption, equivalence, disjointness, part-of, or any user-specified relationship

Subsumption, atomic, homogeneous alignments are the building blocks for richer alignments, and they have a well-defined semantics in every description logic. Let us now introduce ontology matching and mapping more formally. An atomic homogeneous
matching is an alignment that carries a similarity degree s \in [0,1], describing the similarity of two terms of the input ontologies i and j. A matching can be either computed, by means of heuristic algorithms, or inferred from other matchings. Formally, a matching is a quadruple m = \langle id, t_{i}, t_{j}, s\rangle, where t_{i} and t_{j} are homogeneous ontology terms, id is a unique identifier of the matching, and s is the similarity degree of m. A (subsumption, homogeneous, atomic) mapping is defined as a pair \mu = \langle t_{i}, t_{j}\rangle, where t_{i} and t_{j} are homogeneous ontology terms.

==Cognitive science==