To facilitate the
interoperability between
lexical resources,
linguistic annotations and annotation tools, and to handle linguistic categories systematically across different theoretical frameworks, a number of inventories of linguistic categories have been developed and are in use; examples are given below. The practical objective of such inventories is to enable
quantitative evaluation (for language-specific inventories), to train NLP tools, or to facilitate cross-linguistic evaluation, querying or annotation of language data. At a theoretical level, the existence of universal categories in human language has been postulated, e.g., in
Universal grammar, but also
heavily criticized.
== Part-of-Speech tagsets ==
Schools commonly teach that there are 9
parts of speech in English:
noun,
verb,
article,
adjective,
preposition,
pronoun,
adverb,
conjunction, and
interjection. However, there are clearly many more categories and sub-categories. For nouns, the plural, possessive, and singular forms can be distinguished. In many languages words are also marked for their
case (role as subject, object, etc.),
grammatical gender, and so on; while verbs are marked for
tense,
aspect, and other things. In some tagging systems, different
inflections of the same root word will get different parts of speech, resulting in a large number of tags. For example, NN for singular common nouns, NNS for plural common nouns, NP for singular proper nouns (see the
POS tags used in the Brown Corpus). Other tagging systems use a smaller number of tags and ignore fine differences or model them as
features somewhat independent from part-of-speech. In part-of-speech tagging by computer, it is typical to distinguish from 50 to 150 separate parts of speech for English. POS tagging work has been done in a variety of languages, and the set of POS tags used varies greatly with language. Tags usually are designed to include overt morphological distinctions, although this leads to inconsistencies such as case-marking for pronouns but not nouns in English, and much larger cross-language differences. The tag sets for heavily inflected languages such as
Greek and
Latin can be very large; tagging
words in
agglutinative languages such as
Inuit languages may be virtually impossible. Work on
stochastic methods for tagging
Koine Greek (DeRose 1990) has used over 1,000 parts of speech and found that about as many words were
ambiguous in that language as in English. A morphosyntactic descriptor in the case of morphologically rich languages is commonly expressed using very short mnemonics, such as
ncmsan for
category = noun, type = common, gender = masculine, number = singular, case = accusative, animate = no. The most popular tag set for POS tagging of American English is probably the Penn tag set, developed in the Penn Treebank project.
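Such positional descriptors can be decoded mechanically, one attribute per character. The following is a minimal sketch in Python, assuming the attribute order of the ncmsan example above; actual MULTEXT-East-style tagsets define the order and value codes per language, so the tables here are illustrative, not a real specification.

```python
# Sketch: decoding a positional morphosyntactic descriptor such as "ncmsan".
# The attribute order and the value codes below follow the ncmsan example in
# the text; real tagsets define these tables per language.

ATTRIBUTES = ["category", "type", "gender", "number", "case", "animate"]

VALUES = {
    "category": {"n": "noun", "v": "verb", "a": "adjective"},
    "type":     {"c": "common", "p": "proper"},
    "gender":   {"m": "masculine", "f": "feminine", "n": "neuter"},
    "number":   {"s": "singular", "p": "plural"},
    "case":     {"n": "nominative", "a": "accusative", "d": "dative"},
    "animate":  {"y": "yes", "n": "no"},
}

def decode(msd: str) -> dict:
    """Map each position of the mnemonic to its attribute-value pair."""
    return {attr: VALUES[attr][code] for attr, code in zip(ATTRIBUTES, msd)}

print(decode("ncmsan"))
# {'category': 'noun', 'type': 'common', 'gender': 'masculine',
#  'number': 'singular', 'case': 'accusative', 'animate': 'no'}
```

Note that the same character can encode different values at different positions (here "n" means noun, neuter, nominative or animate = no depending on where it occurs), which is why the decoding must be positional.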
== Multilingual annotation schemes ==
For Western European languages, cross-linguistically applicable annotation schemes for parts-of-speech, morphosyntax and syntax have been developed with the
EAGLES Guidelines. The "Expert Advisory Group on Language Engineering Standards" (EAGLES) was an initiative of the
European Commission that ran within the DG XIII
Linguistic Research and Engineering programme from 1994 to 1998, coordinated by Consorzio Pisa Ricerche, Pisa, Italy. The EAGLES guidelines provide guidance for
markup to be used with
text corpora, particularly for identifying features relevant in
computational linguistics and
lexicography. Numerous companies, research centres, universities and professional bodies across the European Union collaborated to produce the EAGLES Guidelines, which set out recommendations for
de facto standards and rules of best practice for:
• Large-scale language resources (such as text corpora, computational lexicons and speech corpora);
• Means of manipulating such knowledge, via computational linguistic formalisms, markup languages and various software tools;
• Means of assessing and evaluating resources, tools and products.
The EAGLES guidelines have inspired subsequent work in other regions as well, e.g., Eastern Europe. A generation later, a similar effort was initiated by the research community under the umbrella of
Universal Dependencies. Petrov et al. proposed a "universal", but highly reductionist, tag set of only 12 categories: for example, it has no subtypes of nouns, verbs or punctuation, and it does not distinguish "to" as an infinitive marker from "to" as a preposition (an ambiguity that is hardly universal). Subsequently, this was complemented with cross-lingual specifications for dependency syntax (Stanford Dependencies) and for morphosyntax (the Interset interlingua, partially building on the Multext-East/EAGLES tradition) in the context of the
Universal Dependencies (UD), an international cooperative project to create
treebanks of the world's languages with cross-linguistically applicable ("universal") annotations for parts of speech, dependency syntax, and (optionally) morphosyntactic (morphological) features. Core applications are automated
text processing in the field of
natural language processing (NLP) and research into natural language syntax and grammar, especially within
linguistic typology. The annotation scheme has its roots in the three related projects just mentioned. The UD annotation scheme uses a representation in the form of dependency trees, as opposed to phrase structure trees. As of February 2019, just over 100 treebanks of more than 70 languages were available in the UD inventory. The project's primary aim is to achieve cross-linguistic consistency of annotation. However, language-specific extensions are permitted for morphological features (individual languages or resources can introduce additional features). In a more restricted form, dependency relations can be extended with a secondary label that accompanies the UD label, e.g.,
aux:pass for an auxiliary (UD
aux) used to mark passive voice. The Universal Dependencies have inspired similar efforts for the areas of inflectional morphology,
frame semantics and
coreference. For
phrase structure syntax, a comparable effort does not seem to exist, but the specifications of the
Penn Treebank have been applied to (and extended for) a broad range of languages, e.g., Icelandic, Old English, Middle English, Middle Low German, Early Modern High German, Yiddish, Portuguese, Japanese, Arabic and Chinese.
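Returning to UD: treebanks in the UD inventory are distributed as plain-text CoNLL-U files, with one tab-separated line of ten columns per token. The sketch below splits a dependency relation such as aux:pass into its universal label and optional language-specific subtype; the example token line is hypothetical, but follows the ten-column CoNLL-U layout.

```python
# Sketch: a UD dependency relation like "aux:pass" consists of a universal
# relation ("aux") and an optional language-specific subtype ("pass", for
# passive voice), joined by a colon. The token line below is a hypothetical
# example in the 10-column CoNLL-U layout used by UD treebanks.

def split_deprel(deprel: str) -> tuple:
    """Return (universal_relation, subtype_or_None)."""
    universal, _, subtype = deprel.partition(":")
    return universal, subtype or None

CONLLU_COLUMNS = ["ID", "FORM", "LEMMA", "UPOS", "XPOS",
                  "FEATS", "HEAD", "DEPREL", "DEPS", "MISC"]

def parse_token(line: str) -> dict:
    """Parse one tab-separated CoNLL-U token line into a column dict."""
    return dict(zip(CONLLU_COLUMNS, line.rstrip("\n").split("\t")))

token = parse_token("3\twas\tbe\tAUX\tVBD\tMood=Ind|Tense=Past\t4\taux:pass\t_\t_")
print(split_deprel(token["DEPREL"]))   # ('aux', 'pass')
```

Because the universal label always precedes the colon, a consumer that only understands the core UD inventory can simply discard the subtype and still obtain a valid cross-linguistic analysis.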
== Conventions for interlinear glosses ==
In
linguistics, an interlinear gloss is a
gloss (series of brief explanations, such as definitions or pronunciations) placed between lines (
inter- +
linear), such as between a line of original text and its
translation into another
language. When glossed, each line of the original text acquires one or more lines of transcription known as an
interlinear text or
interlinear glossed text (
IGT)—
interlinear for short. Such glosses help the reader follow the relationship between the
source text and its translation, and the structure of the original language. There is no standard inventory for glosses, but common labels are collected in the Leipzig Glossing Rules. Wikipedia also provides a
List of glossing abbreviations that draws on this and other sources.
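The core mechanics of such glosses are word-by-word vertical alignment, hyphens separating morphemes, and capitalized category labels. A minimal sketch of the alignment, with an illustrative German example (the words and glosses are this sketch's own, not taken from the Leipzig Glossing Rules themselves):

```python
# Sketch: printing a word-aligned interlinear gloss in the spirit of the
# Leipzig Glossing Rules: one column per source word, gloss aligned beneath,
# free translation in quotes on the last line.

def interlinear(words, glosses, translation):
    # Pad each column to the wider of word and gloss so columns line up.
    widths = [max(len(w), len(g)) for w, g in zip(words, glosses)]
    line1 = "  ".join(w.ljust(n) for w, n in zip(words, widths))
    line2 = "  ".join(g.ljust(n) for g, n in zip(glosses, widths))
    return f"{line1}\n{line2}\n'{translation}'"

print(interlinear(
    ["Hund-e", "bell-en"],
    ["dog-PL", "bark-PL"],   # hyphens separate morphemes; caps mark categories
    "Dogs bark."))
# Hund-e  bell-en
# dog-PL  bark-PL
# 'Dogs bark.'
```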
== General Ontology for Linguistic Description (GOLD) ==
GOLD ("General Ontology for Linguistic Description") is an
ontology for
descriptive linguistics. It gives a formalized account of the most basic categories and relations used in the scientific description of human language, e.g., as a formalization of interlinear glosses. GOLD was first introduced by Farrar and Langendoen (2003). Originally, it was envisioned as a solution to the problem of resolving disparate markup schemes for linguistic data, in particular data from
endangered languages. However, GOLD is much more general and can be applied to all languages. In this function, GOLD overlaps with the
ISO 12620 Data Category Registry (ISOcat); it is, however, more stringently structured. GOLD was maintained by the
LINGUIST List and others from 2007 to 2010. The RELISH project created a mirror of the 2010 edition of GOLD as a Data Category Selection within ISOcat. As of 2018, GOLD data remains an important terminology hub in the context of the
Linguistic Linked Open Data cloud, but as it is not actively maintained anymore, its function is increasingly replaced by
OLiA (for linguistic annotation, building on GOLD and ISOcat) and lexinfo.net (for dictionary metadata, building on ISOcat).
== ISO 12620 (ISO TC37 Data Category Registry, ISOcat) ==
ISO 12620 is a
standard from
ISO/TC 37 that defines a
Data Category Registry, a registry for registering linguistic terms used in various fields of
translation,
computational linguistics and
natural language processing and defining mappings both between different terms and between different systems in which the same terms are used. An earlier implementation of this standard, ISOcat, provides persistent identifiers and
URIs for linguistic categories, including the inventory of the GOLD ontology (see above). The goal of the registry is that new systems can reuse existing terminology, or at least be easily mapped to existing terminology, to aid
interoperability. The standard is used by other standards such as
Lexical Markup Framework (ISO 24613:2008), and a number of terminologies have been added to the registry, including the Eagles guidelines, the
National Corpus of Polish, and the TermBase eXchange format from the
Localization Industry Standards Association. However, the 2019 edition, ISO 12620:2019, no longer provides a registry of terms for language technology and is now restricted to terminology resources, hence the revised title "Management of terminology resources – Data category specifications". Accordingly, ISOcat is no longer actively developed. Successor systems, the CLARIN Concept Registry and DatCatInfo, were emerging. For linguistic categories relevant to
lexical resources, the
lexinfo vocabulary represents an established community standard, in particular in connection with the
OntoLex vocabulary and
machine-readable dictionaries in the context of
Linguistic Linked Open Data technologies. Just as the OntoLex vocabulary builds on the Lexical Markup Framework (LMF), lexinfo builds on (the LMF section of) ISOcat. Unlike ISOcat, however, lexinfo is actively maintained and currently (May 2020) being extended in a community effort.
== Ontologies of Linguistic Annotation (OLiA) ==
Similar in spirit to GOLD, the Ontologies of Linguistic Annotation (OLiA) provide a reference inventory of linguistic categories for syntactic, morphological and semantic phenomena relevant for
linguistic annotation and
linguistic corpora in the form of an
ontology. In addition, they also provide machine-readable annotation schemes for more than 100 languages, linked with the OLiA reference model. The OLiA ontologies represent a major hub of annotation terminology in the
(Linguistic) Linked Open Data cloud, with applications for search, retrieval and machine learning over heterogeneously annotated language resources.