== Shallow Semantic Parsing ==

Shallow semantic parsing is concerned with identifying entities in an utterance and labelling them with the roles they play. It is sometimes known as slot-filling or frame semantic parsing, since its theoretical basis comes from
frame semantics, wherein a word evokes a frame of related concepts and roles. Slot-filling systems are widely used in
virtual assistants in conjunction with intent classifiers, which can be seen as mechanisms for identifying the frame evoked by an utterance. Popular architectures for slot-filling are largely variants of an encoder-decoder model, wherein two
recurrent neural networks (RNNs) are trained jointly to encode an utterance into a vector and to decode that vector into a sequence of slot labels. This type of model is used in the
Amazon Alexa spoken language understanding system. Shallow semantic parsers can parse utterances like "show me flights from Boston to Dallas" by classifying the intent as "list flights" and filling the slots "source" and "destination" with "Boston" and "Dallas", respectively. However, shallow semantic parsing cannot parse arbitrary compositional utterances, like "show me flights from Boston to anywhere that has flights to Juneau". Deep semantic parsing attempts to parse such utterances, typically by converting them to a formal meaning representation language. Nowadays, compositional semantic parsing systems use
Large Language Models to solve artificial compositional generalization tasks such as SCAN.
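To make the jointly trained intent-and-slot setup concrete, the sketch below shows one minimal model in the encoder-decoder family described above, assuming PyTorch. It is not a description of the Alexa system or any deployed product; the vocabulary, label counts, and dimensions are invented for the example.

<syntaxhighlight lang="python">
# A minimal sketch, assuming PyTorch, of a joint intent-classification and
# slot-filling model. Vocabulary, label counts, and dimensions are invented.
import torch
import torch.nn as nn

class JointIntentSlotModel(nn.Module):
    def __init__(self, vocab_size, num_intents, num_slots,
                 emb_dim=64, hid_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        # Bidirectional recurrent encoder over the utterance tokens.
        self.encoder = nn.LSTM(emb_dim, hid_dim,
                               batch_first=True, bidirectional=True)
        self.intent_head = nn.Linear(2 * hid_dim, num_intents)  # per utterance
        self.slot_head = nn.Linear(2 * hid_dim, num_slots)      # per token

    def forward(self, token_ids):
        states, (h, _) = self.encoder(self.embed(token_ids))
        # Final forward and backward hidden states summarize the utterance.
        utterance_vec = torch.cat([h[-2], h[-1]], dim=-1)
        return self.intent_head(utterance_vec), self.slot_head(states)

# Toy usage with a made-up vocabulary and the example utterance.
words = "show me flights from boston to dallas".split()
vocab = {w: i for i, w in enumerate(sorted(set(words)))}
tokens = torch.tensor([[vocab[w] for w in words]])
model = JointIntentSlotModel(len(vocab), num_intents=3, num_slots=5)
intent_logits, slot_logits = model(tokens)
print(intent_logits.shape)  # (1, 3): one intent distribution per utterance
print(slot_logits.shape)    # (1, 7, 5): one slot distribution per token
</syntaxhighlight>

In practice the slot head's per-token predictions are typically decoded as BIO-style labels (e.g. B-source, B-destination, O), and the two heads are trained jointly with cross-entropy losses.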
== Neural Semantic Parsing ==

Semantic parsers play a crucial role in natural language understanding systems because they transform natural language utterances into machine-executable logical structures or programs. A well-established field of study, semantic parsing finds use in voice assistants, question answering, instruction following, and code generation. In the two years since neural approaches became available, many of the presumptions that underpinned semantic parsing have been rethought, leading to a substantial change in the models employed for semantic parsing. Though
Semantic neural network and
Neural Semantic Parsing both deal with
Natural Language Processing (NLP) and semantics, they are not the same. The models and executable formalisms used in semantic parsing research have traditionally been strongly dependent on concepts from formal semantics in linguistics, such as the λ-calculus produced by a CCG parser. However, recent work on neural encoder-decoder semantic parsers has made possible more approachable formalisms, such as conventional programming languages, as well as NMT-style models that are considerably more accessible to a wider NLP audience. This section summarizes contemporary neural approaches to semantic parsing and discusses how they have affected the field's understanding of semantic parsing.
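The NMT-style view can be illustrated with a small encoder-decoder that emits a logical form token by token, exactly as a translation model emits target-language words. The following is a minimal sketch assuming PyTorch; the greedy decoder, vocabulary sizes, and token ids are all illustrative assumptions, not any published system.

<syntaxhighlight lang="python">
# A minimal sketch, assuming PyTorch, of an NMT-style semantic parser that
# "translates" an utterance into meaning-representation tokens.
import torch
import torch.nn as nn

class Seq2SeqParser(nn.Module):
    def __init__(self, src_vocab, tgt_vocab, emb_dim=64, hid_dim=128):
        super().__init__()
        self.src_embed = nn.Embedding(src_vocab, emb_dim)
        self.tgt_embed = nn.Embedding(tgt_vocab, emb_dim)
        self.encoder = nn.GRU(emb_dim, hid_dim, batch_first=True)
        self.decoder = nn.GRU(emb_dim, hid_dim, batch_first=True)
        self.out = nn.Linear(hid_dim, tgt_vocab)  # scores over logical-form tokens

    def greedy_parse(self, src_ids, bos_id, eos_id, max_len=50):
        # Encode the utterance; its final state initializes the decoder.
        _, state = self.encoder(self.src_embed(src_ids))
        tok = torch.tensor([[bos_id]])
        output = []
        for _ in range(max_len):
            dec_out, state = self.decoder(self.tgt_embed(tok), state)
            tok = self.out(dec_out[:, -1]).argmax(dim=-1, keepdim=True)
            if tok.item() == eos_id:
                break
            output.append(tok.item())
        return output  # ids of meaning-representation tokens

# Untrained usage, to show the interface; the output ids are arbitrary here.
parser = Seq2SeqParser(src_vocab=20, tgt_vocab=30)
print(parser.greedy_parse(torch.tensor([[2, 5, 7, 11]]), bos_id=0, eos_id=1))
</syntaxhighlight>

After training on (utterance, logical form) pairs, the same loop emits tokens such as "(", "lambda", and "flight" that together form an executable meaning representation, with no grammar machinery required.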
== Representation languages ==

Early semantic parsers used highly domain-specific meaning representation languages, with later systems using more extensible languages like
Prolog,
lambda calculus, lambda dependency-based compositional semantics (λ-DCS),
SQL,
Python,
Java, the Alexa Meaning Representation Language, semantic graphs, or vector representations.
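To show how these target formalisms differ, the snippet below renders a single question in three of them. Every schema, predicate, and constant name is an illustrative assumption, not drawn from any particular dataset or ontology.

<syntaxhighlight lang="python">
# One question, three of the representation languages listed above.
question = "Which flights go from Boston to Dallas?"

# SQL: executable against a hypothetical `flights` table.
sql = ("SELECT flight_id FROM flights "
       "WHERE origin = 'Boston' AND destination = 'Dallas'")

# Lambda calculus: the set of entities x satisfying three predicates.
lambda_calculus = "λx. flight(x) ∧ from(x, boston) ∧ to(x, dallas)"

# Prolog: a query against hypothetical flight/3 facts.
prolog = "?- flight(Id, boston, dallas)."

for name, form in [("SQL", sql), ("lambda calculus", lambda_calculus),
                   ("Prolog", prolog)]:
    print(f"{name}: {form}")
</syntaxhighlight>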
== Models ==

Most modern deep semantic parsing models are either based on defining a
formal grammar for a
chart parser or on using RNNs to translate directly from natural language into a meaning representation language. Examples of systems built on formal grammars are the Cornell Semantic Parsing Framework and Stanford University's Semantic Parsing with Execution (SEMPRE).

== Datasets ==