Sign languages have capability and complexity equal to spoken languages; their study as part of the field of
linguistics has demonstrated that they exhibit the fundamental properties that exist in all languages. Such fundamental properties include
duality of patterning and
recursion. Duality of patterning means that languages are composed of smaller, meaningless units which can be combined into larger units with meaning (see below). The term recursion means that languages exhibit grammatical rules and the output of such a rule can be the input of the same rule. It is, for example, possible in sign languages to create
subordinate clauses and a subordinate clause may contain another subordinate clause. Sign languages are not
mime—in other words, signs are conventional, often arbitrary and do not necessarily have a visual relationship to their referent, much as most spoken language is not
onomatopoeic. While
iconicity is more systematic and widespread in sign languages than in spoken ones, the difference is not categorical. The visual modality allows the human preference for close connections between form and meaning, largely suppressed in spoken languages, to be more fully expressed. The elementary, meaningless units of sign languages were initially called cheremes, from the Greek word for hand, by analogy to the phonemes, from Greek for voice, of spoken languages. Now they are sometimes called phonemes when describing sign languages too, since the function is essentially the same, but they are more commonly discussed in terms of "features". More generally, both sign and spoken languages share the
characteristics that linguists have found in all natural human languages, such as transitoriness,
semanticity,
arbitrariness,
productivity, and
cultural transmission. Common linguistic features of many sign languages are the occurrence of
classifier constructions, a high degree of
inflection by means of changes of movement, and a
topic-comment syntax. More than spoken languages, sign languages can convey meaning by simultaneous means, e.g. by the use of
space, two manual articulators, and the signer's face and body. Though there is still much discussion on the topic of iconicity in sign languages, classifiers are generally considered to be highly iconic, as these complex constructions "function as predicates that may express any or all of the following: motion, position, stative-descriptive, or handling information". The term classifier is not used by everyone working on these constructions. Across the field of sign language linguistics the same constructions are also referred to with other terms such as depictive signs. Today, linguists study sign languages as true languages within the field of linguistics. However, the category "sign languages" was not added to the
Linguistic Bibliography/Bibliographie Linguistique until the 1988 volume, when it appeared with 39 entries.
== Relationships with spoken languages ==
There is a common misconception that sign languages are
spoken language expressed in signs, or that they were invented by hearing people. Similarities in
language processing in the brain between signed and spoken languages further perpetuated this misconception. Hearing teachers in deaf schools, such as Charles-Michel de l'Épée or Thomas Hopkins Gallaudet, are often incorrectly referred to as "inventors" of sign language. Instead, sign languages, like all natural languages, are developed by the people who use them, in this case, deaf people, who may have little or no knowledge of any spoken language. As a sign language develops, it sometimes borrows elements from spoken languages, just as all languages borrow from other languages that they are in contact with. Sign languages vary in how much they borrow from spoken languages. In many sign languages, a manual alphabet ("fingerspelling") may be used in signed communication to borrow a word from a spoken language. This is most commonly used for proper names of people and places; it is also used in some languages for concepts for which no sign is available at that moment, particularly if the people involved are to some extent bilingual in the spoken language. Fingerspelling can sometimes be a source of new signs, such as initialized signs, in which the handshape represents the first letter of a spoken word with the same meaning.
On the whole, though, sign languages are independent of spoken languages and follow their own paths of development. For example,
British Sign Language (BSL) and
American Sign Language (ASL) are quite different and mutually unintelligible, even though the hearing people of the United Kingdom and the United States share the same spoken language. The grammars of sign languages do not usually resemble those of spoken languages used in the same geographical area; in fact, in terms of syntax, ASL shares more with spoken
Japanese than it does with English. Similarly, countries which use a single spoken language throughout may have two or more sign languages, or an area that contains more than one spoken language might use only one sign language.
South Africa, which has 11 official spoken languages and a similar number of other widely used spoken languages, is a good example of this. It has only one sign language with two variants due to its history of having two major educational institutions for the deaf which have served different geographic areas of the country.
== Spatial grammar and simultaneity ==
Sign languages exploit the unique features of the visual medium (sight), but may also exploit tactile features (
tactile sign languages). Spoken language is by and large linear; only one sound can be made or received at a time. Sign language, on the other hand, is visual and, hence, can use simultaneous expression, although this is limited articulatorily and linguistically. Visual perception allows processing of simultaneous information. One way in which many sign languages take advantage of the spatial nature of the language is through the use of classifiers.
Classifiers allow a signer to spatially show a referent's type, size, shape, movement, or extent. The possible simultaneity of sign languages in contrast to spoken languages is sometimes exaggerated. The use of two manual articulators is subject to motor constraints, resulting in a large extent of symmetry or signing with one articulator only. Further, sign languages, just like spoken languages, depend on linear sequencing of signs to form sentences; the greater use of simultaneity is mostly seen in the
morphology (internal structure of individual signs).
== Non-manual elements ==
Sign languages convey much of their
prosody through non-manual elements. Postures or movements of the body, head, eyebrows, eyes, cheeks, and mouth are used in various combinations to show several categories of information, including
lexical distinction,
grammatical structure,
adjectival or
adverbial content, and
discourse functions. At the lexical level, signs can be lexically specified for non-manual elements in addition to the manual articulation. For instance, facial expressions may accompany verbs of emotion, as in the sign for
angry in
Czech Sign Language. Non-manual elements may also be lexically contrastive. For example, in ASL (American Sign Language), facial components distinguish some signs from other signs. An example is the sign translated as
not yet, which requires that the tongue touch the lower lip and that the head rotate from side to side, in addition to the manual part of the sign. Without these features the sign would be interpreted as
late.
Mouthings, which are (parts of) spoken words accompanying lexical signs, can also be contrastive, as in the manually identical signs for
doctor and
battery in
Sign Language of the Netherlands. While the content of a signed sentence is produced manually, many grammatical functions are produced non-manually (i.e., with the face and the torso). Such functions include questions, negation, relative clauses and topicalization. ASL and BSL use similar non-manual marking for yes/no questions, for example. They are shown through raised eyebrows and a forward head tilt. Some adjectival and adverbial information is conveyed through non-manual elements, but what these elements are varies from language to language. For instance, in ASL a slightly open mouth with the tongue relaxed and visible in the corner of the mouth means "carelessly", but a similar non-manual in BSL means "boring" or "unpleasant".
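The contrast described above, in which two signs share the same manual form and differ only in non-manual features, can be pictured as a lexicon keyed on both kinds of parameters. The following is a minimal, hypothetical sketch in Python; the parameter names and values are invented for illustration and are not transcriptions of the actual ASL signs:
<syntaxhighlight lang="python">
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Sign:
    """A toy lexical entry: manual parameters plus a bundle of non-manual features."""
    handshape: str
    location: str
    movement: str
    nonmanual: frozenset = field(default_factory=frozenset)

# Hypothetical entries: the manual parameters are identical, so only the
# non-manual bundle separates the two meanings (cf. ASL "late" vs. "not yet").
LATE = Sign("open-hand", "neutral-space", "downward-flick")
NOT_YET = Sign("open-hand", "neutral-space", "downward-flick",
               frozenset({"head-shake", "tongue-on-lower-lip"}))

LEXICON = {LATE: "late", NOT_YET: "not yet"}

def gloss(sign: Sign) -> str:
    """Look up a toy English gloss; identical manual forms resolve differently."""
    return LEXICON.get(sign, "unknown")

print(gloss(LATE))     # -> late
print(gloss(NOT_YET))  # -> not yet
</syntaxhighlight>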
== Iconicity ==
Iconicity is similarity or analogy between the form of a sign (linguistic or otherwise) and its meaning, as opposed to
arbitrariness. The first studies on iconicity in ASL were published in the late 1970s and early 1980s. Many early sign language linguists rejected the notion that iconicity was an important aspect of sign languages, considering most perceived iconicity to be extralinguistic. Though it never disappears from a particular sign language, iconicity is gradually weakened as forms of sign languages become more customary and are subsequently grammaticized. As a form becomes more conventional, it becomes disseminated in a methodical way phonologically to the rest of the sign language community.
Nancy Frishberg concluded that though originally present in many signs, iconicity is degraded over time through the application of natural grammatical processes. In his study, Brown found that when a group of six hearing children were taught signs that had high levels of iconic mapping they were significantly more likely to recall the signs in a later memory task than another group of six children that were taught signs that had little or no iconic properties. In contrast to Brown, linguists
Elissa Newport and Richard Meier found that iconicity "appears to have virtually no impact on the acquisition of American Sign Language". A central task for the pioneers of sign language linguistics was trying to prove that ASL was a real language and not merely a collection of gestures or "English on the hands". One of the prevailing beliefs at this time was that "real languages" must consist of an arbitrary relationship between form and meaning. The visual nature of sign language simply allows for a greater degree of iconicity compared to spoken languages, as most real-world objects can be described by a prototypical shape (e.g., a table usually has a flat surface), but most real-world objects do not make prototypical sounds that can be mimicked by spoken languages (e.g., tables do not make prototypical sounds). However, sign languages are not fully iconic. On the one hand, there are also many arbitrary signs in sign languages and, on the other hand, the grammar of a sign language puts limits on the degree of iconicity: all known sign languages, for example, express lexical concepts via manual signs. In a truly iconic language one would expect a concept like smiling to be expressed by mimicking a smile (i.e., by performing a smiling face). No known sign language, however, expresses the concept of smiling with a smiling face; all use a manual sign. The
cognitive linguistics perspective rejects a more traditional definition of iconicity as a relationship between linguistic form and a concrete, real-world referent. Rather it is a set of selected correspondences between the form and meaning of a sign. In this view, iconicity is grounded in a language user's mental representation ("
construal" in
cognitive grammar). It is defined as a fully grammatical and central aspect of a sign language rather than a peripheral phenomenon.
Village sign languages arise in small communities with a high incidence of deafness. The most famous of these is probably the extinct Martha's Vineyard Sign Language of the U.S., but there are also numerous village sign languages scattered throughout Africa, Asia, and America.
Deaf-community sign languages, on the other hand, arise where deaf people come together to form their own communities. These include school sign languages, such as
Nicaraguan Sign Language, which develop in the student bodies of deaf schools which do not use sign as a language of instruction, as well as community languages such as
Bamako Sign Language, which arise where generally uneducated deaf people congregate in urban centers for employment. At first, Deaf-community sign languages are not generally known by the hearing population, in many cases not even by close family members. However, they may grow, in some cases becoming a language of instruction and receiving official recognition, as in the case of ASL. Both contrast with
speech-taboo languages such as the various
Aboriginal Australian sign languages, which are developed by the hearing community and only used secondarily by the deaf. It is doubtful whether most of these are languages in their own right, rather than manual codes of spoken languages, though a few such as
Yolngu Sign Language are independent of any particular spoken language. Hearing people may also develop sign to communicate with users of other languages, as in
Plains Indian Sign Language; this was a contact signing system or
pidgin that was evidently not used by deaf people in the Plains nations, though it presumably influenced home sign.
Language contact and creolization are common in the development of sign languages, making clear family classifications difficult; it is often unclear whether lexical similarity is due to borrowing or a common parent language, or whether there were one or several parent languages, such as several village languages merging into a Deaf-community language. Contact occurs between sign languages, between sign and spoken languages (
contact sign, a kind of pidgin), and between sign languages and
gestural systems used by the broader community. For example,
Adamorobe Sign Language, a village sign language of Ghana, may be related to the "gestural trade jargon used in the markets throughout West Africa", in vocabulary and
areal features including prosody and phonetics.
•
BSL,
Auslan and
NZSL are usually loosely considered to be part of the language group known as
BANZSL.
Maritime Sign Language and
South African Sign Language are also related to BSL. •
Danish Sign Language and its descendants
Norwegian Sign Language and
Icelandic Sign Language are largely mutually intelligible with
Swedish Sign Language.
Finnish Sign Language and
Portuguese Sign Language derive from Swedish SL, though with local admixture in the case of mutually unintelligible Finnish SL. Danish SL has French SL influence and Wittmann (1991) places them in that family, although some reports also say that the
São Tomé and Príncipe Sign Language is largely intelligible with Portuguese Sign. •
Indian Sign Language (ISL) is similar to
Pakistani Sign Language. •
Japanese Sign Language,
Taiwanese Sign Language and
Korean Sign Language are thought to be members of a
Japanese Sign Language family. •
French Sign Language family. There are a number of sign languages that emerged from
French Sign Language (LSF), or are the result of language contact between local community sign languages and LSF. These include:
French Sign Language,
Italian Sign Language,
Quebec Sign Language (LSQ),
American Sign Language,
Irish Sign Language,
Russian Sign Language,
Dutch Sign Language (NGT),
Spanish Sign Language,
Mexican Sign Language,
Brazilian Sign Language (LIBRAS),
Catalan Sign Language,
Ukrainian Sign Language,
Austrian Sign Language (along with its twin
Hungarian Sign Language and its offspring
Czech Sign Language) and others. • A subset of this group includes languages that have been heavily influenced by American Sign Language (ASL), or are regional varieties of ASL.
Bolivian Sign Language is sometimes considered a dialect of ASL.
Thai Sign Language is a
mixed language derived from ASL and the native sign languages of Bangkok and Chiang Mai, and may be considered part of the ASL family. Others possibly influenced by ASL include
Ugandan Sign Language,
Kenyan Sign Language,
Philippine Sign Language and
Malaysian Sign Language. • According to an SIL report, the sign languages of Russia, Moldova and Ukraine share a high degree of lexical similarity and may be dialects of one language, or distinct related languages. The same report suggested a "cluster" of sign languages centered around
Czech Sign Language,
Hungarian Sign Language and
Slovak Sign Language. This group may also include
Romanian,
Bulgarian, and
Polish sign languages. •
German Sign Language (DGS) gave rise to
Polish Sign Language; it also at least strongly influenced
Israeli Sign Language, though it is unclear whether the latter derives from DGS or from
Austrian Sign Language, which is in the French family. • The southern dialect of
Chinese Sign Language gave rise to
Hong Kong Sign Language, used in Hong Kong and Macau. •
Lyons Sign Language may be the source of
Flemish Sign Language (VGT) though this is unclear. • Sign languages of
Jordan, Lebanon, Syria, Palestine, and Iraq (and possibly
Saudi Arabia) may be part of a
sprachbund, or may be one dialect of a larger
Eastern Arabic Sign Language. • Known
isolates include
Nicaraguan Sign Language,
Turkish Sign Language,
Armenian Sign Language,
Kata Kolok,
Al-Sayyid Bedouin Sign Language and
Providence Island Sign Language. The only comprehensive classification along these lines going beyond a simple listing of languages dates back to 1991. The classification is based on the 69 sign languages from the 1988 edition of
Ethnologue that were known at the time of the 1989 conference on sign languages in Montreal and 11 more languages the author added after the conference. In his classification, the author distinguishes between primary and auxiliary sign languages as well as between single languages and names that are thought to refer to more than one language. The prototype-A class of languages includes all those sign languages that seemingly cannot be derived from any other language. Prototype-R languages are languages that are remotely modelled on a prototype-A language (in many cases thought to have been French Sign Language) by a process Kroeber (1940) called "
stimulus diffusion". The families of
BSL,
DGS,
JSL,
LSF (and possibly
LSG) were the products of
creolization and
relexification of prototype languages. Creolization is seen as enriching overt morphology in sign languages, as compared to reducing overt morphology in spoken languages.
== Typology ==
Sign languages vary in word-order typology. For example, Austrian Sign Language, Japanese Sign Language and
Indo-Pakistani Sign Language are
Subject-object-verb while ASL is
Subject-verb-object. Influence from the surrounding spoken languages is plausible. Sign languages tend to be incorporating classifier languages, where a classifier handshape representing the object is incorporated into those transitive verbs which allow such modification. For a similar group of intransitive verbs (especially motion verbs), it is the subject which is incorporated. Only in a very few sign languages (for instance Japanese Sign Language) are agents ever incorporated. In this way, since subjects of intransitives are treated similarly to objects of transitives, incorporation in sign languages can be said to follow an ergative pattern. Brentari classifies sign languages as a whole, as one group determined by the medium of communication (visual instead of auditory), with the features monosyllabic and polymorphemic. That means that one syllable (i.e. one word, one sign) can express several morphemes; for example, the subject and object of a verb determine the direction of the verb's movement (inflection). Another aspect of typology that has been studied in sign languages is their systems for
cardinal numbers. Typologically significant differences have been found between sign languages.
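The remark above that a single sign can express several morphemes, with subject and object determining the direction of a verb's movement, can be illustrated with a small sketch. The following Python example is hypothetical; the glosses, loci and coordinates are invented for illustration and do not transcribe any particular sign language:
<syntaxhighlight lang="python">
from dataclasses import dataclass

@dataclass
class Locus:
    """A point in signing space assigned to a referent earlier in the discourse."""
    referent: str
    x: float
    y: float

@dataclass
class DirectionalVerb:
    """A verb sign whose movement path encodes subject and object agreement."""
    gloss: str

    def realize(self, subject: Locus, obj: Locus) -> dict:
        # One sign (one "syllable") bundling three morphemes: the verb stem plus
        # subject and object agreement, realized as the start and end of the movement.
        return {
            "gloss": self.gloss,
            "movement_from": (subject.x, subject.y),
            "movement_to": (obj.x, obj.y),
        }

signer = Locus("first person", 0.0, 0.0)
addressee = Locus("second person", 0.0, 1.0)
give = DirectionalVerb("GIVE")

print(give.realize(signer, addressee))  # "I give you": movement away from the signer
print(give.realize(addressee, signer))  # "You give me": the same sign with the movement reversed
</syntaxhighlight>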
== Acquisition ==
Children who are exposed to a sign language from birth will acquire it, just as hearing children acquire their native spoken language. Researchers at
McGill University found that American Sign Language users who acquired the language natively (from birth) performed better when asked to copy videos of ASL sentences than ASL users who acquired the language later in life. They also found that there are differences in the grammatical morphology of ASL sentences between the two groups, all suggesting that there is an important critical period in learning signed languages. The acquisition of non-manual features follows an interesting pattern: When a word that always has a particular non-manual feature associated with it (such as a
wh-question word) is learned, the non-manual aspects are attached to the word but do not have the flexibility associated with adult use. At a certain point, the non-manual features are dropped and the word is produced with no facial expression. After a few months, the non-manuals reappear, this time being used the way adult signers would use them.
== Written forms ==
Sign languages do not have a traditional or formal written form. Many deaf people do not see a need to write their own language. Several ways to represent sign languages in written form have been developed.
•
Stokoe notation, devised by Dr.
William Stokoe for his 1965
Dictionary of American Sign Language, is an abstract
phonemic notation system. Designed specifically for representing the use of the hands, it has no way of expressing facial expression or other non-manual features of sign languages. However, it was designed for research, particularly in a dictionary, not for general use. • The
Hamburg Notation System (HamNoSys), developed in the early 1990s, is a detailed phonetic system, not designed for any one sign language, and intended as a transcription system for researchers rather than as a practical script. •
David J. Peterson has attempted to create a phonetic transcription system for signing that is
ASCII-friendly known as the Sign Language International Phonetic Alphabet (SLIPA). •
SignWriting, developed by Valerie Sutton in 1974, is a system for representing sign languages phonetically (including
mouthing, facial expression and dynamics of movement). The script is sometimes used for detailed research, language documentation, as well as publishing texts and works in sign languages. •
si5s is another orthography which is largely phonemic. However, a few signs are
logographs and/or
ideographs due to regional variation in sign languages. •
ASL-phabet is a system designed by Dr.
Sam Supalla primarily for the education of deaf children; it uses a minimalist collection of symbols in the order Handshape-Location-Movement (see the sketch after this list). Many signs can be written the same way (
homograph). • The Alphabetic Writing System for sign languages (SEA, by its Spanish acronym), developed by linguist Ángel Herrero Blanco and two deaf researchers, Juan José Alfaro and Inmaculada Cascales, was published as a book in 2003 and made accessible in
Spanish Sign Language on-line. This system makes use of the letters of the Latin alphabet with a few diacritics to represent sign through the morphemic sequence S L C Q D F (bimanual sign, place, contact, handshape, direction and internal form). The resulting words are meant to be read by signing. The system is designed to be applicable to any sign language with minimal modification and to be usable through any medium without special equipment or software. Non-manual elements can be encoded to some extent, but the authors argue that the system does not need to represent all elements of a sign to be practical, the same way written oral language does not. The system has seen some updates which are kept publicly on a wiki page. The Center for Linguistic Normalization of Spanish Sign Language has made use of SEA to transcribe all signs on its dictionary. So far, there is no consensus regarding the written form of sign language. Maria Galea writes that SignWriting "is becoming widespread, uncontainable and untraceable. In the same way that works written in and about a well developed writing system such as the Latin script, the time has arrived where SW is so widespread, that it is impossible in the same way to list all works that have been produced using this writing system and that have been written about this writing system." For example, in 2015 at the
Federal University of Santa Catarina, João Paulo Ampessan wrote his linguistics master's dissertation in Brazilian Sign Language using Sutton SignWriting. In his dissertation, "The Writing of Grammatical Non-Manual Expressions in Sentences in LIBRAS Using the SignWriting System", Ampessan states that "the data indicate the need for [non-manual expressions] usage in writing sign language". Except for SignWriting, none are widely used.
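Several of the notations listed above (Stokoe notation, ASL-phabet, SEA) share the basic idea of writing a sign as a fixed sequence of parameter symbols. The sketch below is a hypothetical Python illustration of that general approach; the symbol tables are invented, the Handshape-Location-Movement ordering follows the description of ASL-phabet above, and none of the real systems' glyphs are reproduced:
<syntaxhighlight lang="python">
from dataclasses import dataclass

# Invented symbol tables standing in for a real notation's glyph inventory.
HANDSHAPES = {"flat": "B", "fist": "A", "index": "1"}
LOCATIONS = {"neutral-space": "0", "chin": "c", "forehead": "f"}
MOVEMENTS = {"tap": "t", "circle": "o", "away": ">"}

@dataclass
class SignForm:
    """Manual parameters of a sign; non-manual features are ignored here."""
    handshape: str
    location: str
    movement: str

def transcribe(sign: SignForm) -> str:
    """Serialize a sign as symbols in a fixed Handshape-Location-Movement order."""
    return HANDSHAPES[sign.handshape] + LOCATIONS[sign.location] + MOVEMENTS[sign.movement]

# Two distinct signs can receive the same transcription (a homograph) whenever
# the notation is too coarse to record the parameters that distinguish them.
print(transcribe(SignForm("flat", "chin", "tap")))  # -> "Bct"
</syntaxhighlight>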
== Sign perception ==
For a native signer, sign
perception influences how the mind makes sense of their visual language experience. For example, a handshape may vary based on the other signs made before or after it, but these variations are arranged in perceptual categories during its development. The mind detects handshape contrasts but groups similar handshapes together in one category. Different handshapes are stored in other categories. The mind ignores some of the similarities between different perceptual categories, at the same time preserving the visual information within each perceptual category of handshape variation.
== In society ==