==Language acquisition==

Though there is still much debate, there are two main positions on children's language development:
• the behaviorist perspective, whereby all language must be learned by the child; and
• the innatist perspective, which holds that the abstract system of language cannot be learned, but that humans possess an innate language faculty, or access to what has been called "universal grammar".

The innatist perspective began in 1959 with
Noam Chomsky's critical review of
B.F. Skinner's
Verbal Behavior (1957). This review helped start what has been called the
cognitive revolution in psychology. Chomsky posited that humans possess a special, innate ability for language, and that
complex syntactic features, such as
recursion, are "hard-wired" in the brain. These abilities are thought to be beyond the grasp of even the most intelligent and social non-humans. Chomsky asserted that children acquiring a language have a vast search space to explore among all possible human grammars, yet at the time there was no evidence that children receive sufficient input to learn all the rules of their language. Hence, there must be some other innate mechanism that endows humans with the ability to learn language. According to the "innateness hypothesis", such a language faculty is what defines human language and makes that faculty different from even the most sophisticated forms of animal communication. The fields of linguistics and psycholinguistics have since been defined by pro-and-con reactions to Chomsky. The view in favor of Chomsky still holds that the human ability to use language (specifically the ability to use recursion) is qualitatively different from any sort of animal ability. The view that language must be learned was especially popular before 1960 and is well represented by the
mentalistic theories of
Jean Piaget and the empiricist
Rudolf Carnap. Likewise, the behaviorist school of psychology puts forth the point of view that language is a behavior shaped by conditioned response; hence it is learned. The view that language can be learned has had a recent resurgence inspired by
emergentism. This view challenges the "innate" view as scientifically
unfalsifiable; that is to say, it cannot be tested. With the increase in computer technology since the 1980s, researchers have been able to simulate language acquisition using neural network models.
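The neural-network simulations mentioned above can be illustrated with a minimal sketch. Everything below is a hypothetical stand-in, not any specific published model: a toy corpus, a tiny vocabulary, and a single-layer softmax network trained to predict the next word. Real acquisition simulations (e.g., simple recurrent networks) are considerably richer, but the core idea is the same: the model extracts distributional regularities purely from input, without built-in grammar rules.

```python
import numpy as np

# Hypothetical toy corpus of word sequences (a mini-grammar for illustration).
corpus = [
    "the boy sees the dog",
    "the dog sees the boy",
    "the boy walks",
    "the dog walks",
]

# Build the vocabulary and an index for each word.
words = sorted({w for s in corpus for w in s.split()})
idx = {w: i for i, w in enumerate(words)}
V = len(words)

# Training pairs: each word is used to predict the next word in its sentence.
pairs = []
for s in corpus:
    toks = [idx[w] for w in s.split()]
    pairs += list(zip(toks, toks[1:]))

rng = np.random.default_rng(0)
W = rng.normal(0.0, 0.1, (V, V))  # one softmax layer: current word -> next-word logits

def loss_and_grad(W):
    """Average cross-entropy of next-word prediction, plus its gradient."""
    total, grad = 0.0, np.zeros_like(W)
    for x, y in pairs:
        logits = W[x]
        p = np.exp(logits - logits.max())
        p /= p.sum()
        total -= np.log(p[y])
        g = p.copy()
        g[y] -= 1.0       # gradient of cross-entropy w.r.t. logits
        grad[x] += g
    return total / len(pairs), grad / len(pairs)

initial, _ = loss_and_grad(W)
for _ in range(200):          # plain gradient descent on the toy corpus
    _, g = loss_and_grad(W)
    W -= 1.0 * g
final, _ = loss_and_grad(W)

# The cross-entropy drops as the network absorbs the corpus's transition statistics,
# e.g. learning that "the" tends to be followed by a noun.
print("loss before:", initial, "after:", final)
```

The point of such simulations is that the network is never given explicit rules; whatever regularities it exhibits after training were induced from the input alone, which is why emergentist accounts treat them as evidence against the necessity of innate grammatical knowledge.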
==Language comprehension==

The structures and uses of language are related to the formation of ontological insights. Some see this system as "structured cooperation between language-users" who use conceptual and semantic difference in order to exchange meaning and knowledge, as well as give meaning to language, thereby examining and describing "semantic processes bound by a 'stopping' constraint which are not cases of ordinary deferring." Deferring is normally done for a reason, and a rational person is always disposed to defer if there is good reason.

The theory of the "semantic differential" supposes universal distinctions, such as:
• Typicality: includes scales such as "regular–rare" and "typical–exclusive";
• Reality: "imaginary–real", "evident–fantastic", "abstract–concrete";
• Complexity: "complex–simple", "unlimited–limited", "mysterious–usual";
• Improvement or Organization: "regular–spasmodic", "constant–changeable", "organized–disorganized", "precise–indefinite";
• Stimulation: "interesting–boring", "trivial–new".
==Reading==

One question in the realm of language comprehension is how people understand sentences as they read (i.e., sentence processing). Experimental research has spawned several theories about the architecture and mechanisms of sentence comprehension. These theories are typically concerned with the types of information contained in the sentence that the reader can use to build meaning, and with the point at which that information becomes available to the reader. Issues such as "
modular" versus "interactive" processing have been theoretical divides in the field. A modular view of sentence processing assumes that the stages involved in reading a sentence function independently as separate modules. These modules have limited interaction with one another. For example, one influential theory of sentence processing, the "
garden-path theory", states that syntactic analysis takes place first. Under this theory, as the reader is reading a sentence, he or she creates the simplest structure possible, to minimize effort and cognitive load. This is done without any input from
semantic analysis or context-dependent information. Hence, in the sentence "The evidence examined by the lawyer turned out to be unreliable", by the time the reader gets to the word "examined" he or she has committed to a reading of the sentence in which the evidence is examining something, because that is the simplest parsing. This commitment is made even though it results in an implausible situation: evidence cannot examine something. Under this "syntax first" theory, semantic information is processed at a later stage. Only later will the reader recognize that he or she needs to revise the initial parsing into one in which "the evidence" is being examined. In this example, readers typically recognize their mistake by the time they reach "by the lawyer" and must go back and reevaluate the sentence. This reanalysis is costly and contributes to slower reading times. A 2024 study found that during self-paced reading tasks, participants progressively read faster and recalled information more accurately, suggesting that task adaptation is driven by learning processes rather than by declining motivation.

In contrast to the modular view, an interactive theory of sentence processing, such as a
constraint-based lexical approach assumes that all available information contained within a sentence can be processed at any time. Under an interactive view, the semantics of a sentence (such as plausibility) can come into play early on to help determine the structure of a sentence. Hence, in the sentence above, the reader would be able to make use of plausibility information in order to assume that "the evidence" is being examined instead of doing the examining. There are data to support both modular and interactive views; which view is correct is debatable. When reading,
saccades can cause the mind to skip over words it does not judge important to the sentence; the mind either omits them from the sentence entirely or supplies the wrong word in their stead. This can be seen in "Paris in the the spring", a common psychological test in which the mind often skips the second "the", especially when a line break falls between the two.
==Language production==

Language production concerns how people speak, sign, or write language. One of the most effective ways to see how people represent and retrieve the sounds and meanings of words and apply language rules is by collecting and analyzing
speech errors. Research on speech errors studies
tip-of-the-tongue states; fluency problems such as false starts, repetition, reformulation, and pauses between words or phrases; and
slips of the tongue. Slips can involve various kinds of linguistic segments. These can be as small as phonetic features (such as
voicing in the error "glear plue sky," where the voiced feature of the [b] sound in
blue exchanged places with the unvoiced feature of the [k] sound in
clear) or as big as whole phrases (like in the error "in one ear and gone tomorrow," where the
idioms in one ear and out the other and
here today and gone tomorrow were blended). Also, slips show a variety of changes of an intended utterance.
Spoonerisms, for example, are exchanges of parts of words (like in the error "shake a tower" for
take a shower). Speech errors can also show the substitution of one segment for another (like in the error "Don't burn your toes" when the speaker intended
fingers; "I got whipped cream on my mushroom" when the speaker intended
mustache). Anticipation or perseveration can also affect segments (e.g., "a case of ice" where the [s] sound is copied upstream, or "a cake of ike" where the [k] sound is copied downstream). Segments can also shift (like in the error "easy enoughly," where the
function morpheme -ly moved to a different word).

Phenomena such as those summarized above have significant implications for understanding how language is produced:
• Sentences are not fully planned before a speaker starts talking: sound and word exchanges show that the planning window varies for segments of different sizes. Rather, the speaker's language faculty is constantly tapped during speech production. This is attributed to the limited capacity of working memory.
• The lexicon is organized semantically and phonologically: substitution errors show that the lexicon is organized not only by meaning, but also by form.
• Morphologically complex words are assembled: errors involving blending within a word suggest that rules govern the construction of words in production (and likely also in the mental lexicon). In other words, speakers generate morphologically complex words by merging morphemes rather than retrieving them as chunks.

It is useful to differentiate between three separate phases of language production:
• conceptualization: "determining what to say";
• formulation: "translating the intention to say something into linguistic form";
• execution: "the detailed articulatory planning and articulation itself".

Psycholinguistic research has largely concerned itself with the study of formulation, because the conceptualization phase remains largely elusive and mysterious.

Language also serves as a cognitive tool: a scaffolding mechanism that actively shapes mental representations in domains such as space, time, and color perception. A key refinement of linguistic relativity is Slobin's (1996) "Thinking for Speaking" hypothesis, which argues that language influences cognition most strongly when individuals prepare to communicate.
Unlike traditional views of linguistic relativity, which suggest that language passively shapes thought, "Thinking for Speaking" proposes that speakers actively engage with linguistic categories and structures while constructing utterances. From a psycholinguistic standpoint, research on linguistic relativity intersects with conceptual representations, perceptual learning, and cognitive flexibility. Experimental studies have tested these ideas by examining how speakers of different languages categorize the world differently. For instance, cross-linguistic comparisons in spatial cognition reveal that languages with absolute spatial frames (e.g., Guugu Yimithirr) encourage speakers to encode space differently than languages with relative spatial frames (e.g., English).

Overall, linguistic relativity in psycholinguistics is no longer seen as a rigid determinism of thought by language, but rather as a gradual, experience-based modulation of cognition by linguistic structures. This perspective has led to a shift from a purely linguistic hypothesis to an integrative cognitive science framework incorporating evidence from experimental psychology, neuroscience, and computational modeling.

==Methodologies==