
Face perception

Facial perception is an individual's understanding and interpretation of the face. Here, perception implies the presence of consciousness and hence excludes automated facial recognition systems. Although facial recognition is found in other species, this article focuses on facial perception in humans.

Overview
Theories about the processes involved in adult face perception have largely come from two sources: research on normal adult face perception, and the study of impairments in face perception caused by brain injury or neurological illness.

Bruce & Young model

One of the most widely accepted theories of face perception argues that understanding faces involves several stages: from basic perceptual manipulations of the sensory information to derive details about the person (such as age, gender, or attractiveness), to being able to recall meaningful details such as their name and any relevant past experiences of the individual. This model, developed by Vicki Bruce and Andy Young in 1986, argues that face perception involves independent sub-processes working in unison.

• A "view centered description" is derived from the perceptual input. Simple physical aspects of the face are used to work out age, gender, or basic facial expressions. Most analysis at this stage is on a feature-by-feature basis.
• This initial information is used to create a structural model of the face, which allows it to be compared to other faces in memory. This explains why the same person seen from a novel angle can still be recognized (see Thatcher effect).
• The structurally encoded representation is transferred to theoretical "face recognition units" that are used with "personal identity nodes" to identify a person through information from semantic memory. Notably, the ability to produce someone's name when presented with their face has been shown to be selectively damaged in some cases of brain injury, suggesting that naming may be a separate process from producing other information about a person.

Traumatic brain injury and neurological illness

Following brain damage, faces can appear severely distorted. A wide variety of distortions can occur: features can droop, enlarge, become discolored, or the entire face can appear to shift relative to the head.
This condition is known as prosopometamorphopsia (PMO). In half of the reported cases, distortions are restricted to either the left or the right side of the face, a form of PMO called hemi-prosopometamorphopsia (hemi-PMO). Hemi-PMO often results from lesions to the splenium, which connects the right and left hemispheres. In the other half of reported cases, features on both sides of the face appear distorted.

Perceiving facial expressions can involve many areas of the brain, and damaging certain parts of the brain can cause specific impairments in the ability to perceive a face. As stated earlier, research on the impairments caused by brain injury or neurological illness has helped develop our understanding of cognitive processes. The study of prosopagnosia (an impairment in recognizing faces that is usually caused by brain injury) has been particularly helpful in understanding how normal face perception might work. Individuals with prosopagnosia differ in their abilities to understand faces, and it is the investigation of these differences that has suggested that several-stage theories might be correct.

Brain imaging studies typically show a great deal of activity in an area of the temporal lobe known as the fusiform gyrus, an area also known to cause prosopagnosia when damaged (particularly when damage occurs on both sides). This evidence has led to a particular interest in the area, which is consequently sometimes referred to as the fusiform face area (FFA). It is important to note that while certain areas of the brain respond selectively to faces, facial processing involves many neural networks, including visual and emotional processing systems. For example, prosopagnosia patients provide neuropsychological support for a specialized face perception mechanism: because of brain damage they have deficits in facial perception, yet their cognitive perception of objects remains intact.
The face inversion effect provides behavioral support for a specialized mechanism, as people show greater deficits in task performance when prompted to react to an inverted face than to an inverted object. Electrophysiological support comes from the finding that the N170 and M170 responses tend to be face-specific. Neuroimaging studies, such as those with PET and fMRI, have shown support for a specialized facial processing mechanism by identifying regions of the fusiform gyrus that have higher activation during face perception tasks than during other visual perception tasks.
Early development
Despite numerous studies, there is no widely accepted time-frame in which the average human develops the ability to perceive faces.

Ability to discern faces from other objects

Many studies have found that infants will give preferential attention to faces in their visual field, indicating that they can discern faces from other objects.

• While newborns often show particular interest in faces at around three months of age, that preference slowly disappears, re-emerges late in the first year, and slowly declines once more over the next two years of life.
• While newborns show a preference for faces, as they grow older (specifically between one and four months of age) this interest can be inconsistent.
• Infants turning their heads towards faces or face-like images suggests rudimentary facial processing capacities.
• The re-emergence of interest in faces at three months is likely influenced by a child's motor abilities.

Ability to detect emotion in the face

At around seven months of age, infants show the ability to discern faces by emotion. However, whether they have fully developed emotion recognition is unclear. Discerning visual differences in facial expressions is different from understanding the valence of a particular emotion.

• Seven-month-olds seem capable of associating emotional prosodies with facial expressions. When presented with a happy or angry face, followed by an emotionally neutral word read in a happy or angry tone, their event-related potentials follow different patterns. Happy faces followed by angry vocal tones produce more changes than the other incongruous pairing, while there is no such difference between happy and angry congruous pairings. The greater reaction implies that infants hold greater expectations of a happy vocal tone after seeing a happy face than of an angry tone following an angry face.
• By the age of seven months, children are able to recognize an angry or fearful facial expression, perhaps because of the threat-salient nature of these emotions. Despite this ability, newborns are not yet aware of the emotional content encoded within facial expressions.
• Infants can comprehend facial expressions as social cues representing the feelings of other people before they are a year old. Seven-month-old infants show greater negative central components to angry faces that are looking directly at them than to angry faces looking elsewhere, although the gaze direction of fearful faces produces no difference. In addition, two event-related potentials in the posterior part of the brain are differently aroused by the two negative expressions tested. These results indicate that infants at this age can partially understand the higher level of threat from anger directed at them.
• Five-month-olds, when presented with an image of a fearful expression and a happy expression, exhibit similar event-related potentials for both. However, when seven-month-olds are given the same treatment, they focus more on the fearful face. This result indicates increased cognitive focus toward fear, reflecting the threat-salient nature of the emotion. Seven-month-olds also regard happy and sad faces as distinct emotive categories.
• By seven months, infants are able to use facial expressions to understand others' behavior. Seven-month-olds use facial cues to understand the motives of other people in ambiguous situations, as shown in a study in which infants watched the experimenter's face longer if the experimenter took a toy from them and maintained a neutral expression than if the experimenter made a happy expression. How infants respond when exposed to faces varies depending on factors including facial expression and eye gaze direction.
• While seven-month-olds have been found to focus more on fearful faces, one study found that "happy expressions elicit enhanced sympathetic arousal in infants", both when facial expressions were presented subliminally and when the infants were consciously aware of the stimulus. An infant's reaction thus does not appear to depend on conscious awareness of the stimulus.
• The neural substrates of face perception in infants are similar to those of adults, but the limits of child-safe imaging technology currently obscure specific information from subcortical areas like the amygdala, which is active in adult facial perception. Infants also show activity near the fusiform gyrus.
• Infants can discern between macaque faces at six months of age but, without continued exposure, cannot do so at nine months of age. If they are shown photographs of macaques during this three-month period, they are more likely to retain this ability.
• Faces "convey a wealth of information that we use to guide our social interactions". The neurological mechanisms responsible for face recognition are present by age five. Children's processing of faces is similar to that of adults, but adults process faces more efficiently, perhaps because of advancements in memory and cognitive functioning.
• The idea that infants younger than two can mimic facial expressions was disputed by Susan S. Jones, who believed that infants are unaware of the emotional content encoded within facial expressions, and who found that they are not able to imitate facial expressions until their second year of life. She also found that mimicry emerged at different ages.
Neuroanatomy
Key areas of the brain

Facial perception has neuroanatomical correlates in the brain. The fusiform face area (in Brodmann area 37) is located in the lateral fusiform gyrus. This area is thought to be involved in holistic processing of faces, and it is sensitive to the presence of facial parts as well as to the configuration of these parts. The fusiform face area is also necessary for successful face detection and identification. This is supported by fMRI activation and by studies of prosopagnosia, which involves lesions in the fusiform face area.

The occipital face area is located in the inferior occipital gyrus. The superior temporal sulcus shows increased activation when attending to gaze direction.

During face perception, major activations occur bilaterally in the extrastriate areas, particularly in the above three areas. Bilateral activation is generally shown in all of these specialized facial areas. However, some studies show increased activation on one side over the other: for instance, the right fusiform gyrus is more important for facial processing in complex situations.

One study used BOLD fMRI mapping to identify activation in the brain when subjects viewed both cars and faces. It found that the occipital face area, the fusiform face area, the superior temporal sulcus, the amygdala, and the anterior/inferior cortex of the temporal lobe all played roles in contrasting faces with cars, with initial face perception beginning in the fusiform face area and the occipital face area. This entire region forms a network that acts to distinguish faces.

The processing of faces in the brain is known as a "sum of parts" perception; the individual parts of the face must be processed first in order to put all of the pieces together. In early processing, the occipital face area contributes to face perception by recognizing the eyes, nose, and mouth as individual pieces.
Researchers also used BOLD fMRI mapping to determine the patterns of activation in the brain when parts of the face were presented in combination and when they were presented singly. The occipital face area is activated by the visual perception of single features of the face, such as the nose and mouth, and prefers the combination of two eyes over other combinations. This suggests that the occipital face area recognizes the parts of the face at the early stages of recognition. In contrast, the fusiform face area shows no preference for single features, because it is responsible for "holistic/configural" information.

While looking at faces displaying emotions (especially fearful expressions) compared to neutral faces, there is increased activity in the right fusiform gyrus. This increased activity correlates with increased amygdala activity in the same situations. The emotional processing effects observed in the fusiform gyrus are decreased in patients with amygdala lesions.

The object form topology hypothesis posits a topological organization of neural substrates for object and facial processing. However, there is disagreement: the category-specific and process-map models could accommodate most other proposed models for the neural underpinnings of facial processing.

Most neuroanatomical substrates for facial processing are perfused by the middle cerebral artery. Therefore, facial processing has been studied using measurements of mean cerebral blood flow velocity in the middle cerebral arteries bilaterally. During facial recognition tasks, greater changes occur in the right middle cerebral artery than in the left. Men are right-lateralized and women left-lateralized during facial processing tasks.
Just as memory and cognitive function separate the abilities of children and adults to recognize faces, the familiarity of a face may also play a role in face perception. Familiar faces elicit a stronger N250 response, while all faces, familiar or not, elicit the N170 response. The brain conceptually needs only about 50 neurons to encode any human face, with facial features projected on individual axes (neurons) in a 50-dimensional "face space".
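The idea of such a low-dimensional linear face code can be illustrated with a toy model. The sketch below is not the published analysis: the 50 axes are random orthonormal directions and the "face" is an arbitrary feature vector, both invented for illustration. Each model neuron's response is the projection of the face onto its axis, and the face is recovered as a response-weighted sum of the axes.

```python
import random

random.seed(0)
DIM = 50  # number of "axis" neurons in the model face space


def dot(u, v):
    return sum(a * b for a, b in zip(u, v))


def normalize(v):
    n = dot(v, v) ** 0.5
    return [a / n for a in v]


def orthonormal_axes(dim):
    """Build a random orthonormal basis by Gram-Schmidt."""
    axes = []
    while len(axes) < dim:
        v = [random.gauss(0, 1) for _ in range(dim)]
        for a in axes:                      # remove components along existing axes
            c = dot(v, a)
            v = [x - c * y for x, y in zip(v, a)]
        if dot(v, v) > 1e-8:                # skip (unlikely) degenerate draws
            axes.append(normalize(v))
    return axes


axes = orthonormal_axes(DIM)

# A face is a point in the 50-dimensional space; each neuron's "firing
# rate" is the projection of the face onto that neuron's preferred axis.
face = [random.gauss(0, 1) for _ in range(DIM)]
rates = [dot(face, a) for a in axes]

# Linear decoding: the face is recovered as a rate-weighted sum of axes.
decoded = [sum(r * a[i] for r, a in zip(rates, axes)) for i in range(DIM)]
err = max(abs(x - y) for x, y in zip(face, decoded))
assert err < 1e-9
```

Because the axes form an orthonormal basis, the 50 responses losslessly encode any point in the space, which is the essence of the "face space" claim.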
Cognitive neuroscience
Cognitive neuroscientists Isabel Gauthier and Michael Tarr are two of the major proponents of the view that face recognition involves expert discrimination of similar objects. Other scientists, in particular Nancy Kanwisher and her colleagues, argue that face recognition involves processes that are face-specific and that are not recruited by expert discriminations in other object classes.

Studies by Gauthier have shown that an area of the brain known as the fusiform gyrus (sometimes called the fusiform face area because it is active during face recognition) is also active when study participants are asked to discriminate between different types of birds and cars, and even when participants become expert at distinguishing computer-generated nonsense shapes known as greebles. This suggests that the fusiform gyrus may have a general role in the recognition of similar visual objects. The activity found by Gauthier when participants viewed non-face objects was not as strong as when participants were viewing faces; however, this could be because we have much more expertise for faces than for most other objects.

Furthermore, not all findings of this research have been successfully replicated; for example, other research groups using different study designs have found that the fusiform gyrus is specific to faces and that other nearby regions deal with non-face objects. However, these findings are difficult to interpret: failures to replicate are null effects and can occur for many different reasons. In contrast, each replication adds a great deal of weight to a particular argument. There are now multiple replications with greebles, with birds and cars, and two unpublished studies with chess experts.

Although expertise sometimes recruits the fusiform face area, a more common finding is that expertise leads to focal category-selectivity in the fusiform gyrus, a pattern similar in terms of antecedent factors and neural specificity to that seen for faces.
As such, it remains an open question whether face recognition and expert-level object recognition recruit similar neural mechanisms across different subregions of the fusiform gyrus, or whether the two domains literally share the same neural substrates. At least one study argues that the issue is nonsensical, as multiple measurements of the fusiform face area within an individual often overlap no more with each other than do measurements of the fusiform face area and expertise-predicted regions. fMRI studies have asked whether expertise has any specific connection to the fusiform face area in particular, by testing for expertise effects in both the fusiform face area and a nearby but not face-selective region called the LOC (Rhodes et al., JOCN 2004; Op de Beeck et al., JN 2006; Moore et al., JN 2006; Yue et al., VR 2006). In all studies, expertise effects are significantly stronger in the LOC than in the fusiform face area; indeed, expertise effects were only borderline significant in the fusiform face area in two of the studies, while the effects were robust and significant in the LOC in all studies. Therefore, it is still not clear in exactly which situations the fusiform gyrus becomes active, although it is certain that face recognition relies heavily on this area and that damage to it can lead to severe face recognition impairment.
Face advantage in memory recall
During face perception, neural networks make connections with the brain to recall memories. According to a seminal model of face perception, there are three stages of face processing. While the face is a powerful identifier, the voice also helps in recognition. Research has tested whether faces or voices make it easier to identify individuals and to recall semantic memory and episodic memory. In these experiments, participants were first asked whether the stimulus was familiar. If they answered yes, they were asked for information (semantic memory) and memories (episodic memory) that fit the face or voice presented. These experiments demonstrated the phenomenon of the face advantage, which has persisted through follow-up studies. Even when voices were recognized (about 60–70% of the time), it was much harder to recall biographical information from them than from faces. This discrepancy is due to a larger amount of guesswork and false alarms that occur with voices. Even after controlling the voice samples as well as the face samples (using blurred faces), studies have shown that semantic information is more accessible when individuals are recognizing faces than voices.

Another technique to control the content of the speech extracts is to present the faces and voices of personally familiar individuals, like the participant's teachers or neighbors, instead of the faces and voices of celebrities. Studies that used this type of control also demonstrated the face advantage. In one experiment using this paradigm, a name and a profession were given together with, accordingly, a voice, a face, or both to three participant groups. Again, the results showed that semantic information is more accessible when individuals are recognizing faces than voices, even when the frequency of exposure was controlled.
In face recognition as it pertains to episodic memory, activation has been shown in the left lateral prefrontal cortex, the parietal lobe, and the left medial frontal/anterior cingulate cortex. Left lateralization during episodic memory retrieval in the parietal cortex was also found to correlate strongly with success in retrieval. There is also evidence of two separate neural systems for face recognition: one for familiar faces and another for newly learned faces.
Self-face perception
Though many animals have face-perception capabilities, recognition of one's own face has been observed in only a few species. There is particular interest in the study of self-face perception because of its relation to the perceptual integration process. One study found that the perception/recognition of one's own face was unaffected by changing contexts, while the perception/recognition of familiar and unfamiliar faces was adversely affected. In 2014, Motoaki Sugiura developed a conceptual model for self-recognition by breaking it into three categories: the physical, interpersonal, and social selves.

Mirror test

Gordon Gallup Jr. developed a technique in 1970 as an attempt to measure self-awareness, commonly referred to as the mirror test. The method involves placing a marker on the subject in a place they cannot see without a mirror (e.g., one's forehead). The marker must be placed inconspicuously enough that the subject does not become aware that they have been marked. Once the marker is placed, the subject is given access to a mirror. If the subject investigates the mark (e.g., tries to wipe it off), this indicates that the subject understands they are looking at a reflection of themselves, rather than perceiving the mirror as an extension of their environment (e.g., thinking the reflection is another person or animal behind a window).

Though this method is regarded as one of the more effective techniques for measuring self-awareness, it is certainly not perfect. Many factors could affect the outcome. For example, if an animal is biologically blind, like a mole, we cannot assume that it inherently lacks self-awareness. Visual self-recognition is possibly only one of many ways for a living being to be considered cognitively "self-aware".
Gender
Studies using electrophysiological techniques have demonstrated gender-related differences during a face recognition memory task and a facial affect identification task. In facial perception there was no association with estimated intelligence, suggesting that face recognition in women is unrelated to several basic cognitive processes. Gender-related differences may suggest a role for sex hormones. In females there may be variability in psychological functions related to differences in hormonal levels during different phases of the menstrual cycle.

Data obtained in normal and pathological conditions support asymmetric face processing. The left inferior frontal cortex and the bilateral occipitotemporal junction may respond equally to all face conditions. The right inferior temporal/fusiform gyrus responds selectively to faces but not to non-faces. The right temporal pole is activated during the discrimination of familiar faces and scenes from unfamiliar ones. Right asymmetry in the mid-temporal lobe for faces has also been shown using cerebral blood flow measured with 133-Xenon. Other investigators have observed right lateralization for facial recognition in previous electrophysiological and imaging studies.

Asymmetric facial perception implies different hemispheric strategies: the right hemisphere would employ a holistic strategy, and the left an analytic strategy. A 2007 study using functional transcranial Doppler spectroscopy demonstrated that men were right-lateralized for object and facial perception, while women were left-lateralized for facial tasks but showed a right tendency or no lateralization for object perception. This could be taken as evidence for a topological organization of these cortical areas in men, and may suggest that the latter extends from the area implicated in object perception to a much greater area involved in facial perception. This agrees with the object form topology hypothesis proposed by Ishai.
However, the relatedness of object and facial perception was process-based, and appears to be associated with their common holistic processing strategy in the right hemisphere. Moreover, when the same men were presented with a facial paradigm requiring analytic processing, the left hemisphere was activated. This agrees with the suggestion made by Gauthier in 2000 that the extrastriate cortex contains areas best suited for different computations, described as the process-map model. Therefore, the proposed models are not mutually exclusive: facial processing imposes no new constraints on the brain beyond those used for other stimuli. Each stimulus may be mapped by category into face or non-face, and by process into holistic or analytic. A unified category-specific process-mapping system may therefore be implemented for either right or left cognitive styles: for facial perception, men likely use a category-specific process-mapping system for a right cognitive style, and women use the same for a left cognitive style.
Ethnicity
Differences in own- versus other-race face recognition and perceptual discrimination were first researched in 1914. Humans tend to perceive people of races other than their own as all looking alike. This phenomenon, known as the cross-race effect, is also called the own-race effect, other-race effect, own-race bias, or interracial face-recognition deficit. D. Stephen Lindsay and colleagues note that results in these studies could be due to intrinsic difficulty in recognizing the faces presented, an actual difference in the size of the cross-race effect between the two test groups, or some combination of these two factors. Overall, Shepherd reported a reliable positive correlation between the size of the effect and the amount of interaction subjects had with members of the other race. This correlation reflects the fact that African-American subjects, who performed equally well on faces of both races in Shepherd's study, almost always responded with the highest possible self-rating of amount of interaction with white people, whereas their white counterparts displayed a larger other-race effect and reported less other-race interaction. This difference in rating was statistically reliable.

Challenging the cross-race effect

Cross-race effects can be changed through interaction with people of other races. Other-race experience is a major influence on the cross-race effect: a series of studies revealed that participants with greater other-race experience were consistently more accurate at discriminating other-race faces than participants with less experience. The own-race effect appears related to an increased ability to extract information about the spatial relationships between different facial features. A deficit occurs when viewing people of another race because visual information specifying race takes up mental attention at the expense of individuating information.
Further research using perceptual tasks could shed light on the specific cognitive processes involved in the other-race effect. Similarly, men tend to recognize fewer female faces than women do, whereas there are no sex differences for male faces. If made aware of the own-race effect prior to the experiment, test subjects show a significantly reduced own-race effect, if any.
Autism
Autism spectrum disorder is a comprehensive neurodevelopmental disorder that produces social, communicative, and perceptual deficits. Individuals with autism exhibit difficulties with facial identity recognition and with recognizing emotional expressions. These deficits are suspected to spring from abnormalities in early and late stages of facial processing.

Speed and methods

People with autism process face and non-face stimuli with the same speed. In non-autistic individuals, a preference for face processing results in a faster processing speed in comparison to non-face stimuli. People with autism direct their gaze primarily to the lower half of the face, specifically the mouth, in contrast to the eye-trained gaze of non-autistic people. This deviation does not employ the use of facial prototypes, which are templates stored in memory that make for easy retrieval. Additionally, individuals with autism display difficulty with recognition memory, specifically memory that aids in identifying faces. The memory deficit is selective for faces and does not extend to other visual input. These deficiencies can be seen in infants as young as nine months.

Furthermore, research suggests a link between decreased face processing abilities in individuals with autism and later deficits in theory of mind. While typically developing individuals are able to relate others' emotional expressions to their actions, individuals with autism do not demonstrate this skill to the same extent. The direction of causation, however, resembles a chicken-or-egg dispute: some theorize that social impairment leads to perceptual problems. In terms of facial identity recognition, compensation can include a more pattern-based strategy, first seen in face inversion tasks.
Schizophrenia
Schizophrenia is known to affect attention, perception, memory, learning, processing, reasoning, and problem solving. Schizophrenia has been linked to impaired face and emotion perception. People with schizophrenia demonstrate worse accuracy and slower response times in face perception tasks in which they are asked to match faces, remember faces, and recognize which emotions are present in a face. Schizophrenia patients report more feelings of strangeness when looking in a mirror than do normal controls. Hallucinations, somatic concerns, and depression have all been found to be associated with self-face perception difficulties.
Other animals
Neurobiologist Jenny Morton and her team have been able to teach sheep to choose a familiar face over an unfamiliar one when presented with two photographs, which has led to the discovery that sheep can recognize human faces. Archerfish (distant relatives of humans) were able to differentiate between forty-four different human faces, which supports the theory that neither a neocortex nor a history of discerning human faces is needed to do so. Pigeons were found to use the same parts of the brain as humans do to distinguish between happy and neutral faces or male and female faces.
Artificial intelligence
Much effort has gone into developing software that can recognize human faces. This work has occurred in a branch of artificial intelligence known as computer vision, which uses the psychology of face perception to inform software design. Recent breakthroughs use noninvasive functional transcranial Doppler spectroscopy to locate specific responses to facial stimuli. Such a system provides a brain-machine interface for facial recognition, referred to as cognitive biometrics.

Another application is estimating age from images of faces. Compared with other cognition problems, age estimation from facial images is challenging, mainly because the aging process is influenced by many external factors, such as physical condition and lifestyle. The aging process is also slow, making sufficient data difficult to collect.

Nemrodov

In 2016, Dan Nemrodov conducted multivariate analyses of EEG signals that might be involved in identity-related information, applying pattern classification to event-related potential signals in both time and space. The main targets of the study were:

• evaluating whether previously known event-related potential components, such as the N170, are involved in individual face recognition
• locating temporal landmarks of individual-level recognition from event-related potential signals
• determining the spatial profile of individual face recognition

For the experiment, conventional event-related potential analyses and pattern classification of event-related potential signals were conducted on preprocessed EEG signals. This and a further study showed that a spatio-temporal profile of the individual face recognition process exists, and that individual face images could be reconstructed by using this profile and the informative features that contribute to the encoding of identity-related information.
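The pattern-classification step can be illustrated with a toy model. The sketch below is not Nemrodov's pipeline: the "ERP" data are simulated (an identity-specific spatio-temporal signature plus Gaussian noise, on an invented 8-channel, 40-sample montage), and the classifier is a simple nearest-centroid rule. It only shows the general idea of decoding identity-related information from multichannel signals.

```python
import random

random.seed(1)
CHANNELS, SAMPLES = 8, 40   # invented montage: 8 sensors, 40 time points

# Each identity gets a fixed spatio-temporal "signature" (the signal the
# classifier must pick up); epochs are signature plus Gaussian noise.
signatures = {k: [random.gauss(0, 1) for _ in range(CHANNELS * SAMPLES)]
              for k in ("face_A", "face_B")}


def epoch(identity, noise=0.3):
    """Simulate one noisy single-trial 'ERP' epoch, flattened to a vector."""
    return [s + random.gauss(0, noise) for s in signatures[identity]]


def centroid(epochs):
    n = len(epochs)
    return [sum(e[i] for e in epochs) / n for i in range(len(epochs[0]))]


def dist2(u, v):
    return sum((a - b) ** 2 for a, b in zip(u, v))


# "Training": average 20 epochs per identity into a centroid pattern.
train = {k: centroid([epoch(k) for _ in range(20)]) for k in signatures}


def classify(e):
    """Label an epoch by its nearest training centroid."""
    return min(train, key=lambda k: dist2(e, train[k]))


# "Testing": held-out epochs should land near their own identity's centroid.
tests = [(k, epoch(k)) for k in signatures for _ in range(25)]
accuracy = sum(classify(e) == k for k, e in tests) / len(tests)
assert accuracy > 0.9
```

Real studies use cross-validated classifiers over sliding time windows to find when identity information becomes decodable; the nearest-centroid rule here is only the simplest stand-in for that family of methods.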
Genetic basis
While many cognitive abilities, such as general intelligence, have a clear genetic basis, evidence for the genetic basis of facial recognition abilities is fairly recent. Current evidence suggests that facial recognition abilities are highly linked to genetic, rather than environmental, bases. Early research focused on genetic disorders which impair facial recognition abilities, such as Turner syndrome, which results in impaired amygdala functioning. A 2003 study found significantly poorer facial recognition abilities in individuals with Turner syndrome, suggesting that the amygdala impacts face perception. This finding was supported by studies which found a similar difference in facial recognition scores and by those which determined the heritability of facial recognition to be approximately 61%. For hereditary prosopagnosics, an autosomal dominant model of inheritance has been proposed. Research has also correlated the probability of hereditary prosopagnosia with single nucleotide polymorphisms.
Social perceptions of faces
People make rapid judgements about others based on facial appearance. Some judgements are formed very quickly and accurately: adults correctly categorise the sex of adult faces with only a 75 ms exposure and with near 100% accuracy. The accuracy of some other judgements is less easily confirmed, though there is evidence that perceptions of health made from faces are at least partly accurate, with health judgements reflecting fruit and vegetable intake, body fat, and BMI. People also form judgements about others' personalities from their faces, and there is evidence of at least partial accuracy in this domain too.

Valence-dominance model

The valence-dominance model is a widely cited model suggesting that the social judgements made of faces can be summarised along two dimensions: valence (positive-negative) and dominance (dominant-submissive). A recent large-scale multi-country replication project largely supported this model across different world regions, though it found that a potential third dimension may also be important in some regions; other research suggests that the valence-dominance model also applies to social perceptions of bodies.
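The dimensionality claim behind the model can be illustrated with simulated data. In the sketch below, trait ratings are generated from two latent factors (stand-ins for valence and dominance) plus noise, and a plain power-iteration PCA shows that two principal components recover most of the rating variance. The trait battery size, loadings, and sample counts are all invented for illustration; this is not the analysis from any cited study.

```python
import random

random.seed(2)
TRAITS = 6    # invented trait battery (e.g., trustworthy, dominant, ...)
FACES = 500

# Ratings are driven by two latent factors plus small noise (simulated data).
loadings = [[random.gauss(0, 1), random.gauss(0, 1)] for _ in range(TRAITS)]


def rate_face():
    val, dom = random.gauss(0, 1), random.gauss(0, 1)
    return [loadings[t][0] * val + loadings[t][1] * dom + random.gauss(0, 0.2)
            for t in range(TRAITS)]


data = [rate_face() for _ in range(FACES)]
means = [sum(row[t] for row in data) / FACES for t in range(TRAITS)]
cov = [[sum((row[i] - means[i]) * (row[j] - means[j]) for row in data) / FACES
        for j in range(TRAITS)] for i in range(TRAITS)]


def top_eigen(m):
    """Power iteration for the dominant eigenpair of a symmetric matrix."""
    v = [1.0] * len(m)
    for _ in range(200):
        w = [sum(m[i][j] * v[j] for j in range(len(m))) for i in range(len(m))]
        n = sum(x * x for x in w) ** 0.5
        v = [x / n for x in w]
    lam = sum(v[i] * sum(m[i][j] * v[j] for j in range(len(m)))
              for i in range(len(m)))
    return lam, v


lam1, v1 = top_eigen(cov)
# Deflate the first component and extract the second.
deflated = [[cov[i][j] - lam1 * v1[i] * v1[j] for j in range(TRAITS)]
            for i in range(TRAITS)]
lam2, _ = top_eigen(deflated)

# Two components should explain nearly all variance in this two-factor world.
explained = (lam1 + lam2) / sum(cov[i][i] for i in range(TRAITS))
assert explained > 0.9
```

In the empirical literature the same logic runs in reverse: many trait ratings are collected, dimensionality reduction is applied, and the first two recovered dimensions are interpreted as valence and dominance.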