With reference to both attentional and pre-attentional processing, describe how the human visual system recognises faces and objects.
Vision allows people to know the contents and layout of their environment, with object recognition being one of its most significant functions (Yantis, 1998). It is important to distinguish between perceiving and recognising an object: perceptual recognition allows us to recognise an object under different lighting, from different angles and when parts of it are hidden, whereas semantic recognition allows us to know an object's function and to recall the associations we have formed with it (Warrington and Taylor, 1978). A mechanism that contributes to these processes is visual attention: the ability to focus awareness on one stimulus, thought, or action while ignoring other irrelevant stimuli, thoughts, and actions (Gazzaniga, Ivry and Mangun, 2016). Throughout this essay I will focus on how the human visual system recognises objects, for example through object constancy, view-dependent versus view-invariant recognition, and geon theory, while also examining how this differs from recognising faces, for instance through configuration, within-category discrimination, and the inversion effect. Lastly, because the brain has limited capacity, not all visual input can be processed at once; I shall therefore consider how attentional and pre-attentional processing shape how the visual system functions.
Furthermore, a second route to object constancy is the observation of critical features by feature detectors, which respond to lines, curves and angles, for instance (Pinker, 1984). For example, the representation of the letter 'A' would be identified by its right- and left-leaning segments, its horizontal segment and its upward acute angle (Treisman, 1969). The problem with this recognition model is that it is difficult to show how a natural shape can be interpreted, such as defining an elephant through lines, curves and segments. In this instance an individual would have to rely on features unique to that shape, such as the elephant's trunk. In addition, critical features and template matching have been used as methods of facial recognition, implying that facial features are sufficient for discrimination and suggesting that objects and faces share the same recognition processes. Brunelli and Poggio (1993) compared these techniques for recognising faces from frontal-view images. They found that critical features allowed higher recognition speed and smaller memory requirements, but that template matching proved both simpler and superior.
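The template-matching account discussed above can be illustrated computationally. The sketch below matches a small binary image against stored letter templates by normalised correlation; the 5×5 letter grids and the correlation measure are illustrative assumptions, not details of Brunelli and Poggio's (1993) method:

```python
import numpy as np

# Hypothetical 5x5 binary "images" serving as stored letter templates.
TEMPLATES = {
    "T": np.array([[1, 1, 1, 1, 1],
                   [0, 0, 1, 0, 0],
                   [0, 0, 1, 0, 0],
                   [0, 0, 1, 0, 0],
                   [0, 0, 1, 0, 0]]),
    "L": np.array([[1, 0, 0, 0, 0],
                   [1, 0, 0, 0, 0],
                   [1, 0, 0, 0, 0],
                   [1, 0, 0, 0, 0],
                   [1, 1, 1, 1, 1]]),
}

def match_template(image):
    """Return the template label with the highest normalised correlation."""
    def score(a, b):
        a, b = a.flatten().astype(float), b.flatten().astype(float)
        a -= a.mean()
        b -= b.mean()
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    return max(TEMPLATES, key=lambda k: score(image, TEMPLATES[k]))

# A slightly degraded "T": one pixel flipped, as might occur with noise.
noisy_t = TEMPLATES["T"].copy()
noisy_t[4, 4] = 1
print(match_template(noisy_t))  # prints "T"
```

Because the correlation is computed over the whole image, the match tolerates small amounts of noise, but, as the essay notes, it would fail under the changes of viewpoint and lighting that feature-based accounts try to address.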
Moreover, the operation that allows people to integrate features into a perceived object is visual search, which consists of pre-attentional and attentional processing (Treisman and Gelade, 1980). Treisman and Gelade (1980) proposed a two-stage model, the feature-integration theory of attention, to explain the difference between serial and parallel stages of processing. Firstly, it states that basic features such as colour, size and orientation are analysed automatically and early on (pre-attentively), so that a target defined by a single feature should 'pop out' in visual search. Shen, Reingold and Pomplun (2003) demonstrated this using green X's as targets, with red X's and green O's as distractors. They found that displays with an equal number of same-colour and same-shape distractors produced longer response times than displays with few same-colour or few same-shape distractors. Similarly, the pop-out effect can be applied to recognising faces. Stalans and Wedding (1985) measured the time it took people to recognise faces showing anger and happiness. They found that positive facial expressions are more distinct and less ambiguous than negative expressions, allowing participants to process positive emotions faster and respond more quickly.
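The serial versus parallel contrast at the heart of feature-integration theory can be sketched as a toy model: a pop-out target is found in roughly constant time regardless of display size, whereas a conjunction target adds a cost per item inspected. The baseline and per-item values below are illustrative assumptions, not data from Treisman and Gelade (1980):

```python
# Toy prediction of search response times under feature-integration theory.
BASE_RT_MS = 400      # assumed baseline response time (ms)
SERIAL_COST_MS = 25   # assumed cost per item inspected in serial search (ms)

def predicted_rt(set_size, conjunction):
    """Predicted response time for a display of `set_size` items."""
    if not conjunction:
        # Pre-attentive, parallel stage: the target pops out,
        # so response time is flat across set sizes.
        return BASE_RT_MS
    # Attentive, serial stage: on average half the items are
    # inspected before the target is found.
    return BASE_RT_MS + SERIAL_COST_MS * set_size / 2

for n in (4, 16):
    print(n, predicted_rt(n, conjunction=False), predicted_rt(n, conjunction=True))
```

The flat function for feature search and the linearly increasing function for conjunction search mirror the qualitative pattern the theory predicts for the two processing stages.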
Secondly, the feature-integration theory states that objects are identified at a later, separate processing stage, because focused attention is required to combine features into a perceived whole (Treisman and Gelade, 1980). To combine the features into an object, attention selects within a 'master map' of locations that records where all the features have been detected. When attention is focused on a location in the master map, it allows automatic retrieval of whatever features are active in that area. Associations are then made between the object and prior knowledge, resulting in identification of the object (Treisman, 1988). Furthermore, to support the feature-integration theory (FIT), researchers refer to patients with Balint's syndrome: patients with bilateral parietal damage show severe binding problems and difficulty in seeing more than one object at a time (Robertson, Treisman, Friedman-Hill and Grabowecky, 1997). For example, patient RM showed impaired feature binding, which is necessary for visual search for a target defined by a conjunction of features. Accordingly, his search for the absence of a feature (O's among Q's) was impaired, whereas search for its presence (Q's among O's) was not. RM also could not access spatial information, suggesting that binding features into a whole object relies on a late stage at which spatial information becomes accessible (Robertson et al., 1997).
Spatial attention influences processing in the visual system by allowing humans to prioritise an area within the visual field. It can be described by the spotlight metaphor, which claims that visual attention is limited in spatial extent and that anything outside the spotlight will not be processed (Posner, 1980). Stark, Grafman and Fertig (1997) studied a restricted spotlight of attention in visual object recognition using a patient with impaired visuospatial skills. They found that patient NJ identified objects on the basis of a smaller component, such as calling a picture of a flower a 'leaf'. This suggests that NJ's spotlight of attention was narrowed, so that only those parts of the object that fell within his spotlight were identified. To clarify this, NJ was asked to identify sixty-two line drawings, presented in small and large sizes. NJ identified more of the small items than their larger counterparts, confirming a narrowed area over which visual processing can take place.
The human visual system can recognise objects based on the spatial organisation of three-dimensional shapes. Marr and Nishihara (1978) established a theory of object recognition involving a means of identifying the natural axes of a shape in its image. They proposed that shapes have natural coordinate systems and that a shape's axes are defined by elongation, symmetry and rotation. Axis information provides a means by which the parts of an object can be related to one another across viewing angles, and each object axis defines a generalised cone (Marr and Nishihara, 1978). Generalised cones can have different lengths, widths and arrangements, which combine to form different types of objects. Therefore, once an object's axes are recognised, the object can be placed into a category (Marr and Nishihara, 1978). However, if the axes in the image are hidden, it may be difficult to obtain any information. Marr (1977) suggested a solution, noting that the axis of a generalised cone may still be detectable depending on how occluded it is.
In addition, Biederman (1987) proposed the Recognition-by-Components (RBC) theory. This presented the idea that generalised-cone components, called geons, can be recovered from properties of edges in two-dimensional images. These properties include curvature, collinearity and symmetry, all of which are invariant across viewpoints. The theory states that representation arises from combinations of these geons; therefore, if an arrangement of two or three geons can be recovered from the input, objects can be recognised even when they are partly hidden or rotated (Biederman, 1987). Because this is a viewpoint-independent theory, the issue is how we recognise patterns that lack clear axes, such as clouds. Moreover, some objects, such as faces, are recognised more effectively when shown in a particular orientation, relying on viewpoint-dependent representations (Hill, Schyns and Akamatsu, 1997).
To recognise a face from a viewpoint-dependent representation, people may view the face holistically. Tanaka and Farah (1993) proposed that information about the features of a face and their configuration (the spatial distances between features) are combined in the face representation. They conducted a study using two configurations: a face with the eyes close together and one with the eyes far apart. After participants had studied the faces, they were tested on their recognition of the features, which were shown in isolation, in a new face configuration, and in the old face configuration. Subjects recognised features best when presented in the old face configuration and worst when the features were isolated. These findings correspond with Diamond and Carey's (1986) theory that object configurations can be classified by first-order and second-order relational properties. First-order properties are categorical relations between object features, such as ''the eyes are above the nose''. Second-order properties specify the metric distances underlying first-order properties, such as the mouth below the nose being ''wide''. According to the holistic theory, participants should recognise facial features better in the old-configuration condition because second-order relational information is preserved, whereas in the new-configuration condition it is changed (Tanaka and Farah, 1993).
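The notion of second-order relational properties can be made concrete by representing a face configuration as the set of distances between its features. In the sketch below, the landmark names and coordinates are hypothetical, chosen only to mirror the close-eyes versus far-eyes manipulation described above:

```python
import math

def configural_vector(landmarks):
    """Encode a configuration as all pairwise inter-feature distances
    (a simple stand-in for second-order relational properties)."""
    names = sorted(landmarks)
    return [math.dist(landmarks[a], landmarks[b])
            for i, a in enumerate(names) for b in names[i + 1:]]

def configural_difference(f1, f2):
    """Sum of absolute differences between two configural vectors."""
    return sum(abs(a - b) for a, b in zip(configural_vector(f1),
                                          configural_vector(f2)))

# Same features (first-order relations identical), different spacing.
face_close = {"left_eye": (40, 30), "right_eye": (60, 30),
              "nose": (50, 55), "mouth": (50, 75)}
face_far = {"left_eye": (30, 30), "right_eye": (70, 30),
            "nose": (50, 55), "mouth": (50, 75)}

print(configural_difference(face_close, face_far))
```

Both hypothetical faces share identical first-order relations (eyes above nose, nose above mouth), yet their configural vectors differ, which is precisely the information the old-configuration condition preserves and the new-configuration condition disrupts.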
Furthermore, configural cues can be disrupted by the face inversion effect, which supports the view that face recognition is viewpoint-dependent. The face inversion effect results from the disruption of configural processing, which is sensitive to face orientation (Freire, Lee and Symons, 2000). Freire, Lee and Symons (2000) manipulated a photograph of a face by altering the locations of the eyes and mouth; in a second experiment they manipulated the same photograph by replacing the main facial features with different ones. Participants were asked to discriminate between the face images in upright or inverted displays. The first experiment showed that participants could discriminate upright faces differing only in configural information, but when the same faces were inverted, discrimination was poor. In contrast, the second experiment showed that inverting the faces had no effect on discrimination when the faces differed mainly in featural information. This supports the idea that featural information is less affected by inversion than configural information, and suggests that faces are processed in a more configural manner than objects, which can often be distinguished by the identity of their parts.
Research indicates that the human visual system recognises faces and objects via two different processes. For example, functional brain imaging studies have proposed that there are cortical areas specialised for face perception. Specifically, part of the fusiform gyrus in the posterior temporal lobe shows stronger activation during face perception tasks than during tasks involving the perception of other objects (Sergent, Ohta and MacDonald, 1992). However, other studies have suggested that the face-selective and object-selective parts of the ventral visual pathway are not single regions but multiple regions acting together to recognise faces and objects. This was proposed by Haxby, Ungerleider, Clark, Schouten, Hoffman and Martin (1999), who used fMRI to investigate the neural effect of inversion on face perception. They tested whether inverted faces stimulate activity in the cortical regions most responsive to nonface objects, comparing faces with another class of objects (houses).
The results showed that inverted faces increased activity in the parts of the ventral object-vision pathway that are more responsive to nonface objects. Face inversion did not diminish the response to faces in face-selective regions, although inverted faces did not engage face perception mechanisms as effectively. Overall, this demonstrates that the neural systems for object perception are recruited to facilitate the perception of inverted faces, while inverted faces still engage the face perception system (Haxby et al., 1999). This account is supported by patients with a selective impairment of face recognition (prosopagnosia), whose recognition of inverted faces can be normal, suggesting that inverted face perception may be handled by their intact object perception mechanisms (Haxby et al., 1999).
Patients with prosopagnosia are impaired at face recognition but unimpaired at object recognition. This may be because face recognition demands finer visual discrimination than object recognition. Farah, Levinson and Klein (1995) carried out two experiments on face perception and within-category discrimination in prosopagnosia. In both experiments, the object recognition task involved discriminating different exemplars of a generic, basic-level object category. In the first experiment, each correct item was paired with a foil from the same generic category. In the second, recognition of many exemplars of a single category (eyeglass frames) was contrasted with recognition of an equivalent number of faces. The first experiment showed that the prosopagnosic patient LH recognised objects better than faces, and recognised the objects better than the normal participants did. This supports the idea that faces and objects are not recognised by the same system, and instead suggests the existence of a mechanism dedicated to facial recognition. In the second experiment, normal subjects found face recognition easier than eyeglass-frame recognition, whereas LH was more impaired at face recognition than at eyeglass-frame recognition.
In conclusion, there is no agreed answer as to whether faces and objects are recognised by the same process or by two different processes, although much of the evidence points towards two separate processes. For example, some patients have face agnosia without object agnosia, and others have object agnosia without face agnosia; this dissociation demonstrates two forms of processing, since each can function without the other. Furthermore, normal subjects are better at recognising upright faces than inverted faces, yet a patient with prosopagnosia can be better at recognising inverted faces than upright ones, providing evidence that humans have a neurologically localised module for upright face recognition. Overall, compared with face recognition, object recognition places greater emphasis on object constancy, generalised cones and geon theory. Nevertheless, under the right circumstances, faces can be recognised through routes to object constancy such as template matching and critical feature detection. Therefore, while it is generally accepted that objects and faces are recognised by different processes, it remains unclear whether these different processes are served by different systems.
References
- Biederman, I. (1987). Recognition-by-components: a theory of human image understanding. Psychological review, 94(2), 115.
- Brunelli, R., & Poggio, T. (1993). Face recognition: Features versus templates. IEEE transactions on pattern analysis and machine intelligence, 15(10), 1042-1052.
- Diamond, R., & Carey, S. (1986). Why faces are and are not special: an effect of expertise. Journal of Experimental Psychology: General, 115(2), 107.
- DiCarlo, J. J., & Cox, D. D. (2007). Untangling invariant object recognition. Trends in cognitive sciences, 11(8), 333-341.
- Ellis, A. W., & Young, A. W. (2013). Human cognitive neuropsychology: A textbook with readings. Psychology Press.
- Farah, M. J., Levinson, K. L., & Klein, K. L. (1995). Face perception and within-category discrimination in prosopagnosia. Neuropsychologia, 33(6), 661-674.
- Freire, A., Lee, K., & Symons, L. A. (2000). The face-inversion effect as a deficit in the encoding of configural information: Direct evidence. Perception, 29(2), 159-170.
- Gazzaniga, M., Ivry, R., & Mangun, G. (2016). Cognitive neuroscience. New York: W W Norton.
- Haxby, J. V., Ungerleider, L. G., Clark, V. P., Schouten, J. L., Hoffman, E. A., & Martin, A. (1999). The effect of face inversion on activity in human neural systems for face and object perception. Neuron, 22(1), 189-199.
- Hill, H., Schyns, P. G., & Akamatsu, S. (1997). Information and viewpoint dependence in face recognition. Cognition, 62(2), 201-222.
- Marr, D. (1977). Analysis of occluding contour. Proc. R. Soc. Lond. B, 197(1129), 441-475.
- Marr, D., & Nishihara, H. K. (1978). Representation and recognition of the spatial organization of three-dimensional shapes. Proc. R. Soc. Lond. B, 200(1140), 269-294.
- Pinker, S. (1984). Visual cognition: An introduction. Cognition, 18(1-3), 1-63. http://dx.doi.org/10.1016/0010-0277(84)90021-0
- Posner, M. I. (1980). Orienting of attention. Quarterly journal of experimental psychology, 32(1), 3-25.
- Robertson, L., Treisman, A., Friedman-Hill, S., & Grabowecky, M. (1997). The interaction of spatial and object pathways: Evidence from Balint’s syndrome. Journal of Cognitive Neuroscience, 9(3), 295-317.
- Sergent, J., Ohta, S., & MacDonald, B. (1992). Functional neuroanatomy of face and object processing: a positron emission tomography study. Brain, 115(1), 15-36.
- Shen, J., Reingold, E. M., & Pomplun, M. (2003). Guidance of eye movements during conjunctive visual search: the distractor-ratio effect. Canadian Journal of Experimental Psychology/Revue canadienne de psychologie expérimentale, 57(2), 76.
- Stalans, L., & Wedding, D. (1985). Superiority of the left hemisphere in the recognition of emotional faces. International Journal of Neuroscience, 25(3-4), 219-223.
- Stark, M. E., Grafman, J., & Fertig, E. (1997). A restricted 'spotlight' of attention in visual object recognition. Neuropsychologia, 35(9), 1233-1249.
- Tanaka, J. W., & Farah, M. J. (1993). Parts and wholes in face recognition. The Quarterly journal of experimental psychology, 46(2), 225-245.
- Treisman, A. (1988). Features and objects: The fourteenth Bartlett memorial lecture. The quarterly journal of experimental psychology, 40(2), 201-237.
- Treisman, A. M. (1969). Strategies and models of selective attention. Psychological review, 76(3), 282.
- Treisman, A. M., & Gelade, G. (1980). A feature-integration theory of attention. Cognitive psychology, 12(1), 97-136.
- Vallar, G. (1998). Spatial hemineglect in humans. Trends in cognitive sciences, 2(3), 87-97.
- Vecera, S. P., & Rizzo, M. (2003). Spatial attention: normal processes and their breakdown. Neurologic clinics, 21(3), 575-607.
- Warrington, E. K., & Taylor, A. M. (1978). Two categorical stages of object recognition. Perception, 7(6), 695-705.
- Yantis, S. (1998). Control of visual attention. Attention, 1(1), 223-256.