Europe's Journal of Psychology | ejop.psychopen.eu | ISSN 1841-0413 | Research Reports

Facial Memory: The Role of Pre-Existing Knowledge in Face Processing and Recognition

Faces are visual stimuli full of information. The information we can extract from a face depends on our familiarity with it: the more familiar a face is, the more information can be extracted from it. The present article reviews the role that pre-existing knowledge of a face plays in its processing. We focus on behavioral, electrophysiological, and neuroimaging evidence. The influence of familiarity in the early stages (attention, perception, and working memory) and in the later stages (pre-semantic and semantic knowledge) of processing is discussed. The differences in brain anatomy for familiar and unfamiliar faces are also considered. As will be shown, experimental data seem to support the view that familiarity can affect even the earliest stages of recognition.

Recognizing faces is one of the most common cognitive processes we carry out in our daily life. The efficiency of this process allows us to interact with others and therefore survive in our environment. It is so quick and apparently effortless that most people rarely consider either its importance or its complexity.
Imagine that you are walking along the street and you see somebody. With a quick glance at their face, you are able to extract information such as the person's gender and emotional state. Moreover, you can say whether the face is familiar or not and, if it is, you may retrieve additional information such as the person's interests and hobbies, their occupation, or their name. In this sense, the information we can extract from a face varies depending on our experience and knowledge of that particular face.
There are additional differences between familiar and unfamiliar faces. If we see a famous person robbing a bank, we will probably identify that person correctly in a subsequent eyewitness identification line-up. However, as previous research has shown, several problems arise when trying to identify a non-famous person (Loftus, 1996).
Thus it seems that we are able to remember familiar people better than unfamiliar people. Some researchers consider that the processes we use to explore familiar and unfamiliar faces are different in nature (Buttle & Raymond, 2003; Megreya & Burton, 2006). According to them, when processing unfamiliar faces we use a more featural strategy, which involves the analysis of individual facial features such as the nose, eyes, etc. On the other hand, to process familiar faces we rely more on a configural or holistic strategy, which involves the analysis of the face as a whole (Farah, Wilson, Drain, & Tanaka, 1995). This argument is supported by the fact that we are able to recognize familiar faces in different poses or expressions, whereas we struggle to do so with faces that have been encountered only briefly. Additionally, it has been suggested that for the recognition of familiar faces, internal features (such as the nose, eyes, and mouth) are more important than external features (such as hairstyle) (Ellis, Shepherd, & Davies, 1979). By contrast, Ellis et al. (1979) found that for unfamiliar faces both external and internal features are of equal importance (although see Bruce et al., 1999).
It is important to distinguish between face attention and perception on the one hand and face recognition and identification on the other. Face attention and perception constitute the first stages of face processing. Face attention refers to the way a face grabs attention, whereas face perception consists of perceiving a visual object as a face.
As with other visual objects, attention and perception of faces would determine their storage in working memory.
For its part, face recognition refers to recognizing a previously seen face. It does not require semantic information about the face, but simply familiarity with it. Finally, face identification refers to the activation of episodic and semantic information associated with the face. Face recognition and identification constitute the later stages of face processing.
The present review tries to shed light on the role that pre-existing knowledge of a face plays in its recognition.
To accomplish this, we focus on behavioral, electrophysiological, and neuroimaging evidence. We first present two models of face processing. The assumptions these models make about the role of pre-existing knowledge in face processing are discussed. In the following section, the influence of familiarity in the early stages of processing (i.e., attention, perception, and working memory) is presented. In the third section, this influence is considered in the later stages (pre-semantic and semantic knowledge) of face processing. In the fourth section, the brain areas involved in processing familiar and unfamiliar faces are reviewed. Finally, some conclusions are drawn.

Two Models of Face Recognition
In this section, we discuss two classic models of face processing: the Bruce and Young (1986) model and the interactive activation and competition model (Burton, Bruce, & Johnston, 1990). These two models make different predictions about the role of previous knowledge in face processing.

The Bruce and Young Model of Face Processing
The Bruce and Young (1986) model, which is the most widely cited face processing model, suggests that previous experience with a face influences the middle and later stages of recognition. This model is depicted in Figure 1.
The model is composed of several modules, each of them representing a functionally separate component (Bruce & Young, 1986). When we see a face, we first form different representations of it. The structural encoding module builds these representations. According to the authors, view-centered descriptions are used to analyze facial speech (Campbell, 2011) and facial expressions (Straube, Mothes-Lasch, & Miltner, 2011). On the other hand, expression-independent descriptions are more abstract representations which are used for face recognition.
Both the view-centered and the expression-independent descriptions are linked to directed visual processing.
As noted above, the expression-independent descriptions are used in face recognition. This module provides information to the face recognition units (FRUs). This system could be considered the mental "lexicon" for faces, because it stores the faces we know, each of them represented by one unit (Bruce & Young, 1986). FRUs are activated by any particular view of the face, and the more activation there is, the more familiar the face becomes. However, the role that FRUs play in recognition is to tell us that the face we are looking at is familiar, but nothing more than that (Hole & Bourne, 2010).
FRUs have bidirectional connections with person identity nodes (PINs), which contain semantic information about the person (e.g., occupation, interests, etc.). As with FRUs, each person has his or her own PIN. The bidirectional connections allow us to represent a face in our memory, for example, when a person gives us some semantic details about somebody. In this sense, PINs can be accessed from faces, but also from voices, names, or any particular pieces of information. On the other hand, names can only be accessed through PINs. Lastly, the cognitive system refers to further processes which may play a role in face recognition, such as associative or episodic information.
How does previous knowledge influence the processing of faces? As can be followed from the different connections in the model, previous experience with a face influences the middle and later stages of recognition, that is, from the FRUs onwards. In this sense, this experience would not have any effect on the first stage of face processing, that is, on structural encoding.
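The strictly feed-forward character of the model can be illustrated with a minimal sketch. Everything here is hypothetical and invented purely for illustration (the stored person, the `Percept` type, the function names): each stage is a pure function of the output of the previous one, so nothing known at the FRU or PIN level can flow back into structural encoding.

```python
from dataclasses import dataclass

@dataclass
class Percept:
    # Stands in for the expression-independent description
    # built by structural encoding.
    face_id: str

# Hypothetical long-term store: known faces and their semantics.
KNOWN_FACES = {"alice": {"occupation": "teacher"}}

def structural_encoding(image):
    # Builds the perceptual description. Crucially, it receives
    # no input from later stages: knowledge cannot flow back here.
    return Percept(face_id=image)

def fru(percept):
    # Face recognition unit: signals familiarity only.
    return percept.face_id in KNOWN_FACES

def pin(percept):
    # Person identity node: semantic information, reached after the FRU.
    return KNOWN_FACES.get(percept.face_id)

p = structural_encoding("alice")
print(fru(p), pin(p))                        # familiar: semantics retrieved
print(fru(structural_encoding("stranger")))  # unfamiliar: False, no semantics
```

Because `structural_encoding` never receives input from `fru` or `pin`, familiarity cannot modulate the perceptual description under this architecture, which is precisely the assumption the behavioral and ERP evidence reviewed later puts under pressure.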

The Interactive Activation and Competition (IAC) Model
The IAC model (Burton et al., 1990) can be considered a later development of the Bruce and Young model. In fact, both models make similar predictions, but the IAC model is more specific about the operation of, and the relationships between, the different components. The main frame of the model is depicted in Figure 2.
The main constituents of the model are FRUs, PINs, and semantic information units (SIUs). All these components are connected to each other, and the model follows a connectionist logic. Units within a pool compete with each other, so when one of these units is activated, the rest are inhibited. The FRUs have the same function as in the original Bruce and Young model, so when we see a familiar face, the FRU for that particular face is activated. This activates the PIN for that person, although PINs can also be activated by other information such as names (name input units, NIUs; Hole & Bourne, 2010). If a PIN reaches a certain level of activation, the face is categorized as familiar. In this sense, unlike the Bruce and Young model, familiarity decisions are considered to depend on the activation of PINs (Burton et al., 1990; Herzmann & Sommer, 2010). Lastly, SIUs contain semantic information such as interests, occupation, nationality, etc.
Unlike the Bruce and Young model, the IAC model assumes that top-down feedback between subsequent stages aids processing throughout the whole procedure. In this sense, person identity or even names can influence the first stage of processing, that is, structural encoding (Herzmann & Sommer, 2010).
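These interactive dynamics can be sketched as a toy simulation. This is a rough illustration only: the pool sizes, parameter values, and update rule below are invented for this sketch and are not Burton et al.'s actual implementation. Units within each pool inhibit one another, connected pools excite one another, and PIN activation feeds back to the FRUs, so identity-level information reaches the earliest layer.

```python
import numpy as np

N = 2  # two known persons (hypothetical)
rate, decay, inhib, excite = 0.1, 0.1, 0.2, 0.5  # illustrative parameters

def step(act, ext):
    """One update: each unit gets its external/between-pool input,
    minus within-pool inhibition from the other units, minus decay."""
    new = {}
    for pool, a in act.items():
        inhibition = inhib * (a.sum() - a)   # competition within the pool
        net = ext[pool] - inhibition
        new[pool] = np.clip(a + rate * (net - decay * a), 0.0, 1.0)
    return new

# Three pools, all units initially at rest.
act = {p: np.zeros(N) for p in ("FRU", "PIN", "SIU")}

# Seeing person 0's face drives FRU 0. Activation cascades
# FRU -> PIN -> SIU, while PIN -> FRU feedback flows top-down.
for _ in range(50):
    ext = {
        "FRU": np.array([1.0, 0.0]) + excite * act["PIN"],  # face + feedback
        "PIN": excite * (act["FRU"] + act["SIU"]),
        "SIU": excite * act["PIN"],
    }
    act = step(act, ext)

print(act["PIN"])  # person 0's PIN dominates; person 1's is suppressed
```

After the network settles, the PIN for the seen person wins the within-pool competition while the rival PIN stays inactive, and the same PIN-to-FRU feedback path is what allows familiarity and identity to influence earlier processing stages.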

Familiarity and Early Stages of the Processing of Faces
Attention, perception, and working memory constitute these early stages. As we shall see, the evidence demonstrating the influence of familiarity in the early stages of face processing comes from behavioral, electrophysiological, and neuroimaging data. One paradigm that allows researchers to study the influence of familiarity in these stages is the attentional blink paradigm. In this paradigm, stimuli are presented rapidly one at a time in the centre of the screen, and participants have to identify two subsequent targets (Raymond, Shapiro, & Arnell, 1992).
The attentional blink effect refers to impaired identification of the second stimulus when the lag with the first one is less than 500 msec (Raymond et al., 1992). For example, Jackson and Raymond (2006) used an attentional blink paradigm to study the role of attention in face identification. They presented to their participants a rapid series of faces with one abstract pattern image embedded. Participants had to detect the abstract pattern image and one predefined face. Jackson and Raymond (2006) found attentional blink effects with unfamiliar faces but not with famous (highly familiar) faces. These results suggest that we need fewer attentional resources to process familiar faces.
Tong and Nakayama (1999) used a visual search procedure in which participants had to detect a specific face among a heterogeneous array of other faces. The authors showed that when the participant's own face was used, participants were able to detect it more rapidly than a face with which they were unfamiliar. This study suggests that the processing of highly familiar faces, such as one's own face, might be faster than that of unfamiliar faces. One problem with Tong and Nakayama's study is that they required their participants to make an explicit recognition, so it is not clear whether the facilitation with one's own face is due to an improvement in perceptual processing or to explicit recognition (Buttle & Raymond, 2003). To explore this issue, Buttle and Raymond (2003) presented a pair of faces which were rapidly replaced by another display with two faces. One of the faces did not vary between displays, but the other one did. They observed that when the change between displays involved a famous face, performance was significantly better, but only when the change was presented in the left hemispace. Note that the task was to detect the change, ruling out the possibility that the facilitation in processing was due to explicit recognition.
This "superfamiliarity" effect supports the idea that highly familiar faces produce an improvement in the early stages of recognition, in this case, in perceptual processing.
Further research seems to indicate that familiarity with faces aids their preservation in working memory (Jackson & Raymond, 2008; although see Pashler, 1988). Jackson and Raymond (2008) presented to their participants a memory array of faces for a short period of time, followed by a comparison test array. Participants had to decide whether the comparison test array was the same as or different from the memory array. Concurrently with this task, participants had to repeat a couple of digits in order to suppress the use of verbal working memory. The authors showed that working memory performance was better for famous than for unfamiliar faces. However, it is possible that in some situations working memory performance for unfamiliar faces may be better than for familiar faces. As noted in the introduction, unfamiliar face recognition is highly affected by changes in pose or expression, so it is more difficult to match two photographs of an unfamiliar person than two photographs of a familiar person (Bruce et al., 1999). On the other hand, some researchers have shown that visual similarity is inversely proportional to retention in working memory (e.g., Logie, Della Sala, Wynn, & Baddeley, 2000). In this sense, the working memory span for different pictures of the same unfamiliar face should be bigger than for different pictures of the same familiar face.
Several brain event-related potentials (ERPs) have been related to different stages of face recognition (for a review see Schweinberger & Burton, 2003): (1) the N170, which is involved in the initial detection of a face; (2) the N250r, which is related to face recognition; and (3) the N400, which reflects access to semantic knowledge about a person. ERP studies have also demonstrated the influence of familiarity in the early stages of face processing (Caharel, Fiori, Bernard, Lalonde, & Rebai, 2006; Herzmann & Sommer, 2010). Some researchers have shown that the occipito-temporal N170 component is linked to late stages of structural encoding (Eimer, 2000). This component, although initially considered to be specific to faces, is evoked by any kind of visual object; however, it is larger for faces (Herzmann & Sommer, 2010). It was also considered to be familiarity-independent, but Caharel et al. (2006), using personally familiar faces, showed an effect of face familiarity on the N170 component: N170 amplitude was larger for learned and non-studied famous faces than for new unfamiliar faces. Nessler, Mecklinger, and Penney (2005) presented familiar and unfamiliar faces to their participants, who had to decide whether the face displayed was famous or not. They showed that the differences in ERP waves between famous and non-famous faces started around 200 milliseconds. Moreover, the differences in ERP waves between the first and second presentations of non-famous faces, which seem to indicate perceptual fluency, emerged after 300 milliseconds. These results suggest that familiarity has an influence before structural encoding processes are completed (Nessler et al., 2005).
In summary, behavioral and ERP data seem to indicate that familiarity affects the early stages of face processing; that is, later stages of processing exert a strong influence on attentional and perceptual processes. The Bruce and Young (1986) model assumes independence between the different modules, so these data cannot be explained by that model. On the other hand, they are better accommodated by the IAC model (Burton et al., 1990). This model assumes interactions between the different stages of face processing, so top-down feedback aids during the whole process. In this sense, familiarity and identity influence the early stages of face processing, such as attention and perception.

Familiarity and Late Stages of the Processing of Faces
In this section, the influence of familiarity on pre-semantic and semantic knowledge is discussed, drawing on behavioral and ERP evidence. Pre-semantic and semantic knowledge constitute the later stages of face processing. Pre-semantic knowledge involves knowing that the face being looked at is familiar, but nothing more. In terms of face processing models, it coincides with the activation of FRUs (Bruce & Young, 1986) or PINs (Burton et al., 1990). On the other hand, semantic information refers to conceptual information about the person, such as occupation, interests, etc. In terms of these models, it coincides with the activation of PINs (Bruce & Young, 1986) or SIUs (Burton et al., 1990). Unfamiliar faces have scarce semantic information.
If an unfamiliar face is presented, the semantic information for that face cannot be accessed, simply because it does not exist. On the other hand, for familiar faces, depending on their degree of familiarity, pre-semantic (FRUs) and/or semantic information (SIUs) will be accessed. As mentioned previously, the IAC model was created to specify in more detail how FRUs and PINs work. Due to this specification, data concerning these two modules are, in general terms, better explained by this model than by Bruce and Young's. Boehm, Klostermann, Sommer, and Paller (2006) presented upright and inverted familiar (famous) and unfamiliar faces to their participants, who had to decide whether each face had been presented before. Boehm et al. (2006) showed that inverted familiar faces primed upright familiar faces, whereas inverted unfamiliar faces did not prime upright unfamiliar faces. These results show that familiar inverted faces are able to access FRUs, so the inversion effect must affect structural encoding. If so, the inversion effect should have an effect on the N170 ERP component, which, as was said before, is considered to be linked with structural encoding, for both famous and non-famous faces, but not on later components of recognition such as the N250r or the N400 (see below). To our knowledge, this issue has not been explored.

Burton, Kelly, and Bruce (1998) have shown that faces prime names, but only in semantic tasks. In their first experiment, participants were presented in the first phase with different faces and had to indicate whether each face was familiar. In a second phase, participants were presented with a series of written names, some of which belonged to faces displayed in the first phase. Participants had to indicate whether the name was familiar.
No repetition priming effects were found in the second phase. These results are easily explained by the IAC model.
Face familiarity would strengthen the link between FRUs and PINs. However, this has no consequence for recognizing the name, because the link between NIUs and PINs has not been strengthened in the first phase. Burton et al. (1998) conducted a second experiment. The design was exactly the same, but participants made a semantic decision in both phases. In this case, names were primed by faces. These results are also explained by the IAC model: retrieving semantic information from a face requires the activation of the FRU, PIN, and SIUs for that particular face. In this sense, the links from the FRU to the PIN and from the PIN to the SIUs are strengthened. In the second phase, retrieving semantic information from the name requires the activation of the NIU, PIN, and SIUs for that specific name. Since the links from PINs to SIUs had already been strengthened by the faces in the first phase, priming occurs. These results have two important implications. In the first place, semantic priming for faces, but not episodic priming, seems to be cross-domain (i.e., face to name; see Ellis, Young, & Flude, 1990). In the second place, previous experience with a face aids the processing of semantic and name information.
Some ERP components have been linked to these late stages of recognition. One of them is the N250r, which appears 250-300 milliseconds after stimulus onset. This component consists of an increase in positivity at frontal electrodes and an increase in negativity at temporal sites. It seems to be related to the activation of pre-semantic information, because it is smaller or absent for unfamiliar faces (Herzmann & Sommer, 2010; Schweinberger, Pickering, Jentzsch, Burton, & Kaufmann, 2002). Tanaka, Curran, Porterfield, and Collins (2006) presented two faces to their participants: the participant's own face and that of one unfamiliar person (always the same). They observed that a clear N250r was evident at the beginning of the experiment for the participant's own face. In the second half of the experiment, the N250r also appeared for the unfamiliar (now familiar) person. This result shows that this component appears for newly learned faces. Some researchers have shown that the N250r does not appear in conditions of semantic priming (i.e., Hillary Clinton's face preceding Bill Clinton's), giving additional evidence that this component is associated with pre-semantic activation, but not with additional semantic information (Schweinberger, 1996).
Unlike the N250r, the N400 component does reflect semantic knowledge about the person. It is a domain-general component, which is elicited by any kind of stimulus with semantic information, such as names and faces (Schweinberger, 1996). For this reason, this component is considered to reflect the activation of PINs (Bruce & Young, 1986) or SIUs (Burton et al., 1990). It appears as increased centro-parietal positivity 300-600 milliseconds after stimulus onset. As might be expected given the amount of information provided, it is larger for familiar than for unfamiliar faces (Paller et al., 2000; Schweinberger et al., 2002). For example, Paller et al. (2000) presented faces with and without semantic information to their participants. In a second phase, participants had to decide whether the face displayed had been presented in the first phase. The authors observed that faces learned with semantic information showed larger N400 potentials than faces learned without semantic information.
Further components, more related to explicit recognition memory processes, have been specified. These components are associated with old/new effects in recognition paradigms (see Yonelinas, 2002) and, although they do not say anything about face processing mechanisms per se, they might inform us about the processes humans use to retrieve faces. The FN400 component is considered to be related to familiarity processes; that is, it is not influenced by recollection of the study episode (Rugg & Curran, 2007). It appears around 300-350 milliseconds at anterior locations. On the other hand, the late positive component (LP) is found around 400-800 milliseconds. This component is thought to reflect recollection processes. Curran and Hancock (2007) presented unfamiliar faces to their participants together with information about their occupations.
They observed that the FN400 effect was not affected by the occupation, but the LP was larger for faces remembered with their occupation. Herzmann and Sommer (2010) replicated these results, but they also found an earlier old/new effect, starting around 250 milliseconds, for familiar faces that was independent of the amount of biographical facts.
In conclusion, behavioral and ERP studies show an influence of previous knowledge on the pre-semantic and semantic stages of recognition. Moreover, it seems that either familiarity or recollection processes play an important role in retrieval, depending on the amount of biographical knowledge associated with the face.

Unfamiliar and Familiar Faces in the Brain
Among other studies, the aforementioned Buttle and Raymond (2003) study informs us about the lateralization of face processing, while ERP studies provide information about the time course of face processing. However, neither says anything about the different localizations of the information associated with familiar and unfamiliar faces.
Neuroimaging studies have provided information about the different processes involved in face recognition. Some researchers have shown the involvement of two main areas during the encoding of faces: the occipital face area (OFA) and the fusiform face area (FFA). For example, Miall, Gowen, and Tchalenko (2009) presented line drawings of faces to their participants and observed activity in both the OFA and the FFA during encoding. Andrews and Schluppeck (2004) presented Mooney faces to their participants, that is, ambiguous images which can be perceived as faces. Interestingly, they found FFA activation, but only when the stimulus was perceived as a face. This result suggests that the FFA is activated when stimuli are perceived as faces, or when they have similar perceptual features (see Gauthier, Tarr, Anderson, Skudlarski, & Gore, 1999). Additional research has shown that the FFA and the OFA are activated to the same degree by different views of faces (Chen, Kao, & Tyler, 2007).
This result seems to support the idea that both the FFA and the OFA process the invariant properties of faces (Hole & Bourne, 2010); however, this result has not always been replicated (Pourtois, Schwartz, Seghier, Lazeyras, & Vuilleumier, 2005; see Natu & O'Toole, 2011, for a review). On the other hand, Haxby, Hoffman, and Gobbini (2000) showed that these two areas are sensitive to familiarity (although see Pourtois et al., 2005).
Passarotti, Smith, DeLano, and Huang (2007) found a decrease in activation of the FFA for inverted faces, which was more predominant in the right FFA. The inversion effect is considered to disrupt configural processing, leaving featural processing relatively preserved (see Hole & Bourne, 2010). Moreover, as mentioned before, Buttle and Raymond (2003) found superfamiliarity effects only in the left hemispace, which suggests that the two cerebral hemispheres deal with famous and non-famous faces in different ways. This result is supported by Tranel, Damasio, and Damasio (1997), who found that damage to the right anterior temporal cortex produced a semantic recognition impairment for famous faces. Interestingly, this impairment is not observed in people with damage to the left anterior temporal cortex. Note that there is some evidence showing that famous faces rely more on a configural strategy (Megreya & Burton, 2006). All these results are consistent with the hypothesis that the right hemisphere is specialized for configural information, whereas the left seems to be specialized for featural information. Leveroni et al. (2000) studied the different patterns of activation for famous faces and newly learned faces. They found that the recognition of famous faces produced a broader pattern of activation in medial temporal, prefrontal, and lateral regions than the recognition of recently encoded faces. It seems that this differential pattern was due to the additional information available for famous faces (name, occupation, etc.). Interestingly, these areas are also activated during semantic tasks with other, non-face objects (see Leveroni et al., 2000). Elfgren et al. (2006) studied the role of the medial temporal lobe (MTL) in the recognition of famous and unfamiliar faces. They presented famous and non-famous faces to their participants. In one condition, participants had to report the face's gender; in a second condition, they had to decide whether the face was famous. In both conditions, famous faces produced more activation in the MTL. Moreover, when participants were asked to generate the names of the famous faces, increased activity in the anterolateral left hippocampus was observed.
Turk, Rosenblum, Gazzaniga, and Macrae (2005) conducted an interesting study which tried to dissociate identity and semantic information. They presented famous faces to their participants with either the name (identity task) or the occupation (semantic task). Participants had to indicate whether the information provided matched the face. In the identity task, activation was higher in the FFA. Interestingly, no areas were exclusively activated for the semantic decision. These data suggest that semantic information is contained within the face recognition areas. Although participants performed a matching task, it is possible that when a famous face is presented, participants automatically activate the person's name and occupation even when they are not required (in the semantic and identity tasks, respectively), so it may be the case that the pattern of activation observed by Turk et al. (2005) was contaminated by this information. One way to avoid this would be to use a learning phase in which participants would have to learn new faces paired with either a name or an occupation. The test phase would be as in Turk et al. (2005).
In conclusion, several anatomical areas of the brain are involved in the processing of faces. Moreover, it seems that the more information a face carries, the more areas are activated. Interestingly, some results suggest that late stages of processing have an influence on the FFA, which is considered to be involved in the first stages of recognition, although, as previously noted, this result has not always been replicated (see Pourtois et al., 2005).

Conclusions
Although we are able to extract information from an unfamiliar face (sex, age, etc.), the amount of information we can extract from a familiar face is broader, and varies depending on our degree of knowledge about that face.
Cognitive models assign different roles to previous knowledge. Bruce and Young's model assumes that previous experience with a face influences the pre-semantic and semantic levels but not perceptual processes.
Thus, knowing somebody's occupation has no effect on structural encoding. On the other hand, the IAC model proposes an interactive network in which top-down feedback between subsequent stages aids processing during the whole process of recognition. For that reason, semantic information or even names can facilitate the first stages of face processing (see Herzmann & Sommer, 2010).
Behavioral, ERP, and neuroimaging data seem to support the IAC model. In the case of behavioral data, it has been demonstrated that familiar faces are detected faster than unfamiliar ones. ERP data have shown that perceptual ERP components behave differently for familiar and unfamiliar faces. Lastly, neuroimaging data indicate that familiarity has a strong impact on perceptual areas. These data suggest that familiarity can affect even the earliest stages of recognition.
Some evidence seems to support a preference for configural processing with famous faces. This makes sense if we consider that a person is not always seen with the same hairstyle, pose, etc. In this sense, to recognize somebody it is more useful to rely on configural features of the face than on independent facial features. On the other hand, it must be noted that some studies have shown that the inversion effect, which is considered to disrupt configural processing, affects familiar and unfamiliar faces to the same degree (Yarmey, 1971). This may be taken to support the view that familiar and unfamiliar faces rely to the same degree on configural and featural processes.
However, Caharel et al. (2006) did find that familiar faces are affected more by the inversion effect. The difference between the two studies may be that Caharel et al. (2006) used personally familiar faces, which are supposed to be strongly represented in memory (Tong & Nakayama, 1999). Furthermore, as Goldstein and Chance (1980) showed, class familiarity rather than familiarity with a particular face may be the critical factor for the inversion effect. In any case, the evidence suggests that the right hemisphere might be specialized in processing familiar faces, whereas the left hemisphere also processes faces, but in a different way: processing featural rather than configural facial information.
However, some questions remain unanswered. One interesting question is what role featural processes play for familiar faces. Imagine a familiar person who has a beauty mark on the cheek. Because we know that person, that mark becomes representative of them, so it may be an important cue for recognizing them. Interestingly, Greenberg and Goshen-Gottstein (2009) found that when we process our own face, we use featural processing. They did not replicate this finding with celebrities' faces. This result leaves open the possibility that for over-learnt faces, such as one's own face or even personally familiar faces, featural processes would also be used in addition to holistic ones. In this sense, it is possible that holistic and featural processes are complementary rather than opposite, jointly supporting the recognition of familiar faces, so that the more familiar the face, the more both processes would contribute to recognition. Future research should explore this issue.

Figure 1. Bruce and Young's model. Adapted from Bruce and Young (1986).

Figure 2. The IAC model of face recognition. Adapted from Burton et al. (1990).