Europe's Journal of Psychology | ejop.psychopen.eu | ISSN 1841-0413
Research Reports
The Effects of Eye-Closure and "Ear-Closure" on Recall of Visual and Auditory Aspects of a Criminal Event

Previous research has shown that closing the eyes can facilitate recall of semantic and episodic information. Here, two experiments are presented which investigate the theoretical underpinnings of the eye-closure effect and its auditory equivalent, the “ear-closure” effect. In Experiment 1, participants viewed a violent videotaped event and were subsequently interviewed about the event with eyes open or eyes closed. Eye-closure was found to have modality-general benefits on coarse-grain correct responses, but modality-specific effects on fine-grain correct recall and incorrect recall (increasing the former and decreasing the latter). In Experiment 2, participants viewed the same event and were subsequently interviewed about it, either in quiet conditions or while hearing irrelevant speech. Contrary to expectations, irrelevant speech did not significantly impair recall performance. This null finding might be explained by the absence of social interaction during the interview in Experiment 2. In conclusion, eye-closure seems to involve both general and modality-specific processes. The practical implications of the findings are discussed.

found evidence for a combination of general and modality-specific interference caused by visual and auditory distractions in the interview environment. Specifically, they found that meaningless visual or auditory distractions during recall (i.e., Hebrew words popping up on a computer screen or being spoken via headphones, respectively) disrupted recall of both visual and auditory information, compared to conditions in which participants looked at a blank computer screen or had their eyes closed. However, visual distractions had a greater impact on recall of visual information, whereas auditory distractions had a greater impact on recall of auditory information.

Experiment 1: Eye-Closure
Because the findings reported by Perfect et al. (2008) were mixed with regard to the modality issue, Experiment 1 was designed to shed more light on it. To enhance the ecological validity of the research, we examined the effect of eye-closure on recall of a violent event, instead of the mundane events used by Perfect et al. (2008).
Following from Baddeley and Andrade's (2000) findings, we predicted that looking at visual stimuli in the interview environment while trying to retrieve visual information would be more problematic than looking at visual stimuli while trying to retrieve auditory information. In other words, we hypothesized that eye-closure would have greater benefits for recall of visual information than for recall of auditory information.

Method
Participants - Fifty-seven undergraduate psychology students from the University of York participated for course credit or a small monetary reward. One participant who had seen the video before was excluded from the analysis, leaving 56 participants. The sample consisted of 10 males and 46 females, with ages ranging from 18 to 26 (M = 19.75 years, SD = 1.60). All participants were native English speakers and had normal or corrected-to-normal vision and hearing.
Materials - Participants watched a two-and-a-half-minute video clip taken from a TV drama. A crime scene containing moderate violence, blood, and injuries was selected, depicting a man who breaks into a woman's house and tries to cut her with a knife. Sixteen interview questions were drawn up about the event, half addressing uniquely visual aspects and half addressing uniquely auditory aspects of the event (see Appendix). The questions were asked in the order in which the corresponding information appeared in the video clip; hence the two types of questions were interleaved, in a fixed order for all participants.
Procedure - All participants were tested individually in a small laboratory. After providing informed consent, participants watched the video and engaged in a two-minute distracter task involving the backwards spelling of animal names (cf. Perfect et al., 2008). Subsequently, they were interviewed about the video. Twenty-eight participants were assigned to the eyes-open condition and 28 to the eyes-closed condition, using a random sequence generator. Those in the eyes-closed condition were instructed to keep their eyes closed throughout the interview, whereas those in the eyes-open condition received no instructions. If participants in the eyes-closed condition inadvertently opened their eyes (which happened infrequently), they were reminded to keep them closed.
None of the participants in the eyes-open condition spontaneously closed their eyes; all of them were facing the interviewer throughout the interview. Participants were encouraged to ask the interviewer to repeat the question if they did not hear it properly (which happened occasionally in both interview conditions). They were asked to remember as much as possible, but not to guess; a "do not remember" response was allowed. All interviews were audio-taped for subsequent analysis. After completing a demographic information sheet, participants were asked whether they had seen the TV series before, debriefed, and thanked for their participation.
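The balanced random assignment described above (28 participants per condition) can be sketched in a few lines. This helper is illustrative only: the authors report using a random sequence generator, and the function name, labels, and seed below are our own assumptions.

```python
import random

def assign_conditions(participant_ids, conditions=("eyes-open", "eyes-closed"), seed=None):
    """Randomly assign participants to conditions in equal numbers.

    Builds a balanced list of condition labels (e.g., 28 per condition
    for 56 participants), shuffles it, and pairs it with participant IDs.
    """
    participant_ids = list(participant_ids)
    if len(participant_ids) % len(conditions) != 0:
        raise ValueError("group sizes would be unequal")
    per_group = len(participant_ids) // len(conditions)
    labels = [c for c in conditions for _ in range(per_group)]
    rng = random.Random(seed)  # seeded for reproducibility of the sketch
    rng.shuffle(labels)
    return dict(zip(participant_ids, labels))

# 56 hypothetical participant IDs, as in Experiment 1
assignment = assign_conditions(range(1, 57), seed=42)
```

Shuffling a pre-balanced label list (rather than drawing a condition per participant) guarantees exactly equal group sizes.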
Data Coding - The audio-taped interviews were coded blind to interview condition. Responses were coded as correct, incorrect, or omitted ("don't know"), and all correct responses were coded for grain size (cf. Goldsmith, Koriat, & Pansky, 2005; Goldsmith, Koriat, & Weinberg-Eliezer, 2002; Yaniv & Foster, 1995). Thus, a correct response could be classified as coarse-grain (e.g., "the shirt was grey") or fine-grain (e.g., "the shirt had a grey body with dark-blue sleeves"). Examples of each type of response can be found in the Appendix. Incorrect responses were not coded for grain size, due to insufficient data. Ten interviews (160 responses; 18% of the total sample) were randomly selected and coded independently by a second blind coder. Inter-rater reliability (for the decision to score a response as fine-grain correct, coarse-grain correct, incorrect, or omitted) was high, κ = .92, p < .001. The scores of the first coder were retained for the main analysis.
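The inter-rater reliability statistic reported here (κ = .92) is Cohen's kappa computed over the two coders' category labels. A minimal plain-Python sketch, with made-up response labels for illustration:

```python
from collections import Counter

def cohens_kappa(coder1, coder2):
    """Cohen's kappa for two raters labelling the same items.

    kappa = (p_observed - p_expected) / (1 - p_expected), where
    p_expected is the agreement expected by chance, derived from each
    rater's marginal category frequencies.
    """
    assert len(coder1) == len(coder2)
    n = len(coder1)
    p_obs = sum(a == b for a, b in zip(coder1, coder2)) / n
    m1, m2 = Counter(coder1), Counter(coder2)
    categories = set(m1) | set(m2)
    p_exp = sum((m1[c] / n) * (m2[c] / n) for c in categories)
    return (p_obs - p_exp) / (1 - p_exp)

# Hypothetical codes for six responses from two blind coders
c1 = ["fine", "coarse", "incorrect", "omitted", "fine", "coarse"]
c2 = ["fine", "coarse", "incorrect", "omitted", "coarse", "coarse"]
```

With these invented labels the raters agree on five of six items, and the chance-corrected agreement works out to 10/13 (≈ .77), illustrating how kappa discounts raw percentage agreement.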

Results and Discussion
Figure 1 shows the number of fine-grain correct, coarse-grain correct, incorrect, and omitted responses about visual and auditory aspects of the witnessed event. It should be noted that any main effects of modality cannot reasonably be attributed to modality effects per se, because the interview questions differed in terms of content, and some were likely more difficult to answer than others (despite attempts to select questions of equivalent difficulty about visual and auditory aspects of the event). Hence, the main focus of the present analyses is on potential main effects of eye-closure, and on potential interactions between eye-closure and modality.
A 2 (Interview Condition: eyes open, eyes closed) x 2 (Question Modality: visual, auditory) mixed ANOVA on fine-grain correct recall revealed that participants provided significantly more fine-grain correct responses to questions about visual details than to questions about auditory details, F(1, 54) = 12.65, p < .001, η² = .17. There was no significant main effect of eye-closure (F < 1), but there was a significant interaction between eye-closure and modality, F(1, 54) = 6.46, p < .05, η² = .09. Figure 1 shows that, in line with our predictions, eye-closure tended to increase the number of fine-grain correct responses to questions about visual aspects. At the same time, it tended to decrease the number of fine-grain correct responses to questions about auditory aspects.
However, neither of these simple contrasts was significant (both ps > .08).
A corresponding two-way ANOVA on coarse-grain correct recall revealed that participants also provided significantly more coarse-grain correct responses to questions about visual details than to questions about auditory details, F(1, 54) = 13.68, p < .001, η² = .20. Furthermore, participants who closed their eyes provided significantly more coarse-grain correct responses than participants who kept their eyes open, F(1, 54) = 8.28, p < .01, η² = .15, d = .77. There was no significant interaction between eye-closure and modality (F < 1). Thus, in terms of coarse-grain recall, eye-closure had a general rather than a modality-specific effect.
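The 2 x 2 mixed ANOVAs in this section can be reproduced from raw scores with standard balanced-design sums-of-squares arithmetic: one between-subjects factor (interview condition) and one within-subjects factor (question modality, two scores per subject). The sketch below is a generic implementation under those assumptions, not the authors' analysis code, and the scores are invented for illustration.

```python
def mixed_anova_2x2(groups):
    """F-ratios for a 2 (between-subjects) x 2 (within-subjects) mixed ANOVA.

    `groups` maps each between-subjects level to a list of
    (within_level_1, within_level_2) score pairs, one pair per subject.
    Assumes a balanced design (equal n per group).
    """
    levels = sorted(groups)
    a, b = len(levels), 2
    n = len(groups[levels[0]])
    gm = sum(x for g in levels for p in groups[g] for x in p) / (a * n * b)

    group_mean = {g: sum(x for p in groups[g] for x in p) / (n * b) for g in levels}
    within_mean = [sum(groups[g][i][j] for g in levels for i in range(n)) / (a * n)
                   for j in range(b)]
    cell_mean = {(g, j): sum(groups[g][i][j] for i in range(n)) / n
                 for g in levels for j in range(b)}

    # Between-subjects partition: factor A plus subjects-within-groups error
    ss_A = n * b * sum((group_mean[g] - gm) ** 2 for g in levels)
    ss_subj = b * sum((sum(p) / b - group_mean[g]) ** 2
                      for g in levels for p in groups[g])

    # Within-subjects partition: factor B, the A x B interaction, and residual error
    ss_B = a * n * sum((m - gm) ** 2 for m in within_mean)
    ss_AxB = n * sum((cell_mean[g, j] - group_mean[g] - within_mean[j] + gm) ** 2
                     for g in levels for j in range(b))
    ss_err = sum((groups[g][i][j] - sum(groups[g][i]) / b
                  - cell_mean[g, j] + group_mean[g]) ** 2
                 for g in levels for i in range(n) for j in range(b))

    df_between, df_within = a * (n - 1), a * (n - 1) * (b - 1)
    return {
        "ss_A": ss_A, "ss_subj": ss_subj, "ss_B": ss_B,
        "ss_AxB": ss_AxB, "ss_err": ss_err,
        "F_A": (ss_A / (a - 1)) / (ss_subj / df_between),
        "F_B": (ss_B / (b - 1)) / (ss_err / df_within),
        "F_AxB": (ss_AxB / ((a - 1) * (b - 1))) / (ss_err / df_within),
    }

# Invented (visual, auditory) scores for four participants per condition
example = {
    "eyes-open": [(3, 2), (4, 3), (2, 2), (5, 3)],
    "eyes-closed": [(6, 3), (7, 4), (5, 2), (6, 3)],
}
result = mixed_anova_2x2(example)
```

A useful sanity check on such code is that the five sums of squares add up exactly to the total sum of squares about the grand mean, which holds for any balanced data set.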
Experiment 2: "Ear-Closure"

Experiment 1 investigated one part of the modality-specific interference hypothesis, namely, whether eye-closure would improve recall of visual information more than recall of auditory information. Experiment 2 was designed to investigate the auditory counterpart of this hypothesis, namely, whether "ear-closure" would improve recall of auditory information more than recall of visual information. As an auditory equivalent of eye-closure, participants in Experiment 2 were provided with noise-cancelling headphones. This "ear-closure" condition was compared to a condition high in auditory distractions, in which participants were exposed to irrelevant speech in their native language. Due to the absence of an "ears-open" control condition, the experimental design in Experiment 2 was not an exact auditory parallel of the experimental design in Experiment 1. For this reason, the findings will be discussed in terms of the impairment caused by irrelevant speech, rather than in terms of the benefits associated with "ear-closure".
Most previous studies on irrelevant sound have focussed on its impact on short-term recall of simple stimuli.
Although irrelevant speech does not seem to disrupt tasks that rely on phonological processing, such as judgments of rhyme and homophony (Baddeley & Salamé, 1986), it has consistently been found to disrupt tasks that rely on phonological storage, such as recall of visually presented digits (e.g., Colle & Welsh, 1976; Jones, 1993; Jones & Macken, 1995; Salamé & Baddeley, 1982, 1987). However, given that short-term storage relies primarily on a phonological form of coding, whereas long-term storage relies primarily on a semantic form of coding (Baddeley, 1966), we cannot conclude from these findings that irrelevant speech will also disrupt long-term storage.

More recent studies have investigated the impact of irrelevant speech and other types of noise on long-term recall of prose passages. Some of these studies have examined the impact of chronic noise exposure (e.g., Banbury & Berry, 2005; Hygge, Evans, & Bullinger, 2002; Matsui, Stansfeld, Haines, & Head, 2004); others have examined the impact of noise during encoding (Enmarker, 2004; Knez & Hygge, 2002); and yet others have examined the impact of noise during both encoding and retrieval (e.g., Banbury & Berry, 1998, Experiment 2; Hygge, Boman, & Enmarker, 2003). Across these different conditions, noise has been found to disrupt long-term memory for prose passages. Although these studies examined long-term recall, they concerned memory for text passages, not memory for events as in the present experiment. Furthermore, none of the studies investigated the impact of noise presented solely during retrieval. Miles, Jones, and Madden (1991) found that short-term recall of digits was not disrupted when irrelevant speech was presented only during retrieval, but as explained above, this type of recall is not comparable to long-term recall of events.
To the authors' knowledge, the only previous study in which participants were exposed to auditory distractions during recall of an event was conducted by Perfect and colleagues (2011). In their study, participants were interviewed face-to-face about a staged event; some in quiet conditions, while others were exposed to bursts of white noise in-between the interview questions. They found that bursts of white noise significantly increased the number of erroneous responses about the event, thus impairing recall accuracy. Given that irrelevant speech typically disrupts performance even more than white noise does (e.g., Salamé & Baddeley, 1987), we expected that irrelevant speech in the current study would also impair recall of the witnessed event. Perfect et al. did not find evidence for a modality-specific impairment caused by bursts of white noise, suggesting that the noise disrupted general concentration rather than the specific retrieval of auditory information. However, when Baddeley and Andrade (2000) exposed their participants to a more cognitively demanding auditory-verbal task (i.e., the instruction to count from 1 to 10, instead of exposure to bursts of white noise), the retrieval of auditory images from long-term memory was disrupted more than the retrieval of visual images. Therefore, it is possible that exposure to irrelevant speech will also impair the retrieval of auditory information more than the retrieval of visual information.
In sum, Experiment 2 was designed to examine whether auditory distractions in the interview environment impair recall performance. Following from Perfect et al.'s (2012) findings that white noise impaired event recall, we hypothesized that irrelevant speech would also impair event recall (perhaps even more so than white noise; cf. Salamé & Baddeley, 1987). Furthermore, in line with Baddeley and Andrade's (2000) findings, we predicted that being exposed to irrelevant speech while trying to retrieve auditory information would be more problematic than being exposed to irrelevant speech while trying to retrieve visual information. Put differently, we hypothesized that "ear-closure" would have greater benefits for recall of auditory information than for recall of visual information.

Method
Participants - Fifty-six undergraduate psychology students from the University of York participated for course credit or a small monetary reward. The sample consisted of 16 males and 40 females, with ages ranging from 18 to 30 (M = 19.87 years, SD = 2.41). All participants were native English speakers and had normal or corrected-to-normal vision and hearing.

Materials - The videotaped event was identical to the video used in Experiment 1. The headphones used were Beyerdynamic DT 770 professional monitoring headphones (250 Ohms), which exclude ambient sounds. The irrelevant-speech stimulus was a fragment of the English-language audio book "The Power of Now", written and read by Eckhart Tolle.

Procedure - The first part of the procedure was identical to Experiment 1. After completing the distracter task, participants were asked to put on the headphones. Participants were randomly assigned to either hear no sound via the headphones (quiet condition) or hear irrelevant speech, which they were instructed to ignore (irrelevant-speech condition). They then wrote their answers on a paper sheet with questions about the video (see Appendix). Participants were instructed to remember as much as possible, but not to guess; a "do not remember" response was permissible. Upon completion of the question sheet, participants removed the headphones and completed a demographic information sheet. At the end of the session, they were asked whether they had seen the TV series before (none of them had), and were thanked and debriefed. The completed answer sheets were coded blind to experimental condition, using a coding procedure identical to Experiment 1 (see Appendix for examples).

Results and Discussion
Figure 2 shows the number of fine-grain correct, coarse-grain correct, incorrect, and omitted responses about visual and auditory aspects of the witnessed event. A 2 (Interview Condition: quiet, irrelevant speech) x 2 (Question Modality: visual, auditory) mixed ANOVA on fine-grain correct recall revealed no significant effects of modality (F < 1) or interview condition, F(1, 54) = 1.61, p = .21, and no interaction between the two (F < 1). Thus, the tendency for irrelevant speech to decrease fine-grain correct recall, shown in Figure 2, was not statistically significant.
A corresponding two-way ANOVA on coarse-grain correct recall revealed that participants provided significantly more coarse-grain correct responses to questions about visual details than to questions about auditory details, F(1, 54) = 7.97, p < .01, η² = .12. There was no significant main effect of interview condition (F < 1), but there was a significant interaction between condition and modality, F(1, 54) = 4.16, p < .05, η² = .06. Figure 2 shows that irrelevant speech tended to decrease the number of coarse-grain correct responses to questions about auditory aspects, whereas it tended to increase coarse-grain correct answers about visual aspects. However, simple effects analyses showed that neither of these contrasts was significant (both ps > .09).
Another two-way ANOVA on incorrect recall revealed that participants provided significantly more incorrect responses to questions about visual details than to questions about auditory details, F(1, 54) = 4.61, p < .05, η² = .08. There was no significant main effect of interview condition (F < 1) and no interaction between interview condition and question modality (F < 1).
Finally, a two-way ANOVA on the number of omissions showed that participants responded "don't know" significantly more often to questions about auditory details than to questions about visual details, F(1, 54) = 42.91, p < .001, η² = .43. There was no significant main effect of interview condition, F(1, 54) = 1.53, p = .22, and no significant interaction between condition and modality, F(1, 54) = 3.13, p = .08. Thus, the observed tendency for irrelevant speech to increase the number of omissions in response to questions about auditory details (see Figure 2) was not significant. This pattern of findings is difficult to interpret, because the number of fine-grain correct responses was not fully independent of the number of coarse-grain correct responses. For instance, it is possible that the significant interaction between condition and modality for coarse-grain recall was observed because participants in the irrelevant-speech condition replaced their (more informative) fine-grain correct visual responses with (less informative) coarse-grain alternatives.

General Discussion
We found evidence for an asymmetrical modality-specific interference effect of distractions in the interview environment: eye-closure had greater benefits for recall of visual information than for recall of auditory information, whereas "ear-closure" had no significant effect on overall recall performance, nor specifically on recall of auditory information. In this section, we will first explore potential explanations for the non-significant "ear-closure" findings in Experiment 2. Subsequently, we will consider the implications of the significant eye-closure effect observed in Experiment 1.
Given that the pattern observed in Experiment 2 was in the expected direction, the non-significant findings may simply have been due to a lack of power. However, it is also possible that a more fundamental theoretical issue underlies the non-significant effect of irrelevant speech in the present study. In Baddeley and Andrade's (2000) Experiments 4 and 5, the vividness of auditory images retrieved from long-term memory was significantly disrupted by concurrent counting from 1 to 10. In the present experiment, the retrieval of auditory images from long-term memory (which was supposedly required to answer the interview questions about auditory aspects of the event) was not significantly disrupted by exposure to irrelevant speech. Perhaps the discrepancy is due to the nature of the auditory-distraction task. It is likely that counting involves different functional components of the phonological loop (e.g., subvocal rehearsal) than hearing irrelevant speech does (cf. Baddeley & Salamé, 1986). The components that are disrupted by counting, but not by irrelevant speech, may be involved in the retrieval of auditory images from long-term memory (see also Baddeley & Logie, 1992). To test this idea, future research could compare the impact of concurrent counting with the impact of irrelevant speech on the retrieval of visual and auditory images from long-term memory.
From an applied perspective, one significant limitation of Experiment 2 was the elimination of important social aspects of an eyewitness interview. Because the written answer sheet required no social interaction between the experimenter and the participant, the retrieval environment lacked socially-based environmental distractions. It has consistently been found that attending to another person's social cues demands a substantial amount of cognitive resources (Doherty-Sneddon & McAuley, 2000; Doherty-Sneddon & Phelps, 2005; Glenberg et al., 1998; Markson & Paterson, 2009). Indeed, Wagstaff et al. (2008) found that recall performance in response to complex interview questions about a witnessed criminal event deteriorated as the number of observers in the interview room increased. Thus, it is possible that the irrelevant, non-social auditory distractions in Experiment 2 simply were not severe enough to disrupt recall performance to a significant extent. Perhaps, if the interview in Experiment 2 had involved social interaction, the added auditory distractions would have disrupted recall performance significantly. Support for this idea is provided by previous findings. In Vredeveldt et al.'s (2011) experiment, participants who took part in a face-to-face interview (i.e., including social interaction) while being exposed to irrelevant speech in a foreign language indeed provided significantly fewer fine-grain correct responses about a witnessed event than participants who were not exposed to environmental distractions. Similarly, in Perfect et al.'s (2012) study, bursts of white noise interposed between the interview questions in a face-to-face interview significantly impaired the accuracy of event recall.
Social psychological research has also shown that looking at another person's face is a cognitively demanding task (Beattie, 1981; Ehrlichman, 1981; Kendon, 1967; Markson & Paterson, 2009). Interpreted in this light, the eye-closure benefits observed in Experiment 1 are consistent with Baddeley and Andrade's (2000) finding that visual tasks (in this case, looking at the interviewer's face) disrupt the vividness of visual images retrieved from long-term memory. That is, it seems likely that fine-grain correct recall requires a degree of visualization. For instance, by visualizing the witnessed scene, witnesses would have been able to report that the man was kneeling on the floor by the coffee table (fine-grain correct answer), rather than simply concluding from a gist-based memory that the man was on the floor (coarse-grain correct answer). This interpretation of the present findings is also consistent with previous findings showing that eye-closure facilitates visualization (Caruso & Gino, 2011; Rode, Revol, Rossetti, Boisson, & Bartolomeo, 2007; Wais, Rubens, Boccanfuso, & Gazzaley, 2010). Furthermore, the decrease in incorrect visual responses associated with eye-closure suggests that the increase in correct recall observed as a result of eye-closure was not merely the result of a criterion shift (Koriat & Goldsmith, 1996; see also Perfect et al., 2012).
In addition to the modality-specific benefit for fine-grain correct recall, eye-closure was associated with a modality-general benefit for coarse-grain correct recall. Thus, eye-closure increased the number of correct coarse-grain responses irrespective of the modality of the to-be-remembered information. This finding is compatible with the idea that eye-closure reduces general cognitive load, thereby improving overall concentration (e.g., Glenberg et al., 1998; Perfect et al., 2011; Perfect et al., 2008). Thus, the present findings suggest that both modality-specific and general processes play a role in the eye-closure effect, in line with our previous findings (Vredeveldt et al., 2011). Which type of process is dominant will likely depend on a multitude of factors, including the nature of the recalled event. For instance, it is possible that eye-closure during the interview will be more beneficial for the recall of auditory information when the witnessed event does not contain any visual information. To investigate this possibility, future research could study the eye-closure effect in an "earwitness" setting (cf. Campos & Alonso-Quecuty, 2006; Pezdek & Prull, 1993; Yarmey, 1992).
An investigation of the cognitive mechanisms behind the eye-closure effect is not only interesting from a theoretical point of view, but also relevant from an applied point of view. First of all, it is useful for police interviewers to know what kind of recalled information can be enhanced by instructing witnesses to close their eyes, and what kind of information does not seem to benefit. Moreover, police officers are unlikely to use an interview tool if they are not convinced of its benefits, as exemplified by the fact that certain interview instructions that are perceived to be ineffective (e.g., the reverse-order and change-perspective instructions) are rarely used in practice (e.g., Dando, Wilcock, & Milne, 2009; Kebbell, Milne, & Wagstaff, 1999; Milne & Bull, 2002). Thus, an examination of the underpinnings of the eye-closure effect is important from both a theoretical and a practical perspective. The present findings add to the converging evidence (e.g., Mastroberardino et al., 2012; Perfect et al., 2008; Vredeveldt et al., 2011; Wagstaff et al., 2004) that eye-closure has the potential to become a valuable tool in eyewitness interviewing, particularly to facilitate recall of detailed visual information about the witnessed criminal event.

Figure 1. Mean number of fine-grain correct, coarse-grain correct, incorrect, and omitted responses about visual and auditory aspects of the witnessed event in Experiment 1. Error bars indicate standard error.
Contrary to our predictions, irrelevant speech caused neither an overall impairment in recall performance nor a modality-specific impairment. Unexpectedly, the effects of irrelevant speech on recall of visual aspects varied as a function of grain size: irrelevant speech tended to decrease fine-grain correct recall while increasing coarse-grain correct recall, leaving the total number of correct responses about visual details unaffected.

Europe's Journal of Psychology, 2012, Vol. 8(2), 284-299, doi:10.5964/ejop.v8i2.472

Figure 2. Mean number of fine-grain correct, coarse-grain correct, incorrect, and omitted responses about visual and auditory aspects of the witnessed event in Experiment 2. Error bars indicate standard error.