<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE article
  PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Publishing DTD with MathML3 v1.2 20190208//EN" "JATS-journalpublishing1-mathml3.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:ali="http://www.niso.org/schemas/ali/1.0/" article-type="research-article" dtd-version="1.2" xml:lang="en">
<front>
<journal-meta><journal-id journal-id-type="publisher-id">EJOP</journal-id><journal-id journal-id-type="nlm-ta">Eur J Psychol</journal-id>
<journal-title-group>
<journal-title>Europe's Journal of Psychology</journal-title><abbrev-journal-title abbrev-type="pubmed">Eur. J. Psychol.</abbrev-journal-title>
</journal-title-group>
<issn pub-type="epub">1841-0413</issn>
<publisher><publisher-name>PsychOpen</publisher-name></publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="publisher-id">ejop.15803</article-id>
<article-id pub-id-type="doi">10.5964/ejop.15803</article-id>
<article-categories>
<subj-group subj-group-type="heading"><subject>Research Reports</subject></subj-group>

<subj-group subj-group-type="badge">
<subject>Data</subject>
<subject>Code</subject>
<subject>Materials</subject>
</subj-group>

</article-categories>
<title-group>
<article-title>Behavioral and Neuropsychological Correlates of Emotion Regulation via Attentional Deployment: An Expanded Replication</article-title>
<alt-title alt-title-type="right-running">Emotion Regulation via Attentional Deployment</alt-title>
<alt-title specific-use="APA-reference-style" xml:lang="en">Behavioral and neuropsychological correlates of emotion regulation via attentional deployment: An expanded replication</alt-title>
</title-group>
<contrib-group>
<contrib contrib-type="author"><name name-style="western"><surname>Salas</surname><given-names>Christian</given-names></name><xref ref-type="aff" rid="aff1"><sup>1</sup></xref></contrib>
<contrib contrib-type="author"><name name-style="western"><surname>Núñez</surname><given-names>Nicolas</given-names></name><xref ref-type="aff" rid="aff2"><sup>2</sup></xref></contrib>
<contrib contrib-type="author"><name name-style="western"><surname>Pozo</surname><given-names>Luz María</given-names></name><xref ref-type="aff" rid="aff2"><sup>2</sup></xref></contrib>
<contrib contrib-type="author"><name name-style="western"><surname>Bremer</surname><given-names>Marko</given-names></name><xref ref-type="aff" rid="aff1"><sup>1</sup></xref></contrib>
	<contrib contrib-type="author" corresp="yes"><contrib-id contrib-id-type="orcid" authenticated="false">https://orcid.org/0000-0002-9517-5545</contrib-id><name name-style="western"><surname>Rojas-Líbano</surname><given-names>Daniel</given-names></name><xref ref-type="corresp" rid="cor1">*</xref><xref ref-type="aff" rid="aff1"><sup>1</sup></xref></contrib>
<contrib contrib-type="editor">
<name>
	<surname>Hussain</surname>
	<given-names>Sahir</given-names>
</name>
<xref ref-type="aff" rid="aff3"/>
</contrib>
<aff id="aff1"><label>1</label><institution content-type="dept">Centro de Estudios en Neurociencia Humana y Neuropsicología (CENHN). Facultad de Psicología</institution>, <institution>Universidad Diego Portales</institution>. <addr-line><city>Santiago</city></addr-line>, <country country="CL">Chile</country></aff>
<aff id="aff2"><label>2</label><institution content-type="dept">Programa de Magíster en Neurociencia Social. Facultad de Psicología</institution>, <institution>Universidad Diego Portales</institution>. <addr-line><city>Santiago</city></addr-line>, <country country="CL">Chile</country></aff>
<aff id="aff3"><institution>Lancaster University</institution>, <addr-line><city>Lancaster</city></addr-line>, <country country="GB">United Kingdom</country></aff>
</contrib-group>
<author-notes>
	<corresp id="cor1"><label>*</label>Facultad de Psicología, Universidad Diego Portales, Vergara 275, Santiago, Chile. <email xlink:href="daniel.rojasli@mail.udp.cl">daniel.rojasli@mail.udp.cl</email></corresp>
</author-notes>
<pub-date date-type="pub" publication-format="electronic"><day>29</day><month>08</month><year>2025</year></pub-date>
	<pub-date pub-type="collection" publication-format="electronic"><year>2025</year></pub-date>
<volume>21</volume>
<issue>3</issue>
<fpage>216</fpage>
<lpage>233</lpage>
<history>
<date date-type="received">
<day>09</day>
<month>10</month>
<year>2024</year>
</date>
<date date-type="accepted">
<day>12</day>
<month>06</month>
<year>2025</year>
</date>
</history>
<permissions><copyright-year>2025</copyright-year><copyright-holder>Salas, Núñez, Pozo et al.</copyright-holder><license license-type="open-access" specific-use="CC BY 4.0" xlink:href="https://creativecommons.org/licenses/by/4.0/"><ali:license_ref>https://creativecommons.org/licenses/by/4.0/</ali:license_ref><license-p>This is an open-access article distributed under the terms of the Creative Commons Attribution (CC BY) 4.0 License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.</license-p></license></permissions>
<abstract>
<p>Attentional deployment (AD) is an emotion regulation (ER) strategy that shifts the attentional focus to modulate the emotional experience. Very few experimental paradigms can study AD. One such task studies AD by using emotional images with zones of focus within them to manipulate visual attention toward arousing or non-arousing portions of the scene. However, this task has only been implemented with participants inside a scanner and has no replications beyond the work of the original research group. In the present study, we replicated and extended a previously introduced AD task, implementing it with a sample of 55 adult participants. Our sample performed the task in a regular laboratory setting, with eye-tracking to monitor instruction following; in addition, participants completed an attentional test. We replicated the original AD effect in a new population sample, although with a smaller effect size. We conceived and computed an estimate of AD abilities by comparing intensity and valence ratings across attentional conditions. We also analyzed the association between attention, measured through the Attention Network Test (ANT), and AD capacities, and found no relationship. The task can be used in the laboratory to analyze the AD process. Our replication and expansion of the AD task provide valuable insights into the behavioral and neuropsychological correlates of ER strategies.</p>
</abstract>
<kwd-group kwd-group-type="author"><kwd>emotion regulation</kwd><kwd>attentional deployment</kwd><kwd>Attention Network Test</kwd><kwd>process model of emotion regulation</kwd></kwd-group>

</article-meta>
</front>
<body>
	<sec sec-type="intro" id="intro"><title/>
<p>The interaction between affect and cognition has been a long-standing area of scientific interest and debate (<xref ref-type="bibr" rid="r19">Forgas, 2008</xref>). Most of this debate has focused on how affect confers value to, and modulates, cognitive processes such as perception, memory, or thinking (<xref ref-type="bibr" rid="r8">Bower et al., 1983</xref>; <xref ref-type="bibr" rid="r10">Clore &amp; Schiller, 2016</xref>; <xref ref-type="bibr" rid="r29">Isen, 1984</xref>). Less attention, however, has been given to understanding how cognition influences emotional responses. Emotion regulation (ER) refers to the processes by which individuals influence which emotions they have, when they have them, and how they experience and express these emotions (<xref ref-type="bibr" rid="r22">Gross, 1998</xref>). ER has been defined as a top-down process, largely conscious and dependent on cognitive control (<xref ref-type="bibr" rid="r37">McRae &amp; Gross, 2020</xref>). Several studies have offered neuropsychological data to suggest that the regulation of emotion via the use of ER strategies requires specific cognitive processes, such as inhibition, working memory, and verbal fluency (<xref ref-type="bibr" rid="r45">Salas et al., 2014</xref>, <xref ref-type="bibr" rid="r44">2016</xref>). Consequently, the study of ER and ER strategies can bring relevant insights into the affect-cognition interaction.</p>
<p>A framework that has structured research on ER is the “process-model”, which proposes five intrinsic strategies often used to regulate emotions (<xref ref-type="bibr" rid="r22">Gross, 1998</xref>): situation selection, situation modification, attentional deployment, cognitive reappraisal, and response modulation. Despite the explosive recent growth in ER research, most efforts have focused on cognitive reappraisal (<xref ref-type="bibr" rid="r37">McRae &amp; Gross, 2020</xref>) with other strategies receiving less attention. Here, we focus on <italic>Attentional Deployment</italic> (AD), which involves the manipulation of the attentional focus during an emotional situation to downregulate negative emotional experience. Even though AD has been described as emerging early during development (<xref ref-type="bibr" rid="r41">Posner et al., 2014</xref>), its behavioral and neuropsychological correlates are still being investigated.</p>
<p>A variety of emotion elicitation procedures with images have been used to study AD in the laboratory (<xref ref-type="bibr" rid="r14">Dunning &amp; Hajcak, 2009</xref>; <xref ref-type="bibr" rid="r17">Ferri et al., 2013</xref>, <xref ref-type="bibr" rid="r18">2016</xref>). In 2009, Dunning and Hajcak devised a task by superimposing or removing circles around specific areas of negatively valenced pictures to manipulate the participants’ focus of attention. They showed this manipulation modulated the amplitude of electroencephalographic markers of emotional processing (<xref ref-type="bibr" rid="r14">Dunning &amp; Hajcak, 2009</xref>). Later, Ferri and colleagues designed an AD task based on similar principles, where participants had to focus their attention within circles placed on emotionally arousing sections or on non-arousing areas of the images. Subsequently, participants were asked to provide emotional ratings, which allowed the assessment of AD by rating comparisons across attentional conditions (<xref ref-type="bibr" rid="r17">Ferri et al., 2013</xref>, <xref ref-type="bibr" rid="r18">2016</xref>). The authors showed interactions between the amygdala and frontoparietal regions during AD, assessed by functional magnetic resonance imaging. These data have offered valuable information regarding AD’s neural basis and have shown that this ER strategy can be studied in the laboratory.</p>
<p>Interestingly, to date, we know of no reports or independent studies replicating the AD task presented originally by <xref ref-type="bibr" rid="r17">Ferri et al. (2013)</xref>. A current challenge in psychology and neuroscience is to replicate findings and extend them to diverse population samples and contexts (<xref ref-type="bibr" rid="r33">Kopal et al., 2023</xref>), to evaluate the generalizability of our models and empirical effects. In this regard, we currently lack data from one of the very few experimental tasks that assess AD in the laboratory, and therefore we do not know to what extent previous results apply to different population samples. In addition, we still know little about AD’s behavioral and neuropsychological correlates. For example, we would like to know the response times people require to report their emotional experiences. How long does it take people to make these decisions? Is the response time influenced by the valence and arousal of negative pictures? How does the attentional focus affect decision time? We would also like to know the size of the experimental effect of AD: how different is the emotional response to an unpleasant image under different attentional conditions? This question is also related to the need for a readout variable from the task that could serve as an estimate of AD capacities.</p>
<p>As for the neuropsychological correlates, it has been assumed that AD requires several basic attentional and executive processes. AD would require the ability to sustain attention (<xref ref-type="bibr" rid="r50">Urry &amp; Gross, 2010</xref>) and the capacity to decouple attention from emotional stimuli (<xref ref-type="bibr" rid="r11">Compton, 2000</xref>). Despite the literature’s interest in the role of cognitive processes in AD, few articles have offered data on this matter, most of them not using AD experimental paradigms. For example, it has been reported that older adults with good executive abilities, measured with the Attention Network Test (ANT), exhibit better resistance to declines in mood (<xref ref-type="bibr" rid="r28">Isaacowitz et al., 2009</xref>). An association has also been described between the activation of the orienting attentional network and the subjective report of emotion regulation (<xref ref-type="bibr" rid="r52">Xiu et al., 2018</xref>). All this evidence suggests a potential role of attentional abilities in AD. Consequently, exploring the relationship between attentional components, or attentional networks, and AD would be extremely informative for the field of ER specifically, and for the study of cognition-affect interactions more generally.</p>
<p>The present article aims to achieve three primary objectives: Firstly, it seeks to replicate the AD experimental task originally introduced by <xref ref-type="bibr" rid="r17">Ferri et al. (2013)</xref>, which, to the best of our knowledge, has not been replicated beyond the original research group or implemented with participants outside a scanner. Secondly, the study aims to behaviorally explore AD ability in detail through the analysis of the response time of emotional ratings. Lastly, the study endeavors to investigate the relationship between attentional abilities, as measured by the ANT, and AD performance.</p></sec>
<sec sec-type="methods"><title>Method</title>
<sec><title>Sample Size</title>
<p>We based our sample size calculation on the publication that we used as a reference to build the task and that we replicate here (<xref ref-type="bibr" rid="r17">Ferri et al., 2013</xref>), which used <italic>n</italic> = 41 participants (Study 1). We focused on the effect size that the attentional manipulation had on participants’ emotional ratings. Ferri et al. reported a rating of <italic>M</italic> = 3.22, <italic>SD</italic> = 0.77 for unpleasant images with arousing focus, and a rating of <italic>M</italic> = 2.11, <italic>SD</italic> = 0.75 for unpleasant images with non-arousing focus. This corresponds to an effect size of <italic>g</italic> = 1.4. Using this effect size, we estimated the sample size with the software G*Power 3.1, considering a matched-pairs <italic>t</italic>-test, one-tailed, α = 0.05, and a power of 0.95. This calculation resulted in a sample size of <italic>n</italic> = 8. In a second report that used the task, the sample size was <italic>n</italic> = 51, and no rating data were reported from which to calculate an effect size (<xref ref-type="bibr" rid="r18">Ferri et al., 2016</xref>). Given that using effect size estimates from isolated reports is not a recommended practice, and that published effect size estimates tend to be large and misleading (<xref ref-type="bibr" rid="r9">Button et al., 2013</xref>; <xref ref-type="bibr" rid="r20">Gelman &amp; Carlin, 2014</xref>), we simply set out to have a sample size larger than those of the original studies (i.e., <italic>n</italic> = 41 and <italic>n</italic> = 51). Thus, we ended data collection at <italic>n</italic> = 55.</p></sec>
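The effect size and sample size calculation described above can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the variable names are ours, and the pooled-SD Cohen's d ignores the within-subject correlation between conditions, so it only approximates the matched-pairs effect size G*Power uses (the exact noncentral-t computation on these inputs yields n = 8, as reported).

```python
from math import ceil, sqrt
from statistics import NormalDist

# Means and SDs quoted in the text (Ferri et al., 2013, Study 1)
m_arous, sd_arous = 3.22, 0.77        # unpleasant images, arousing focus
m_nonarous, sd_nonarous = 2.11, 0.75  # unpleasant images, non-arousing focus

# Pooled-SD standardized mean difference (~1.46; ~1.4 after the
# small-sample Hedges correction, matching the reported g = 1.4)
sd_pooled = sqrt((sd_arous**2 + sd_nonarous**2) / 2)
d = (m_arous - m_nonarous) / sd_pooled

# Normal-approximation sample size for a one-tailed paired test,
# alpha = .05, power = .95; a z-based shortcut, slightly below the
# exact t-based value of n = 8 from G*Power
z = NormalDist().inv_cdf
n_approx = ceil(((z(1 - 0.05) + z(0.95)) / 1.4) ** 2)
```

The gap between the approximation and G*Power's n = 8 comes from the t-distribution's heavier tails at small n, which the z-based shortcut ignores.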
<sec sec-type="subjects"><title>Participants</title>
<p>A total of 55 participants completed the task (32 female; age: <italic>M</italic> = 21.9, <italic>SD</italic> = 4.1 years). Participants were recruited through printed posters and social media. Data collection occurred in two periods: the first during 2019 (<italic>n</italic> = 39 participants) and the second during 2022 (<italic>n</italic> = 16 participants). Our inclusion criteria were being older than 18 years and having completed secondary education. The exclusion criteria were refusal to sign the written consent or a diagnosis of a neurological condition. The institutional Ethics Committee reviewed and approved the study. All participants signed a written consent form for participation.</p></sec>
<sec><title>Task and Procedure</title>
<p>All data were collected at the same location, in our laboratory. Participants performed the task individually in a sound-proof, dimly lit experimental room. We used a ViewSonic XG2402 computer screen, with a spatial resolution of 1920 x 1080 pixels and dimensions of 53.4 cm (width) and 30.1 cm (height), to present the instructions and images throughout the task. After reviewing and signing the informed consent, participants received an explanation of the entire procedure. Before commencing the task proper, they were then familiarized with it by completing five practice trials that used images that were not part of the task.</p>
<p>The AD task adopted the procedures described by Ferri and collaborators (<xref ref-type="bibr" rid="r17">Ferri et al., 2013</xref>). We made two main changes to the task: first, participants provided emotional ratings using the original Manikin 1-to-9 scale (instead of the 1-to-4 scale used by Ferri et al.); second, participants were asked to provide two ratings, one of emotional intensity and one of emotional valence (instead of intensity only, as in Ferri et al.). See <xref ref-type="fig" rid="f1">Figure 1A</xref>.</p><fig id="f1" position="anchor" fig-type="figure" orientation="portrait"><label>Figure 1</label><caption>
<title>Task Structure, Stimuli and Focus Types, and Trial Sequences</title><p><italic>Note</italic>. A. Task and trial structure. Each trial began with an instructions screen, telling the participant to focus on the blue circle or to observe the image freely. After the instructions, stimuli (i.e., five pictures from the IAPS) were presented, which could be neutral or negative. The participant then rated the stimuli in valence and intensity, after which five gray screens were presented before the next trial. B. Examples of stimuli and focus types. Stimuli could be unpleasant or neutral, and they could have no focus (‘free’), a focus on a non-arousing area of the image, or a focus on an arousing area. Neutral stimuli did not have arousing areas. C. Valences and intensities of the stimuli used. Left: distributions of valence ratings of the images, according to IAPS data, parsed by type: unpleasant (red) and neutral (green). Right: corresponding data for intensity ratings. D. Trial sequences. Each participant was randomly assigned to one of three possible sequences, constructed so as not to contain successive trials of the same type. Numbers on top indicate how many participants were assigned to each sequence.</p></caption><graphic xlink:href="ejop.15803-f1" position="anchor" orientation="portrait"/></fig>
<p>Thus, depending on the focus condition and stimulus (i.e., image) type, each trial belonged to one of five possible categories (see <xref ref-type="fig" rid="f1">Figure 1B</xref>). Stimuli could be neutral or unpleasant, and they were combined with three focus conditions. Unpleasant stimuli were paired with one of the following conditions: focus-free (no circle on the image), arousing focus (circle on an arousing part of the image), or non-arousing focus (circle on a non-arousing part of the image). Neutral stimuli, by definition, did not have arousing parts and were therefore paired only with the focus-free or non-arousing focus conditions. See <xref ref-type="fig" rid="f1">Figure 1B</xref> for an example. The 100 IAPS images used in this task were identical to those employed in previous studies (<xref ref-type="bibr" rid="r17">Ferri et al., 2013</xref>, <xref ref-type="bibr" rid="r18">2016</xref>; <xref ref-type="bibr" rid="r24">Hajcak et al., 2009</xref>). Stimuli were either unpleasant (low valence, high intensity) or neutral (higher valence, lower intensity). Of the 100 stimuli, 60 were unpleasant, with low valence on the 1 to 9 scale (<italic>M</italic> = 2.28, <italic>SD</italic> = 1.47), and 40 were neutral, with higher valence (<italic>M</italic> = 5.14, <italic>SD</italic> = 1.28). In contrast, unpleasant stimuli had a significantly higher intensity on the 1 to 9 scale (<italic>M</italic> = 6.05, <italic>SD</italic> = 2.25) compared to neutral stimuli (<italic>M</italic> = 2.91, <italic>SD</italic> = 1.92). See <xref ref-type="fig" rid="f1">Figure 1C</xref> for the distributions of valence and intensity values reported for the set of stimuli.</p><?figure f1?>
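The five trial categories above follow from crossing stimulus types with focus conditions and excluding the impossible neutral/arousing pairing. A minimal sketch of this design logic (names are ours, for illustration only):

```python
from itertools import product

# Stimulus types and focus conditions as described in the text
stimulus_types = ["unpleasant", "neutral"]
focus_conditions = ["free", "non-arousing", "arousing"]

# Neutral images have no arousing areas, so the neutral/arousing
# combination does not occur, leaving five trial categories
trial_categories = [
    (stim, focus)
    for stim, focus in product(stimulus_types, focus_conditions)
    if not (stim == "neutral" and focus == "arousing")
]
n_categories = len(trial_categories)  # 5
```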
<p>Each trial began by presenting written instructions on the screen for 4 seconds. The instructions read “focus your attention on the circle” or “observe the image freely”. After the instructions, five images of the same type (neutral or unpleasant) were presented for 4 seconds each. Then, a screen appeared with the instruction “Rate the valence, from 1 to 9”. Below the instruction, an image depicted the Manikin figures (<xref ref-type="bibr" rid="r34">Lang et al., 2008</xref>) alongside a horizontal scale showing the numbers 1 to 9, with the text “Negative” below number 1 and the text “Positive” below number 9. The participant had 5 seconds to complete the rating and was not instructed to answer quickly. Participants delivered their responses through a regular computer keyboard. After a response, the number corresponding to the assigned rating was shown on the screen for 0.35 seconds. If the participant did not respond within 5 seconds, the next screen appeared, containing instructions for the intensity rating. The procedure to record the participant’s intensity rating was analogous to that for valence. After the ratings, a small fixation circle was presented at the center of the screen. This fixation circle was kept through five homogeneous gray backgrounds (40%, 10%, 50%, 30%, and 20% light saturation), which lasted 4 seconds each. After this sequence was completed, a new trial started. The task ended after 20 trials. One of three possible trial sequences was randomly assigned to each participant to prevent possible confounding order effects (see <xref ref-type="fig" rid="f1">Figure 1D</xref>). See <xref ref-type="table" rid="t1">Table 1</xref> for the details of these sequences. Due to the random assignment procedure, we ended up with different numbers of participants in each sequence.</p>
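The trial timeline above can be sketched as a list of screens with durations. This is a hypothetical reconstruction from the description, not the authors' presentation code; the function and screen names are illustrative, and it assumes the 5-second response window and 0.35-second feedback screen apply to both the valence and the intensity rating.

```python
# Durations are in seconds, following the trial description in the text.
def trial_timeline(stimulus_type="unpleasant"):
    screens = [("instructions", 4.0)]
    # Five images of the same type, 4 s each
    screens += [(f"image_{i}_{stimulus_type}", 4.0) for i in range(1, 6)]
    # Assumption: 5-s window and 0.35-s feedback for both ratings
    screens += [("valence_rating", 5.0), ("valence_feedback", 0.35),
                ("intensity_rating", 5.0), ("intensity_feedback", 0.35)]
    # Five gray backgrounds (40%, 10%, 50%, 30%, 20% light saturation), 4 s each
    screens += [(f"gray_{sat}", 4.0) for sat in (40, 10, 50, 30, 20)]
    return screens

# Upper bound on trial duration, if both rating windows run the full 5 s
max_trial_seconds = sum(duration for _, duration in trial_timeline())
```

Under these assumptions a trial lasts at most about 55 seconds, so a 20-trial session stays under 20 minutes of task time.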
<table-wrap id="t1" position="anchor" orientation="portrait">
<label>Table 1</label><caption><title>Sequences of Images Used as Stimuli in the Task</title></caption>
	<table frame="hsides" rules="groups" style="compact-2; striped-#f3f3f3">
<col width="" align="left"/>
<col width=""/>
<col width=""/>
<col width=""/>
<col width=""/>
<col width=""/>
<col width=""/>
<thead>
<tr>
<th/>
<th colspan="2" scope="colgroup">Sequence 1<hr/></th>
<th colspan="2" scope="colgroup">Sequence 2<hr/></th>
<th colspan="2" scope="colgroup">Sequence 3<hr/></th>
</tr>
<tr>
<th valign="bottom">Trial</th>	
<th scope="colgroup">Image</th>
<th>Stimulus-Focus</th>
<th>Image</th>
<th>Stimulus-Focus</th>
<th>Image</th>
<th>Stimulus-Focus</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>9570</td>
<td>Unpl-NonArous</td>
<td>6555</td>
<td>Unpl-Arous</td>
<td>9252</td>
<td>Unpl-Arous</td>
</tr>
<tr>
<td>1</td>
<td>8480</td>
<td>Unpl-NonArous</td>
<td>3060</td>
<td>Unpl-Arous</td>
<td>6260</td>
<td>Unpl-Arous</td>
</tr>
<tr>
<td>1</td>
<td>3181</td>
<td>Unpl-NonArous</td>
<td>3017</td>
<td>Unpl-Arous</td>
<td>6571</td>
<td>Unpl-Arous</td>
</tr>
<tr>
<td>1</td>
<td>9584</td>
<td>Unpl-NonArous</td>
<td>9040</td>
<td>Unpl-Arous</td>
<td>3016</td>
<td>Unpl-Arous</td>
</tr>
<tr>
<td>1</td>
<td>3015</td>
<td>Unpl-NonArous</td>
<td>8480</td>
<td>Unpl-Arous</td>
<td>6313</td>
<td>Unpl-Arous</td>
</tr>
<tr style="grey-border-top">
<td>2</td>
<td>7100</td>
<td>Neut-NonArous</td>
<td>6312</td>
<td>Unpl-NonArous</td>
<td>7490</td>
<td>Neut-NonArous</td>
</tr>
<tr>
<td>2</td>
<td>5250</td>
<td>Neut-NonArous</td>
<td>9428</td>
<td>Unpl-NonArous</td>
<td>2880</td>
<td>Neut-NonArous</td>
</tr>
<tr>
<td>2</td>
<td>7550</td>
<td>Neut-NonArous</td>
<td>6370</td>
<td>Unpl-NonArous</td>
<td>7550</td>
<td>Neut-NonArous</td>
</tr>
<tr>
<td>2</td>
<td>7950</td>
<td>Neut-NonArous</td>
<td>3212</td>
<td>Unpl-NonArous</td>
<td>7217</td>
<td>Neut-NonArous</td>
</tr>
<tr>
<td>2</td>
<td>7490</td>
<td>Neut-NonArous</td>
<td>3530</td>
<td>Unpl-NonArous</td>
<td>5875</td>
<td>Neut-NonArous</td>
</tr>
	<tr style="grey-border-top">
<td>3</td>
<td>9435</td>
<td>Unpl-Free</td>
<td>9435</td>
<td>Unpl-Free</td>
<td>9265</td>
<td>Unpl-Free</td>
</tr>
<tr>
<td>3</td>
<td>9252</td>
<td>Unpl-Free</td>
<td>9403</td>
<td>Unpl-Free</td>
<td>9300</td>
<td>Unpl-Free</td>
</tr>
<tr>
<td>3</td>
<td>9430</td>
<td>Unpl-Free</td>
<td>6560</td>
<td>Unpl-Free</td>
<td>9430</td>
<td>Unpl-Free</td>
</tr>
<tr>
<td>3</td>
<td>6370</td>
<td>Unpl-Free</td>
<td>9410</td>
<td>Unpl-Free</td>
<td>3053</td>
<td>Unpl-Free</td>
</tr>
<tr>
<td>3</td>
<td>9403</td>
<td>Unpl-Free</td>
<td>6260</td>
<td>Unpl-Free</td>
<td>6560</td>
<td>Unpl-Free</td>
</tr>
	<tr style="grey-border-top">
<td>4</td>
<td>2880</td>
<td>Neut-Free</td>
<td>7004</td>
<td>Neut-Free</td>
<td>2393</td>
<td>Neut-Free</td>
</tr>
<tr>
<td>4</td>
<td>7233</td>
<td>Neut-Free</td>
<td>5875</td>
<td>Neut-Free</td>
<td>2320</td>
<td>Neut-Free</td>
</tr>
<tr>
<td>4</td>
<td>2383</td>
<td>Neut-Free</td>
<td>2270</td>
<td>Neut-Free</td>
<td>7700</td>
<td>Neut-Free</td>
</tr>
<tr>
<td>4</td>
<td>2440</td>
<td>Neut-Free</td>
<td>2102</td>
<td>Neut-Free</td>
<td>2440</td>
<td>Neut-Free</td>
</tr>
<tr>
<td>4</td>
<td>7285</td>
<td>Neut-Free</td>
<td>5250</td>
<td>Neut-Free</td>
<td>5250</td>
<td>Neut-Free</td>
</tr>
	<tr style="grey-border-top">
<td>5</td>
<td>3530</td>
<td>Unpl-Arous</td>
<td>7140</td>
<td>Neut-NonArous</td>
<td>3261</td>
<td>Unpl-NonArous</td>
</tr>
<tr>
<td>5</td>
<td>2717</td>
<td>Unpl-Arous</td>
<td>7705</td>
<td>Neut-NonArous</td>
<td>3530</td>
<td>Unpl-NonArous</td>
</tr>
<tr>
<td>5</td>
<td>2800</td>
<td>Unpl-Arous</td>
<td>7550</td>
<td>Neut-NonArous</td>
<td>8480</td>
<td>Unpl-NonArous</td>
</tr>
<tr>
<td>5</td>
<td>6242</td>
<td>Unpl-Arous</td>
<td align="char" char=".">2745.1</td>
<td>Neut-NonArous</td>
<td align="char" char=".">3005.1</td>
<td>Unpl-NonArous</td>
</tr>
<tr>
<td>5</td>
<td>3060</td>
<td>Unpl-Arous</td>
<td>2320</td>
<td>Neut-NonArous</td>
<td>3010</td>
<td>Unpl-NonArous</td>
</tr>
	<tr style="grey-border-top">
<td>6</td>
<td>7090</td>
<td>Neut-Free</td>
<td>2580</td>
<td>Neut-Free</td>
<td>7175</td>
<td>Neut-NonArous</td>
</tr>
<tr>
<td>6</td>
<td>7025</td>
<td>Neut-Free</td>
<td>7491</td>
<td>Neut-Free</td>
<td>7595</td>
<td>Neut-NonArous</td>
</tr>
<tr>
<td>6</td>
<td>7560</td>
<td>Neut-Free</td>
<td>2440</td>
<td>Neut-Free</td>
<td>2102</td>
<td>Neut-NonArous</td>
</tr>
<tr>
<td>6</td>
<td>5530</td>
<td>Neut-Free</td>
<td>2880</td>
<td>Neut-Free</td>
<td>2235</td>
<td>Neut-NonArous</td>
</tr>
<tr>
<td>6</td>
<td>2270</td>
<td>Neut-Free</td>
<td>7700</td>
<td>Neut-Free</td>
<td>7150</td>
<td>Neut-NonArous</td>
</tr>
	<tr style="grey-border-top">
<td>7</td>
<td>3030</td>
<td>Unpl-Free</td>
<td>5740</td>
<td>Neut-NonArous</td>
<td>3212</td>
<td>Unpl-NonArous</td>
</tr>
<tr>
<td>7</td>
<td>3261</td>
<td>Unpl-Free</td>
<td>7090</td>
<td>Neut-NonArous</td>
<td align="char" char=".">9635.1</td>
<td>Unpl-NonArous</td>
</tr>
<tr>
<td>7</td>
<td>3266</td>
<td>Unpl-Free</td>
<td>7100</td>
<td>Neut-NonArous</td>
<td>3063</td>
<td>Unpl-NonArous</td>
</tr>
<tr>
<td>7</td>
<td>9410</td>
<td>Unpl-Free</td>
<td>7490</td>
<td>Neut-NonArous</td>
<td>9428</td>
<td>Unpl-NonArous</td>
</tr>
<tr>
<td>7</td>
<td>6571</td>
<td>Unpl-Free</td>
<td>2206</td>
<td>Neut-NonArous</td>
<td>3015</td>
<td>Unpl-NonArous</td>
</tr>
	<tr style="grey-border-top">
<td>8</td>
<td>3212</td>
<td>Unpl-NonArous</td>
<td align="char" char=".">6570.1</td>
<td>Unpl-NonArous</td>
<td>3030</td>
<td>Unpl-Free</td>
</tr>
<tr>
<td>8</td>
<td>3016</td>
<td>Unpl-NonArous</td>
<td>3181</td>
<td>Unpl-NonArous</td>
<td>6555</td>
<td>Unpl-Free</td>
</tr>
<tr>
<td>8</td>
<td>9810</td>
<td>Unpl-NonArous</td>
<td>9571</td>
<td>Unpl-NonArous</td>
<td>9571</td>
<td>Unpl-Free</td>
</tr>
<tr>
<td>8</td>
<td align="char" char=".">3005.1</td>
<td>Unpl-NonArous</td>
<td>6550</td>
<td>Unpl-NonArous</td>
<td>3220</td>
<td>Unpl-Free</td>
</tr>
<tr>
<td>8</td>
<td>9433</td>
<td>Unpl-NonArous</td>
<td>9570</td>
<td>Unpl-NonArous</td>
<td>9410</td>
<td>Unpl-Free</td>
</tr>
	<tr style="grey-border-top">
<td>9</td>
<td>2811</td>
<td>Unpl-Arous</td>
<td>9405</td>
<td>Unpl-Arous</td>
<td>7285</td>
<td>Neut-Free</td>
</tr>
<tr>
<td>9</td>
<td>3225</td>
<td>Unpl-Arous</td>
<td>2717</td>
<td>Unpl-Arous</td>
<td>7950</td>
<td>Neut-Free</td>
</tr>
<tr>
<td>9</td>
<td>6260</td>
<td>Unpl-Arous</td>
<td>3213</td>
<td>Unpl-Arous</td>
<td>2190</td>
<td>Neut-Free</td>
</tr>
<tr>
<td>9</td>
<td>3010</td>
<td>Unpl-Arous</td>
<td>9430</td>
<td>Unpl-Arous</td>
<td align="char" char=".">2745.1</td>
<td>Neut-Free</td>
</tr>
<tr>
<td>9</td>
<td>2703</td>
<td>Unpl-Arous</td>
<td>3220</td>
<td>Unpl-Arous</td>
<td>7560</td>
<td>Neut-Free</td>
</tr>
	<tr style="grey-border-top">
<td>10</td>
<td>7175</td>
<td>Neut-NonArous</td>
<td>3015</td>
<td>Unpl-Free</td>
<td>9405</td>
<td>Unpl-Arous</td>
</tr>
<tr>
<td>10</td>
<td>5390</td>
<td>Neut-NonArous</td>
<td>2811</td>
<td>Unpl-Free</td>
<td>2811</td>
<td>Unpl-Arous</td>
</tr>
<tr>
<td>10</td>
<td>7217</td>
<td>Neut-NonArous</td>
<td>9252</td>
<td>Unpl-Free</td>
<td>3213</td>
<td>Unpl-Arous</td>
</tr>
<tr>
<td>10</td>
<td>2580</td>
<td>Neut-NonArous</td>
<td>9400</td>
<td>Unpl-Free</td>
<td>9584</td>
<td>Unpl-Arous</td>
</tr>
<tr>
<td>10</td>
<td>7491</td>
<td>Neut-NonArous</td>
<td>3016</td>
<td>Unpl-Free</td>
<td>6370</td>
<td>Unpl-Arous</td>
</tr>
	<tr style="grey-border-top">
<td>11</td>
<td>2235</td>
<td>Neut-NonArous</td>
<td>9584</td>
<td>Unpl-Arous</td>
<td>2800</td>
<td>Unpl-Free</td>
</tr>
<tr>
<td>11</td>
<td align="char" char=".">2745.1</td>
<td>Neut-NonArous</td>
<td>9300</td>
<td>Unpl-Arous</td>
<td>6242</td>
<td>Unpl-Free</td>
</tr>
<tr>
<td>11</td>
<td>7150</td>
<td>Neut-NonArous</td>
<td>3550</td>
<td>Unpl-Arous</td>
<td>9435</td>
<td>Unpl-Free</td>
</tr>
<tr>
<td>11</td>
<td>2190</td>
<td>Neut-NonArous</td>
<td>9433</td>
<td>Unpl-Arous</td>
<td>3181</td>
<td>Unpl-Free</td>
</tr>
<tr>
<td>11</td>
<td>7020</td>
<td>Neut-NonArous</td>
<td>6022</td>
<td>Unpl-Arous</td>
<td>9253</td>
<td>Unpl-Free</td>
</tr>
	<tr style="grey-border-top">
<td>12</td>
<td>9405</td>
<td>Unpl-Free</td>
<td>7000</td>
<td>Neut-Free</td>
<td>6415</td>
<td>Unpl-NonArous</td>
</tr>
<tr>
<td>12</td>
<td align="char" char=".">6570.1</td>
<td>Unpl-Free</td>
<td>7020</td>
<td>Neut-Free</td>
<td>6022</td>
<td>Unpl-NonArous</td>
</tr>
<tr>
<td>12</td>
<td>6555</td>
<td>Unpl-Free</td>
<td>2383</td>
<td>Neut-Free</td>
<td>3195</td>
<td>Unpl-NonArous</td>
</tr>
<tr>
<td>12</td>
<td>3220</td>
<td>Unpl-Free</td>
<td>5530</td>
<td>Neut-Free</td>
<td>3225</td>
<td>Unpl-NonArous</td>
</tr>
<tr>
<td>12</td>
<td>3017</td>
<td>Unpl-Free</td>
<td>7002</td>
<td>Neut-Free</td>
<td>2730</td>
<td>Unpl-NonArous</td>
</tr>
	<tr style="grey-border-top">
<td>13</td>
<td>7010</td>
<td>Neut-Free</td>
<td>7175</td>
<td>Neut-NonArous</td>
<td>7491</td>
<td>Neut-NonArous</td>
</tr>
<tr>
<td>13</td>
<td>7140</td>
<td>Neut-Free</td>
<td>7560</td>
<td>Neut-NonArous</td>
<td>7000</td>
<td>Neut-NonArous</td>
</tr>
<tr>
<td>13</td>
<td>2980</td>
<td>Neut-Free</td>
<td>7950</td>
<td>Neut-NonArous</td>
<td>2383</td>
<td>Neut-NonArous</td>
</tr>
<tr>
<td>13</td>
<td>2320</td>
<td>Neut-Free</td>
<td>2393</td>
<td>Neut-NonArous</td>
<td>5740</td>
<td>Neut-NonArous</td>
</tr>
<tr>
<td>13</td>
<td>7000</td>
<td>Neut-Free</td>
<td>7150</td>
<td>Neut-NonArous</td>
<td>7020</td>
<td>Neut-NonArous</td>
</tr>
	<tr style="grey-border-top">
<td>14</td>
<td>6831</td>
<td>Unpl-NonArous</td>
<td>6415</td>
<td>Unpl-Free</td>
<td>9403</td>
<td>Unpl-Arous</td>
</tr>
<tr>
<td>14</td>
<td>3195</td>
<td>Unpl-NonArous</td>
<td>2800</td>
<td>Unpl-Free</td>
<td>6831</td>
<td>Unpl-Arous</td>
</tr>
<tr>
<td>14</td>
<td>3550</td>
<td>Unpl-NonArous</td>
<td>6315</td>
<td>Unpl-Free</td>
<td>9810</td>
<td>Unpl-Arous</td>
</tr>
<tr>
<td>14</td>
<td>9040</td>
<td>Unpl-NonArous</td>
<td>3266</td>
<td>Unpl-Free</td>
<td>6312</td>
<td>Unpl-Arous</td>
</tr>
<tr>
<td>14</td>
<td>6560</td>
<td>Unpl-NonArous</td>
<td>9253</td>
<td>Unpl-Free</td>
<td>9040</td>
<td>Unpl-Arous</td>
</tr>
	<tr style="grey-border-top">
<td>15</td>
<td>9400</td>
<td>Unpl-Arous</td>
<td>3261</td>
<td>Unpl-NonArous</td>
<td>5390</td>
<td>Neut-Free</td>
</tr>
<tr>
<td>15</td>
<td>2730</td>
<td>Unpl-Arous</td>
<td>3211</td>
<td>Unpl-NonArous</td>
<td>7705</td>
<td>Neut-Free</td>
</tr>
<tr>
<td>15</td>
<td>9300</td>
<td>Unpl-Arous</td>
<td>3063</td>
<td>Unpl-NonArous</td>
<td>7025</td>
<td>Neut-Free</td>
</tr>
<tr>
<td>15</td>
<td>9420</td>
<td>Unpl-Arous</td>
<td>6190</td>
<td>Unpl-NonArous</td>
<td>7140</td>
<td>Neut-Free</td>
</tr>
<tr>
<td>15</td>
<td>9428</td>
<td>Unpl-Arous</td>
<td>3010</td>
<td>Unpl-NonArous</td>
<td>2206</td>
<td>Neut-Free</td>
</tr>
	<tr style="grey-border-top">
<td>16</td>
<td>6550</td>
<td>Unpl-NonArous</td>
<td align="char" char=".">9635.1</td>
<td>Unpl-Free</td>
<td>9400</td>
<td>Unpl-Free</td>
</tr>
<tr>
<td>16</td>
<td>3213</td>
<td>Unpl-NonArous</td>
<td>2730</td>
<td>Unpl-Free</td>
<td>6550</td>
<td>Unpl-Free</td>
</tr>
<tr>
<td>16</td>
<td align="char" char=".">9635.1</td>
<td>Unpl-NonArous</td>
<td>9810</td>
<td>Unpl-Free</td>
<td>9433</td>
<td>Unpl-Free</td>
</tr>
<tr>
<td>16</td>
<td>6022</td>
<td>Unpl-NonArous</td>
<td>6831</td>
<td>Unpl-Free</td>
<td>6315</td>
<td>Unpl-Free</td>
</tr>
<tr>
<td>16</td>
<td>9571</td>
<td>Unpl-NonArous</td>
<td>6242</td>
<td>Unpl-Free</td>
<td>2703</td>
<td>Unpl-Free</td>
</tr>
	<tr style="grey-border-top">
<td>17</td>
<td>7004</td>
<td>Neut-NonArous</td>
<td align="char" char=".">3005.1</td>
<td>Unpl-Arous</td>
<td>7100</td>
<td>Neut-Free</td>
</tr>
<tr>
<td>17</td>
<td>5875</td>
<td>Neut-NonArous</td>
<td>3053</td>
<td>Unpl-Arous</td>
<td>7002</td>
<td>Neut-Free</td>
</tr>
<tr>
<td>17</td>
<td>2102</td>
<td>Neut-NonArous</td>
<td>3030</td>
<td>Unpl-Arous</td>
<td>7233</td>
<td>Neut-Free</td>
</tr>
<tr>
<td>17</td>
<td>7705</td>
<td>Neut-NonArous</td>
<td>2703</td>
<td>Unpl-Arous</td>
<td>2580</td>
<td>Neut-Free</td>
</tr>
<tr>
<td>17</td>
<td>7595</td>
<td>Neut-NonArous</td>
<td>6313</td>
<td>Unpl-Arous</td>
<td>7090</td>
<td>Neut-Free</td>
</tr>
	<tr style="grey-border-top">
<td>18</td>
<td>3211</td>
<td>Unpl-Free</td>
<td>7233</td>
<td>Neut-NonArous</td>
<td>2717</td>
<td>Unpl-NonArous</td>
</tr>
<tr>
<td>18</td>
<td>6190</td>
<td>Unpl-Free</td>
<td>7010</td>
<td>Neut-NonArous</td>
<td>3266</td>
<td>Unpl-NonArous</td>
</tr>
<tr>
<td>18</td>
<td>9265</td>
<td>Unpl-Free</td>
<td>7025</td>
<td>Neut-NonArous</td>
<td>6190</td>
<td>Unpl-NonArous</td>
</tr>
<tr>
<td>18</td>
<td>6415</td>
<td>Unpl-Free</td>
<td>7285</td>
<td>Neut-NonArous</td>
<td>9420</td>
<td>Unpl-NonArous</td>
</tr>
<tr>
<td>18</td>
<td>9253</td>
<td>Unpl-Free</td>
<td>2980</td>
<td>Neut-NonArous</td>
<td>3550</td>
<td>Unpl-NonArous</td>
</tr>
	<tr style="grey-border-top">
<td>19</td>
<td>6313</td>
<td>Unpl-Arous</td>
<td>7595</td>
<td>Neut-Free</td>
<td>5530</td>
<td>Neut-NonArous</td>
</tr>
<tr>
<td>19</td>
<td>3053</td>
<td>Unpl-Arous</td>
<td>5390</td>
<td>Neut-Free</td>
<td>2980</td>
<td>Neut-NonArous</td>
</tr>
<tr>
<td>19</td>
<td>6315</td>
<td>Unpl-Arous</td>
<td>7217</td>
<td>Neut-Free</td>
<td>7010</td>
<td>Neut-NonArous</td>
</tr>
<tr>
<td>19</td>
<td>3063</td>
<td>Unpl-Arous</td>
<td>2235</td>
<td>Neut-Free</td>
<td>2270</td>
<td>Neut-NonArous</td>
</tr>
<tr>
<td>19</td>
<td>6312</td>
<td>Unpl-Arous</td>
<td>2190</td>
<td>Neut-Free</td>
<td>7004</td>
<td>Neut-NonArous</td>
</tr>
	<tr style="grey-border-top">
<td>20</td>
<td>5740</td>
<td>Neut-Free</td>
<td>9420</td>
<td>Unpl-NonArous</td>
<td>3060</td>
<td>Unpl-Arous</td>
</tr>
<tr>
<td>20</td>
<td>7700</td>
<td>Neut-Free</td>
<td>6571</td>
<td>Unpl-NonArous</td>
<td>3211</td>
<td>Unpl-Arous</td>
</tr>
<tr>
<td>20</td>
<td>7002</td>
<td>Neut-Free</td>
<td>3195</td>
<td>Unpl-NonArous</td>
<td>3017</td>
<td>Unpl-Arous</td>
</tr>
<tr>
<td>20</td>
<td>2393</td>
<td>Neut-Free</td>
<td>3225</td>
<td>Unpl-NonArous</td>
<td align="char" char=".">6570.1</td>
<td>Unpl-Arous</td>
</tr>
<tr>
<td>20</td>
<td>2206</td>
<td>Neut-Free</td>
<td>9265</td>
<td>Unpl-NonArous</td>
<td>9570</td>
<td>Unpl-Arous</td>
</tr>
</tbody>
</table>
<table-wrap-foot>
<p><italic>Note.</italic> <xref ref-type="table" rid="t1">Table 1</xref>. Possible Trial Sequences. Each participant was randomly assigned to one out of three possible trial sequences, constructed not to contain successive trials of the same type. The combination of the image and focus types determined the trial type. Sequence 1 was assigned to 19 participants, Sequence 2 to 23 participants, and Sequence 3 to 13 participants. For each sequence, the table shows the trial number (each trial consists of the presentation of 5 images), the IAPS image name, and the corresponding Stimulus and Focus type. Neut: Neutral image; Unpl: Unpleasant image; Arous: Arousing focus; NonArous: Non-Arousing focus; Free: focus-free (no focus).</p>
</table-wrap-foot>
</table-wrap>
<p>All hardware controls and data acquisition (behavioral and physiological) routines were written in Matlab (Version: 9.6.0.1472908 (R2019a) Update 9), using the Psychophysics Toolbox extension (<xref ref-type="bibr" rid="r7">Brainard, 1997</xref>; <xref ref-type="bibr" rid="r30">Kleiner et al., 2007</xref>).</p></sec>
<sec><title>Eye Tracking Recording and Analysis</title>
<p>Eye movement data, used to monitor instruction-following behavior, were acquired using an Eyelink 1000 (SR Research Ltd., Mississauga, Ontario, Canada) at a 500 Hz sampling frequency. Throughout the task, participants sat in front of the computer screen and the eye tracker and kept their heads in a forehead/chin rest (SR Research Ltd.) placed 56 cm from the screen. Gaze data files were converted to ASCII format using Eyelink’s EDFConverter and then analyzed using custom routines written in Matlab (Version: 9.6.0.1472908 (R2019a) Update 9). We discarded gaze data from trials in which gaze detection was lost, for example due to excessive blinks or other software/recording errors. As a result, 12 participants were excluded entirely from the statistical descriptions and comparisons.</p>
<p>Gaze behavior was quantified as dwelling time: the percentage of total time spent by the gaze within the focus circle during the image presentation (4 seconds). The raw data contained the (X, Y) screen coordinates of the gaze at each sampled data point. We classified each sample as inside or outside the focus circle, counted the number of samples inside the circle, divided it by the total number of samples (4 s x 500 samples/s = 2000 samples), and expressed the result as a percentage to obtain the dwelling time. We calculated this dwelling time for each image and then averaged across the five images of a trial, yielding a single dwelling-time value per trial. The overall dwelling time was obtained by averaging dwell times across trials.</p></sec>
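<p>The dwelling-time computation described above can be sketched as follows. This is a minimal Python illustration (the actual analysis used custom Matlab routines); the function names and circle geometry are hypothetical.</p>

```python
import numpy as np

def dwell_time_pct(gaze_xy, circle_center, circle_radius):
    """Percentage of gaze samples falling inside the focus circle.

    gaze_xy: (n, 2) array of (X, Y) screen coordinates;
    4 s of data at 500 Hz gives n = 2000 samples per image.
    """
    # Euclidean distance of each sample from the circle center
    dist = np.linalg.norm(gaze_xy - np.asarray(circle_center, dtype=float), axis=1)
    inside = dist <= circle_radius
    return 100.0 * inside.sum() / len(gaze_xy)

def trial_dwell_time(per_image_pcts):
    """Dwelling time of a trial: mean over its five images."""
    return float(np.mean(per_image_pcts))
```

<p>Per-trial values are then averaged again across trials to obtain each participant's overall dwelling time.</p>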
<sec><title>ANT</title>
<p>To estimate the participants’ attentional abilities, we used a computerized version of the ANT, which measures the capacity for alertness, orientation, and attentional executive control (<xref ref-type="bibr" rid="r16">Fan et al., 2002</xref>). The efficiency of the alerting network was examined through differences in reaction times in response to a warning signal before the presentation of stimuli. The efficiency of the orienting network was examined through differences in reaction times in response to a cue indicating where a stimulus would appear. The efficiency of the executive control network was examined through differences in reaction times in response to the presentation of a central arrow surrounded by congruent or incongruent indicators (arrows pointing in the same or a different direction as the central arrow). The differences in the reaction times associated with using each attentional system were used as a score to evaluate the performance of each attentional network (Alertness, Orientation, and Executive Control Score). For all these measurements, we followed the standard previously published procedures (<xref ref-type="bibr" rid="r16">Fan et al., 2002</xref>).</p>
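<p>As a sketch, the three network scores reduce to reaction-time differences between cue and flanker conditions, following the standard procedure of Fan et al. (2002). The Python illustration below uses hypothetical dictionary keys, and the reaction-time values in the example are invented:</p>

```python
def ant_scores(mean_rt):
    """ANT network scores as mean reaction-time (ms) differences.

    Following the standard procedure (Fan et al., 2002):
    alerting  = RT without warning cue minus RT with double cue,
    orienting = RT with central cue minus RT with spatial cue,
    executive = RT on incongruent minus RT on congruent flanker trials.
    """
    return {
        "alerting": mean_rt["no_cue"] - mean_rt["double_cue"],
        "orienting": mean_rt["center_cue"] - mean_rt["spatial_cue"],
        "executive": mean_rt["incongruent"] - mean_rt["congruent"],
    }

# Invented illustrative mean reaction times (ms)
example = ant_scores({"no_cue": 560, "double_cue": 527,
                      "center_cue": 540, "spatial_cue": 494,
                      "incongruent": 645, "congruent": 534})
```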
<p>Given that participants completed the ANT after the experimental task, we had two periods of data collection (see Method section, Participants subsection). However, due to technical problems in retrieving the ANT data from the software, we were able to use only the data from the first period (<italic>n</italic> = 39), in which data from 4 participants were lost. Therefore, we have ANT data for only 35 participants.</p></sec>
<sec><title>Data Analysis</title>
<p>In general, data are summarized by means (<italic>M</italic>), medians (<italic>Mdn</italic>), standard deviations (<italic>SD</italic>), and interquartile ranges (<italic>IQR</italic>). For two-group comparisons, we implemented a non-parametric permutation test based on the <italic>t</italic>-statistic. This allowed us to deal with heterogeneity in our data, which were not always normally distributed. We based our tests on previously published protocols (<xref ref-type="bibr" rid="r15">Ernst, 2004</xref>; <xref ref-type="bibr" rid="r36">Maris &amp; Oostenveld, 2007</xref>). First, we computed the regular two-sample <italic>t</italic>-test and saved the corresponding <italic>t</italic>-value (i.e., the <italic>t</italic>-statistic value). We then randomly permuted (reshuffled) the values across both groups, forming two new groups, computed the <italic>t</italic>-test on them, and saved the new <italic>t</italic>-value. We repeated this permutation procedure 1500 times and constructed a distribution of permutation-based <italic>t</italic>-values. We then computed the <italic>p</italic>-value as the proportion of <italic>t</italic>-values in this distribution equal to or larger than the non-permuted <italic>t</italic>-value. Accordingly, we report the non-permuted <italic>t</italic>-value, the corresponding degrees of freedom, and the <italic>p</italic>-value. For each two-group comparison, we also present Hedges’ <italic>g</italic> as a measure of effect size (<xref ref-type="bibr" rid="r25">Hedges, 1981</xref>), computed as the difference in means standardized by the pooled standard deviation. Hedges’ <italic>g</italic> values and associated confidence intervals were calculated using a previously available Matlab toolbox (<xref ref-type="bibr" rid="r26">Hentschke &amp; Stüttgen, 2011</xref>).</p></sec>
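<p>The permutation procedure can be sketched in a few lines. This is a Python illustration of the logic (the original analysis was implemented in Matlab, and the function and variable names here are hypothetical):</p>

```python
import random
import statistics as st

def t_stat(a, b):
    """Two-sample t-statistic with pooled variance."""
    na, nb = len(a), len(b)
    sp2 = ((na - 1) * st.variance(a) + (nb - 1) * st.variance(b)) / (na + nb - 2)
    return (st.mean(a) - st.mean(b)) / (sp2 * (1 / na + 1 / nb)) ** 0.5

def permutation_p(a, b, n_perm=1500, seed=1):
    """p-value: proportion of permuted t-values >= the observed t-value."""
    t_obs = t_stat(a, b)
    pooled = list(a) + list(b)
    rng = random.Random(seed)
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)  # random reassignment into two new groups
        if t_stat(pooled[:len(a)], pooled[len(a):]) >= t_obs:
            count += 1
    return count / n_perm
```

<p>With clearly separated groups the observed <italic>t</italic>-value exceeds almost every permuted one, yielding a small <italic>p</italic>-value; with exchangeable groups the proportion approaches .5.</p>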
<sec><title>Transparency and Openness</title>
	<p>The Matlab code used for controlling hardware and implementing the task, and all the code used to analyze the data, produce plots, and compute statistics are available at <xref ref-type="bibr" rid="r47">Salas et al. (2025a)</xref> and <xref ref-type="bibr" rid="r47.5">Salas et al. (2025b)</xref>. All the data collected and used to compute the results presented in this article are available at <xref ref-type="bibr" rid="r43">Rojas Libano (2025)</xref>. All the image files used as stimuli for the task trials are available at <xref ref-type="bibr" rid="r47">Salas et al. (2025a)</xref>. We report in the methods section how we determined our sample size, all data exclusions, manipulations, and all study measures. We also report effect sizes and confidence intervals. This study was not preregistered.</p></sec></sec>
<sec sec-type="results"><title>Results</title>
	<p>Participants completed the task in around 20 minutes (<italic>M</italic> = 23.7, <italic>SD</italic> = 4.6, <italic>n</italic> = 55) (see <xref ref-type="fig" rid="f2">Figure 2A</xref>). In general, participants did not omit response ratings: from a total of 2200 possible ratings (55 participants x 40 ratings each; 20 trials x 2 ratings/trial), we collected 2169 (98.6%), with 16 participants omitting 1 rating, 2 participants omitting 2 ratings, 2 participants omitting 3 ratings, and 1 participant omitting 5 ratings. The response time (time between the presentation of the rating screen and the completion of the rating response) for individual ratings of valence and intensity was around 2 seconds (<italic>Mdn</italic> = 2.44 s, <italic>M</italic> = 2.46 s, <italic>SD</italic> = 1.0 s, <italic>n</italic> = 2169 ratings), even though participants had 5 seconds to deliver the response (see <xref ref-type="fig" rid="f2">Figure 2A</xref>). Therefore, these responses, associated with the AD process, were relatively quick.</p><fig id="f2" position="anchor" fig-type="figure" orientation="portrait"><label>Figure 2</label><caption>
<title>Task Duration, Response Times, and Ratings for Each Image Type</title><p><italic>Note.</italic> A. Left: Distribution of the duration of the experimental session for the 55 participants. Right: Distribution of response times, in seconds, for all ratings. 55 participants x 20 trial/participant x 2 ratings/trial = 2200 ratings. The orange curve read in the right y-axis corresponds to the empirical cumulative distribution of these response time values. B. Mean Intensity (left) and Valence (right) ratings for each participant, sorted by image types. Each marker corresponds to the mean of all trials of a given stimulus type for the entire task. Grey lines link markers for the same participant.</p></caption><graphic xlink:href="ejop.15803-f2" position="anchor" orientation="portrait"/></fig>
	<p>The responses evoked by the unpleasant images were markedly different from those evoked by neutral ones, showing that the images elicited emotions in participants (<xref ref-type="fig" rid="f2">Figure 2B</xref>). On the 1 to 9 scale used, emotional intensity ratings were significantly higher for unpleasant than for neutral images (unpleasant: <italic>Mdn</italic> = 5.75, <italic>M</italic> = 5.99, <italic>SD</italic> = 1.56; neutral: <italic>Mdn</italic> = 2.25, <italic>M</italic> = 2.55, <italic>SD</italic> = 1.3), with a large effect size, <italic>t</italic>(54) = 13.57, <italic>p</italic> &lt; .01, <italic>g</italic> = 2.38, 95% <italic>CI</italic> [1.96, 2.96]. Correspondingly, emotional valence ratings on the 1 to 9 scale were lower for unpleasant than for neutral images (unpleasant: <italic>Mdn</italic> = 2.42, <italic>M</italic> = 2.5, <italic>SD</italic> = 1; neutral: <italic>Mdn</italic> = 5.38, <italic>M</italic> = 5.4, <italic>SD</italic> = 0.9), with a large effect size, <italic>t</italic>(54) = -13.52, <italic>p</italic> &lt; .001, <italic>g</italic> = -2.97, 95% <italic>CI</italic> [-4.52, -2.04]. These results show that the task worked as expected, eliciting emotional responses in participants that were detectable through their subjective reports. These effects held for all three trial sequences, showing that they depended on the image types and not on the specific sequence of images.</p>
<sec><title>Gaze Behavior</title>
<p>We monitored participants’ gaze through eye tracking to observe their instruction-following behavior while viewing the task images. Participants generally followed the instructions given (for some examples see <xref ref-type="fig" rid="f3">Figure 3A and 3B</xref>). Gaze behavior was quantified as the dwelling time of the gaze within the blue circle presented in the images, expressed as a percentage of the total time (4 s) of image presentation. The overall dwelling time was high (<italic>Mdn</italic> = 81.97%, <italic>M</italic> = 69.76%, <italic>SD</italic> = 30.6%, <italic>n</italic> = 43), meaning participants tended to keep their gaze within the circle when required (<xref ref-type="fig" rid="f3">Figure 3C</xref>, left-side plot). When we computed the difference in dwelling time between image types (neutral vs. unpleasant), we found that participants spent slightly less time within the circle for unpleasant than for neutral images (Neutral: <italic>Mdn</italic> = 86.3%, <italic>M</italic> = 71.94%, <italic>SD</italic> = 30.95%; Unpleasant: <italic>Mdn</italic> = 81.87%, <italic>M</italic> = 70.31%, <italic>SD</italic> = 28.96%), with <italic>t</italic>(41) = 4.39, <italic>p</italic> &lt; .01 (see <xref ref-type="fig" rid="f3">Figure 3C</xref>, right-side plot). When we considered the different experimental conditions (arousing focus vs. non-arousing focus), differences in dwelling time were smaller (Arousing: <italic>Mdn</italic> = 85.52%, <italic>M</italic> = 76.97%, <italic>SD</italic> = 23.43%; Non-arousing: <italic>Mdn</italic> = 82.43%, <italic>M</italic> = 76.92%, <italic>SD</italic> = 21.19%). In summary, these results show that participants engaged in the task and followed the instructions related to the visual attentional focus, irrespective of image or focus type.</p><fig id="f3" position="anchor" fig-type="figure" orientation="portrait"><label>Figure 3</label><caption>
<title>Gaze Data During the Task</title><p><italic>Note</italic>. A. Example of gaze trajectory during a single trial. In this trial, images were of the unpleasant type, and the focus was on a non-arousing area of the image. The blue and yellow dots mark the gaze position at the start and end of the image's 4-second period. The yellow line corresponds to the gaze trajectory. Since the images were presented in the sequence shown, the gaze end position of one image corresponds to the start position of the next. B. Heat maps of pixels visited by participants’ gaze. Data correspond to all trials where the image was presented, parsed by the three attentional conditions: focus-free (left), focus on a non-arousing area of the image (center), and focus on an arousing part (right). Spatial data from all participants were summed to obtain each heat map. C. The left plot shows the gaze dwell times in the focus circle as a percentage of the time the image was shown. Dots mark the mean and error bars the standard deviation; each marker corresponds to a participant. Some participants were excluded because of missing data. The right plot shows the dwell time difference between image types (Neutral minus Unpleasant) and between attentional conditions (Arousing minus Non-Arousing). Each dot represents one participant. The black bars correspond to the mean. * indicates a significant difference compared to a distribution of mean = 0.</p></caption><graphic xlink:href="ejop.15803-f3" position="anchor" orientation="portrait"/></fig></sec>
<sec><title>Attentional Deployment</title>
<p>Participants had 5 seconds to provide their emotional ratings. Within this temporal frame, they were faster to produce emotional ratings in the arousing than in the non-arousing focus condition. This difference was present for intensity (non-arousing <italic>Mdn</italic> = 2.43 s; arousing <italic>Mdn</italic> = 2.2 s), <italic>t</italic>(54) = -1.8, <italic>p</italic> = .03, <italic>g</italic> = -0.22 [-0.6, 0.16], with 60% of the participants showing faster responses for the arousing focus. The same was observed for valence ratings (non-arousing: <italic>Mdn</italic> = 2.93 s; arousing: <italic>Mdn</italic> = 2.66 s), <italic>t</italic>(54) = -3.8, <italic>p</italic> &lt; .01, <italic>g</italic> = -0.44 [-0.84, -0.09], where 67% of the participants displayed faster responses for the arousing focus (see <xref ref-type="fig" rid="f4">Figure 4A</xref>).</p><fig id="f4" position="anchor" fig-type="figure" orientation="portrait"><label>Figure 4</label><caption>
		<title>Intensity and Valence Ratings and AD Estimates</title><p><italic>Note.</italic> A. Response times for intensity (left) and valence (right) ratings for arousing and non-arousing focus. Black horizontal bars are the group means, and gray circles represent the means of individual participants across the task. Lines represent increases (purple) and decreases (orange) in response times. Pie charts represent the corresponding percentage of participants. B. Intensity (left) and valence (right) ratings for unpleasant images, sorted by focus type. Colors and insets as in A. C. AD estimates. Left: The mean rating for a given focus type was subtracted from the mean rating of the other focus type, separately for intensity and valence. Each circle represents one participant. Boxplots represent the distributions. Right: Same data from the left plot, with rating differences in valence and intensity plotted against each other.</p></caption><graphic xlink:href="ejop.15803-f4" position="anchor" orientation="portrait"/></fig>
<p>To further characterize the behavioral responses, we then assessed associations by computing Spearman’s rank correlation coefficient between the ratings’ response times and the value of the ratings, parsed by attentional focus condition. For the relation between intensity response times and intensity ratings, we obtained <italic>rho</italic> = -0.24, <italic>p</italic> = .07 (Non-Arousing focus), and <italic>rho</italic> = -0.51, <italic>p</italic> &lt; .01 (Arousing focus). For the relation between valence response times and valence ratings, we obtained <italic>rho</italic> = 0.19, <italic>p</italic> = .16 (Non-Arousing focus), and <italic>rho</italic> = 0.32, <italic>p</italic> = .02 (Arousing focus). Thus, specifically for the arousing focus condition, we report a small albeit detectable relationship: the higher the intensity rating, the shorter the response time, and conversely, the higher the valence rating, the longer the response time.</p>
<p>Emotional ratings were expressed on a 1 to 9 scale. For emotional intensity, we found that the non-arousing focus condition elicited lower ratings than the arousing focus, evidencing a regulation of the emotional experience (non-arousing: <italic>M</italic> = 5.5, <italic>SD</italic> = 1.67; arousing: <italic>M</italic> = 6.5, <italic>SD</italic> = 1.99), with <italic>t</italic>(54) = 4.37, <italic>p</italic> &lt; .01, <italic>g</italic> = 0.43 [0.06, 0.86]. Conversely, we found that for emotional valence, the non-arousing focus elicited higher ratings than the arousing focus, again evidencing regulation (non-arousing: <italic>M</italic> = 2.82, <italic>SD</italic> = 1.16; arousing: <italic>M</italic> = 2.34, <italic>SD</italic> = 1.23), with <italic>t</italic>(54) = -3.89, <italic>p</italic> &lt; .01, <italic>g</italic> = -0.4 [-0.85, -0.04]. These results are shown in <xref ref-type="fig" rid="f4">Figure 4B</xref>.</p>
<p>We then computed our AD estimates, constituted by the differences in rating between the non-arousing and arousing focus conditions. The rating difference was calculated as the participant’s mean rating for the non-arousing focus minus the mean rating for the arousing focus. Given that we had ratings for both emotional intensity and valence, we computed two estimates, one for each emotional dimension. For emotional intensity, we found a distribution centered on negative values, <italic>Mdn</italic> (<italic>IQR</italic>) = -1 (1.5), reflecting a decrease in emotional intensity in the presence of a non-arousing focus. The opposite was true for emotional valence, with a distribution of differences centered on positive values, <italic>Mdn</italic> (<italic>IQR</italic>) = 0.5 (1), reflecting an increase in valence for the non-arousing focus.</p>
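<p>Each AD estimate reduces to a per-participant difference of condition means, computed separately for intensity and for valence. A minimal Python sketch (the ratings in the example are invented):</p>

```python
import statistics as st

def ad_estimate(nonarousing_ratings, arousing_ratings):
    """AD estimate: mean rating (non-arousing focus) minus mean rating (arousing focus).

    For intensity, negative values reflect a decrease under the
    non-arousing focus; for valence, positive values reflect an increase.
    """
    return st.mean(nonarousing_ratings) - st.mean(arousing_ratings)

# Invented example: one participant's intensity ratings across trials
intensity_ad = ad_estimate([5, 5, 6, 6], [6, 7, 7, 8])  # 5.5 - 7 = -1.5
```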
<p>Finally, we assessed the relationship between the two estimates, to evaluate their congruence, using the Pearson product-moment correlation coefficient. We found a correlation between intensity and valence rating differences, <italic>r</italic><sup>2</sup> = 0.35, <italic>p</italic> &lt; .01. This result implies that, on average, the effects of changing attentional focus (i.e., attentional deployment) were consistent across the two emotional dimensions: the larger the effect on intensity, the larger the effect on valence. Therefore, our task produced both valence-based and intensity-based estimates of AD.</p>
<p>We also studied the ratings and rating differences for the images in the focus-free condition. In these cases, participants did not have an instructed attentional focus on the image and were free to explore it visually. Regarding response times, we found no differences between the focus-free and arousing focus conditions, either for intensity (focus-free <italic>Mdn</italic> = 2.4 s; arousing <italic>Mdn</italic> = 2.2 s), with <italic>t</italic>(54) = 1.42, <italic>p</italic> = .93, <italic>g</italic> = 0.13 [-0.24, 0.53], or for valence (focus-free <italic>Mdn</italic> = 2.68 s; arousing <italic>Mdn</italic> = 2.66 s), with <italic>t</italic>(54) = 0.44, <italic>p</italic> = .67, <italic>g</italic> = 0.04 [-0.34, 0.42]. When examining ratings, we found very small differences between the focus-free and arousing focus conditions, for intensity (focus-free: <italic>M</italic> = 6.5, <italic>SD</italic> = 1.76; arousing: <italic>M</italic> = 6.5, <italic>SD</italic> = 1.99), <italic>t</italic>(54) = 1.69, <italic>p</italic> = .05, <italic>g</italic> = 0.12 [-0.26, 0.46], and also for valence (focus-free: <italic>M</italic> = 2.1, <italic>SD</italic> = 1.08; arousing: <italic>M</italic> = 2.34, <italic>SD</italic> = 1.23), <italic>t</italic>(54) = -2.72, <italic>p</italic> = .003, <italic>g</italic> = -0.2 [-0.57, 0.15]. These small differences indicate that the focus-free attentional condition elicited emotional responses similar to those of the arousing focus condition.</p></sec>
<sec><title>ANT</title>
<p>The ANT allowed us to measure performance in three types of attention or attentional systems (<xref ref-type="bibr" rid="r16">Fan et al., 2002</xref>). Data from this sample showed a progressive increase in the time cost incurred by participants from one attentional sub-task to the next, reflecting their increasing cognitive load, from alerting to orientation to executive control: Alerting Network (<italic>Mdn</italic> = 33 ms, <italic>M</italic> = 35.46 ms, <italic>SD</italic> = 22.88 ms), Orientation Network (<italic>Mdn</italic> = 46 ms, <italic>M</italic> = 49.43 ms, <italic>SD</italic> = 20.1 ms), Executive Network (<italic>Mdn</italic> = 111 ms, <italic>M</italic> = 116.16 ms, <italic>SD</italic> = 30.82 ms). When error rates were analyzed, we found a sharp increase in the percentage of errors in the Executive Network task compared to the Alerting and Orientation Network tasks: Alerting Network (<italic>Mdn</italic> = 0%, <italic>M</italic> = 0.66%, <italic>SD</italic> = 1.21%), Orientation Network (<italic>Mdn</italic> = 0%, <italic>M</italic> = 0.26%, <italic>SD</italic> = 0.78%), Executive Network (<italic>Mdn</italic> = 4%, <italic>M</italic> = 4.86%, <italic>SD</italic> = 5.27%). A key question of this study was the relationship between attentional ability (measured by performance on the ANT) and AD ability (the AD estimate). Contrary to our hypotheses, no relationship was found between AD ability and performance on the attentional tasks (see <xref ref-type="fig" rid="f5">Figure 5B</xref>). We examined the Pearson product-moment correlation coefficient between the intensity-based AD estimate and the Alerting Network, <italic>r</italic> = 0.31, <italic>p</italic> = .06, Orientation Network, <italic>r</italic> = 0.11, <italic>p</italic> = .53, and Executive Network, <italic>r</italic> = 0.14, <italic>p</italic> = .43, subdomains of the test. The same was true for the valence-based AD estimate and the Alerting Network, <italic>r</italic> = -0.33, <italic>p</italic> = .05, Orientation Network, <italic>r</italic> = -0.16, <italic>p</italic> = .34, and Executive Network, <italic>r</italic> = -0.27, <italic>p</italic> = .11, although in this case all correlation values were negative. These results suggest that attentional capacities, as assessed by the ANT, do not explain the observed AD ability in our sample of participants.</p><fig id="f5" position="anchor" fig-type="figure" orientation="portrait"><label>Figure 5</label><caption>
<title>Relationship Between ANT Scores and AD Estimates</title><p><italic>Note.</italic> Left: Intensity rating difference as a function of ANT scores, parsed by the attentional networks assessed by the ANT. Right: Valence rating difference as a function of ANT scores, parsed by the attentional networks.</p></caption><graphic xlink:href="ejop.15803-f5" position="anchor" orientation="portrait"/></fig>
<p>In summary, we replicated a previously published AD task (<xref ref-type="bibr" rid="r17">Ferri et al., 2013</xref>, <xref ref-type="bibr" rid="r18">2016</xref>), extending the sample to a different country, and studied some of its behavioral and neuropsychological correlates in more detail. This is a simple and brief experimental task that takes around 20 minutes to complete. In terms of emotional elicitation, our data showed that the task is effective: emotional intensity ratings increased for unpleasant images compared to neutral ones, and emotional valence ratings were lower for unpleasant images (see <xref ref-type="fig" rid="f2">Figure 2</xref>). In terms of behavioral correlates, we observed that, for unpleasant images, rating response times were shorter for an arousing than for a non-arousing attentional focus. We also found that, specifically for the arousing focus conditions, intensity ratings were inversely related to response times, whereas valence ratings were positively associated with response times. In addition, eye tracking data showed that participants generally followed the instructions, keeping their gaze within the focus circle, with only a slightly lower dwelling time for unpleasant than for neutral images. Regarding estimates of AD, we operationalized AD as the rating difference between attentional conditions (Non-Arousing and Arousing focus). We observed that the manipulation of attentional focus generated a decrease in emotional intensity and an increase in emotional valence, consistent with previous reports. As for the neuropsychological correlates of AD, no significant relationships were found between AD ability and performance on any of the ANT subtasks.</p></sec></sec>
<sec sec-type="discussion"><title>Discussion</title>
<p>This study aimed to replicate a previously published experimental task devised to estimate AD capacities, analyze emotion regulation (AD) behavioral performance in detail, and study the relation between AD and attentional capacities measured through a neuropsychological test. Our results provide several insights into the AD process specifically and the emotion-cognition interaction more generally.</p>
<p>Our study extends the original sample of participants, which consisted of English speakers from the USA; in contrast, we used a sample of Spanish speakers from the global south. This contributes to addressing the challenge of sample diversity, ensuring that research encompasses a wide range of populations for a more comprehensive understanding (<xref ref-type="bibr" rid="r33">Kopal et al., 2023</xref>). The replication implemented in our study extends reported results to previously unexplored participant samples, thus contributing to bridging these gaps. However, we sought not only to replicate the task but also to extend and further characterize its findings. In this regard, we successfully replicated the task's main effect: the change in intensity rating when switching the attentional focus from arousing to non-arousing portions of unpleasant images. In this case, our effect size was <italic>g</italic> = 0.43 [0.06, 0.86], smaller than the <italic>g</italic> = 1.4 previously reported by <xref ref-type="bibr" rid="r17">Ferri et al. (2013)</xref>.</p>
<p>Several differences between our study and Ferri et al.’s could account for the difference in effect size. First, there are the general issues inherent to repeating a task with a sample from a different country, year, language, and culture. We cannot know which of these factors matters most in this case, but they have been described as crucial in assessing cognitive and emotional processes (<xref ref-type="bibr" rid="r4">Barrett, 2012</xref>; <xref ref-type="bibr" rid="r5">Barrett et al., 2007</xref>; <xref ref-type="bibr" rid="r23">Gutchess &amp; Rajaram, 2023</xref>; <xref ref-type="bibr" rid="r27">Immordino-Yang et al., 2016</xref>). A second set of differences is specific to the experimental design: we used a slightly larger sample (<italic>n</italic> = 55 compared to <italic>n</italic> = 41), which could account for some of the difference, as larger samples tend to yield less inflated effect estimates (<xref ref-type="bibr" rid="r35">Loken &amp; Gelman, 2017</xref>). Ferri et al. used a reduced rating scale (1 to 4, compared to our 1 to 9), which could also explain some of the difference, as our effect changed from <italic>g</italic> = 0.4 to <italic>g</italic> = 0.5 when we remapped our ratings to the reduced scale. In addition, Ferri et al. asked participants to rate emotional intensity only, whereas we asked for both intensity and valence, which could have imposed a larger cognitive load on our participants. Finally, our participants sat in an experimental room, whereas in Ferri et al.’s study they were inside a magnetic resonance scanner, which can affect the behavioral outcomes of a task, most probably in a paradigm-specific manner (<xref ref-type="bibr" rid="r3">Assecondi et al., 2010</xref>; <xref ref-type="bibr" rid="r31">Koch et al., 2003</xref>; <xref ref-type="bibr" rid="r32">Kolodny et al., 2022</xref>; <xref ref-type="bibr" rid="r51">van Maanen et al., 2016</xref>).</p>
<p>Effect size issues aside, the fact that not only intensity but also valence shifted in the predicted direction (an increase) when switching from arousing to non-arousing focus, with an effect size of <italic>g</italic> = 0.4, reinforces the robustness of the result and indicates that the task constitutes an experimental instance of AD.</p>
<sec><title>AD Estimates</title>
<p>Our analysis of the AD task proposed an estimate of AD capacity based on the difference in participants’ ratings between non-arousing and arousing focus conditions. We reasoned that the <italic>change</italic> in emotional rating, depending on the focus (i.e., attentional) condition, would constitute a behavioral readout of such an estimate. It is interesting to note that while our AD estimates clearly show that participants indeed regulate their emotions during the task, and do so consistently, the valence-based and intensity-based metrics show an important degree of inter-individual variability. We think that explaining this variability is an important task for better understanding AD as a process. One way forward in this regard is to study the physiological correlates of behavioral readouts. For instance, several reports have shown a relationship between emotion regulation capacities and physiological markers such as heart rate variability (<xref ref-type="bibr" rid="r38">Min et al., 2023</xref>). Another interesting way to unpack this variability is to consider personality or ER traits that may explain how individuals respond to negative visual stimuli and down-regulate their emotions. Neuroticism, for example, has been described as the tendency of some individuals to experience negative emotions and distress (<xref ref-type="bibr" rid="r39">Ormel et al., 2013</xref>). Several studies have offered evidence suggesting that neuroticism is associated with specific patterns of gaze behavior toward negative emotional stimuli (e.g., <xref ref-type="bibr" rid="r1">Armstrong et al., 2010</xref>) and may influence selective attention (<xref ref-type="bibr" rid="r42">Richards et al., 2014</xref>). Unfortunately, no instruments are available to measure individuals' disposition to use AD as a strategy. 
This contrasts with other ER strategies, such as reappraisal and suppression, which have received more attention from the research community and have more established measurement tools (<xref ref-type="bibr" rid="r37">McRae &amp; Gross, 2020</xref>). Future studies should develop tools that can contribute to measuring this ability.</p></sec>
<sec><title>Behavioral Correlates of AD</title>
<p>We aimed to fill the gap in the current literature regarding a deeper characterization of AD. Concerning behavioral correlates, interesting data were found in terms of response time. Specifically, participants were faster to produce emotional ratings in the arousing focus condition compared to the non-arousing focus condition. This difference was evident for both intensity and valence ratings, with 60% and 67% of participants, respectively, showing faster responses for the arousing focus. The difference was larger for the valence rating, which was the first rating participants had to report after watching IAPS images. Interestingly, we found associations between the ratings’ response times and the rating values, specifically for the arousing focus condition with unpleasant images: higher intensity ratings were associated with shorter response times and, conversely, lower (more negative) valence ratings were associated with faster response times. Furthermore, we observed that focusing attention on arousing portions of unpleasant emotional stimuli tends to prompt faster responses. This aligns well with previous studies, where faster recognition of emotional stimuli was associated with higher intensity and lower valence ratings, for emotional faces (<xref ref-type="bibr" rid="r48">Sato &amp; Yoshikawa, 2010</xref>), abstract emotional stimuli (<xref ref-type="bibr" rid="r6">Bartoszek &amp; Cervone, 2022</xref>), and emotional events in experience-sampling methodologies (<xref ref-type="bibr" rid="r2">Arndt et al., 2018</xref>). Overall, these data offer supporting evidence for the view that attentional focus is key to emotional experience and particularly to the process by which emotions are generated (<xref ref-type="bibr" rid="r49">Storbeck &amp; Clore, 2007</xref>).</p></sec>
<sec><title>Neuropsychological Correlates of AD</title>
<p>No robust associations were found when exploring the relationship between attentional capacities, evaluated through the ANT, and AD performance. Only performance on the Alerting Network exhibited a small negative association with valence-based AD (<italic>r</italic> = -.33, <italic>p</italic> = .05) and a small positive association with intensity-based AD (<italic>r</italic> = .31, <italic>p</italic> = .06). The Alerting Network has been closely linked to arousal (<xref ref-type="bibr" rid="r40">Petersen &amp; Posner, 2012</xref>) and is described as responsible for achieving and maintaining an alert state. According to the review by <xref ref-type="bibr" rid="r12">Compton (2003)</xref>, this network is crucial for processing emotional stimuli, as emotional states can enhance alertness and thus influence how attentional resources are allocated. Our data do not support the involvement of other attentional networks related to the <italic>selection</italic> of information, <italic>shifting</italic> attention (Orienting Network), or <italic>resolving conflicts</italic> amongst competing responses (Executive Network). One plausible explanation is that AD, as measured by our task, does not recruit these networks, since participants are simply instructed to fix their gaze on arousing and non-arousing areas of emotionally negative pictures. To further explore the relationship between AD and attentional abilities, future studies could replicate this task with other well-known neuropsychological tools that measure attention, such as the Paced Auditory Serial Addition Test (PASAT, <xref ref-type="bibr" rid="r21">Gronwall, 1977</xref>) or the Continuous Performance Test (CPT, <xref ref-type="bibr" rid="r13">Conners &amp; Sitarenios, 2011</xref>). 
Another relevant area to explore is the assessment of AD in individuals with attentional disorders due to neurological damage, where AD impairment has been described (<xref ref-type="bibr" rid="r46">Salas et al., 2019</xref>).</p></sec>
<sec sec-type="conclusions"><title>Conclusion</title>
<p>In conclusion, our study contributes to the growing body of literature on AD specifically, and emotion-cognition interaction more generally, by providing a detailed behavioral characterization of a task that measures this emotion regulation strategy. It also offers novel data regarding the attentional correlates of AD, suggesting a potential role for the Alerting Network. While some of our findings challenge existing assumptions about arousing focus conditions in AD paradigms, they also open new avenues for methodological refinement in this field. Future research should continue to investigate the cognitive and neural bases of AD, employing diverse measures and paradigms to unravel the complex interplay between attention, emotional reactivity, and emotion regulation.</p></sec></sec>
</body>
<back>
	
	
	<fn-group content-type="author-contribution">
		<fn fn-type="con">
<p><italic>Christian Salas</italic>: Conceptualisation, Methodology, Project administration, Funding acquisition, Supervision, Writing - original draft, Writing - review &amp; editing. <italic>Nicolas Núñez</italic>: Conceptualisation, Methodology, Software, Investigation, Data curation, Writing - review &amp; editing. <italic>Luz María Pozo</italic>: Software, Investigation, Data curation, Writing - review &amp; editing. <italic>Marko Bremer</italic>: Writing - review &amp; editing. <italic>Daniel Rojas-Líbano</italic>: Conceptualisation, Methodology, Software, Data curation, Formal analysis, Funding acquisition, Investigation, Project administration, Resources, Supervision, Validation, Visualisation, Writing - original draft, Writing - review &amp; editing.
			</p>
		</fn>
	</fn-group>
	<sec sec-type="ethics-statement">
		<title>Ethics Statement</title>
		<p>The present study was reviewed and approved by the Ethical Committee at Universidad Diego Portales, Facultad de Psicología, Santiago, Chile. All study participants read, reviewed, and signed an informed consent document before participation in the study.</p>
	</sec>
	
	
<ref-list><title>References</title>
<ref id="r1"><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name name-style="western"><surname>Armstrong</surname>, <given-names>T.</given-names></string-name>, <string-name name-style="western"><surname>Olatunji</surname>, <given-names>B. O.</given-names></string-name>, <string-name name-style="western"><surname>Sarawgi</surname>, <given-names>S.</given-names></string-name>, &amp; <string-name name-style="western"><surname>Simmons</surname>, <given-names>C.</given-names></string-name></person-group> (<year>2010</year>). <article-title>Orienting and maintenance of gaze in contamination fear: Biases for disgust and fear cues.</article-title> <source>Behaviour Research and Therapy</source>, <volume>48</volume>(<issue>5</issue>), <fpage>402</fpage>–<lpage>408</lpage>. <pub-id pub-id-type="doi">10.1016/j.brat.2010.01.002</pub-id><pub-id pub-id-type="pmid">20138252</pub-id></mixed-citation></ref>
<ref id="r2"><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name name-style="western"><surname>Arndt</surname>, <given-names>C.</given-names></string-name>, <string-name name-style="western"><surname>Lischetzke</surname>, <given-names>T.</given-names></string-name>, <string-name name-style="western"><surname>Crayen</surname>, <given-names>C.</given-names></string-name>, &amp; <string-name name-style="western"><surname>Eid</surname>, <given-names>M.</given-names></string-name></person-group> (<year>2018</year>). <article-title>The assessment of emotional clarity via response times to emotion items: Shedding light on the response process and its relation to emotion regulation strategies.</article-title> <source>Cognition and Emotion</source>, <volume>32</volume>(<issue>3</issue>), <fpage>530</fpage>–<lpage>548</lpage>. <pub-id pub-id-type="doi">10.1080/02699931.2017.1322039</pub-id><pub-id pub-id-type="pmid">28482749</pub-id></mixed-citation></ref>
<ref id="r3"><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name name-style="western"><surname>Assecondi</surname>, <given-names>S.</given-names></string-name>, <string-name name-style="western"><surname>Vanderperren</surname>, <given-names>K.</given-names></string-name>, <string-name name-style="western"><surname>Novitskiy</surname>, <given-names>N.</given-names></string-name>, <string-name name-style="western"><surname>Ramautar</surname>, <given-names>J. R.</given-names></string-name>, <string-name name-style="western"><surname>Fias</surname>, <given-names>W.</given-names></string-name>, <string-name name-style="western"><surname>Staelens</surname>, <given-names>S.</given-names></string-name>, <string-name name-style="western"><surname>Stiers</surname>, <given-names>P.</given-names></string-name>, <string-name name-style="western"><surname>Sunaert</surname>, <given-names>S.</given-names></string-name>, <string-name name-style="western"><surname>Van Huffel</surname>, <given-names>S.</given-names></string-name>, &amp; <string-name name-style="western"><surname>Lemahieu</surname>, <given-names>I.</given-names></string-name></person-group> (<year>2010</year>). <article-title>Effect of the static magnetic field of the MR-scanner on ERPs: Evaluation of visual, cognitive and motor potentials.</article-title> <source>Clinical Neurophysiology</source>, <volume>121</volume>(<issue>5</issue>), <fpage>672</fpage>–<lpage>685</lpage>. <pub-id pub-id-type="doi">10.1016/j.clinph.2009.12.032</pub-id><pub-id pub-id-type="pmid">20097609</pub-id></mixed-citation></ref>
<ref id="r4"><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name name-style="western"><surname>Barrett</surname>, <given-names>L. F.</given-names></string-name></person-group> (<year>2012</year>). <article-title>Emotions are real.</article-title> <source>Emotion</source>, <volume>12</volume>(<issue>3</issue>), <fpage>413</fpage>–<lpage>429</lpage>. <pub-id pub-id-type="doi">10.1037/a0027555</pub-id><pub-id pub-id-type="pmid">22642358</pub-id></mixed-citation></ref>
<ref id="r5"><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name name-style="western"><surname>Barrett</surname>, <given-names>L. F.</given-names></string-name>, <string-name name-style="western"><surname>Mesquita</surname>, <given-names>B.</given-names></string-name>, <string-name name-style="western"><surname>Ochsner</surname>, <given-names>K. N.</given-names></string-name>, &amp; <string-name name-style="western"><surname>Gross</surname>, <given-names>J. J.</given-names></string-name></person-group> (<year>2007</year>). <article-title>The experience of emotion.</article-title> <source>Annual Review of Psychology</source>, <volume>58</volume>(<issue>1</issue>), <fpage>373</fpage>–<lpage>403</lpage>. <pub-id pub-id-type="doi">10.1146/annurev.psych.58.110405.085709</pub-id><pub-id pub-id-type="pmid">17002554</pub-id></mixed-citation></ref>
<ref id="r6"><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name name-style="western"><surname>Bartoszek</surname>, <given-names>G.</given-names></string-name>, &amp; <string-name name-style="western"><surname>Cervone</surname>, <given-names>D.</given-names></string-name></person-group> (<year>2022</year>). <article-title>Measuring distinct emotional states implicitly: The role of response speed.</article-title> <source>Emotion</source>, <volume>22</volume>(<issue>5</issue>), <fpage>954</fpage>–<lpage>970</lpage>. <pub-id pub-id-type="doi">10.1037/emo0000894</pub-id><pub-id pub-id-type="pmid">32852963</pub-id></mixed-citation></ref>
<ref id="r7"><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name name-style="western"><surname>Brainard</surname>, <given-names>D. H.</given-names></string-name></person-group> (<year>1997</year>). <article-title>The psychophysics toolbox.</article-title> <source>Spatial Vision</source>, <volume>10</volume>(<issue>4</issue>), <fpage>433</fpage>–<lpage>436</lpage>. <pub-id pub-id-type="doi">10.1163/156856897X00357</pub-id></mixed-citation></ref>
<ref id="r8"><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name name-style="western"><surname>Bower</surname>, <given-names>G. H.</given-names></string-name>, <string-name name-style="western"><surname>Sahgal</surname>, <given-names>A.</given-names></string-name>, &amp; <string-name name-style="western"><surname>Routh</surname>, <given-names>D. A.</given-names></string-name></person-group> (<year>1983</year>). <article-title>Affect and cognition.</article-title> <source>Philosophical Transactions of the Royal Society of London</source>, <volume>302</volume>(<issue>1110</issue>), <fpage>387</fpage>–<lpage>402</lpage>.</mixed-citation></ref>
<ref id="r9"><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name name-style="western"><surname>Button</surname>, <given-names>K. S.</given-names></string-name>, <string-name name-style="western"><surname>Ioannidis</surname>, <given-names>J. P. A.</given-names></string-name>, <string-name name-style="western"><surname>Mokrysz</surname>, <given-names>C.</given-names></string-name>, <string-name name-style="western"><surname>Nosek</surname>, <given-names>B. A.</given-names></string-name>, <string-name name-style="western"><surname>Flint</surname>, <given-names>J.</given-names></string-name>, <string-name name-style="western"><surname>Robinson</surname>, <given-names>E. S. J.</given-names></string-name>, &amp; <string-name name-style="western"><surname>Munafò</surname>, <given-names>M. R.</given-names></string-name></person-group> (<year>2013</year>). <article-title>Power failure: Why small sample size undermines the reliability of neuroscience.</article-title> <source>Nature Reviews Neuroscience</source>, <volume>14</volume>(<issue>5</issue>), <fpage>365</fpage>–<lpage>376</lpage>. <pub-id pub-id-type="doi">10.1038/nrn3475</pub-id><pub-id pub-id-type="pmid">23571845</pub-id></mixed-citation></ref>
<ref id="r10"><mixed-citation publication-type="book">Clore, G., &amp; Schiller, A. (2016). New light on the affect-cognition connection. In L. Feldman, M. Lewis &amp; J. Haviland-Jones (Eds.), <italic>Handbook of emotions</italic> (pp. 532–546). Guilford Press.</mixed-citation></ref>
<ref id="r11"><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name name-style="western"><surname>Compton</surname>, <given-names>R. J.</given-names></string-name></person-group> (<year>2000</year>). <article-title>Ability to disengage attention predicts negative affect.</article-title> <source>Cognition and Emotion</source>, <volume>14</volume>(<issue>3</issue>), <fpage>401</fpage>–<lpage>415</lpage>. <pub-id pub-id-type="doi">10.1080/026999300378897</pub-id></mixed-citation></ref>
<ref id="r12"><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name name-style="western"><surname>Compton</surname>, <given-names>R. J.</given-names></string-name></person-group> (<year>2003</year>). <article-title>The interface between emotion and attention: A review of evidence from psychology and neuroscience.</article-title> <source>Behavioral and Cognitive Neuroscience Reviews</source>, <volume>2</volume>(<issue>2</issue>), <fpage>115</fpage>–<lpage>129</lpage>. <pub-id pub-id-type="doi">10.1177/1534582303255278</pub-id></mixed-citation></ref>
<ref id="r13"><mixed-citation publication-type="book">Conners, C. K., &amp; Sitarenios, G. (2011). Conners’ Continuous Performance Test (CPT). In J. S. Kreutzer, J. DeLuca &amp; B. Caplan (Eds.), <italic>Encyclopedia of clinical neuropsychology</italic> (pp. 681–683). Springer New York.</mixed-citation></ref>
<ref id="r14"><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name name-style="western"><surname>Dunning</surname>, <given-names>J. P.</given-names></string-name>, &amp; <string-name name-style="western"><surname>Hajcak</surname>, <given-names>G.</given-names></string-name></person-group> (<year>2009</year>). <article-title>See no evil: Directing visual attention within unpleasant images modulates the electrocortical response.</article-title> <source>Psychophysiology</source>, <volume>46</volume>(<issue>1</issue>), <fpage>28</fpage>–<lpage>33</lpage>. <pub-id pub-id-type="doi">10.1111/j.1469-8986.2008.00723.x</pub-id><pub-id pub-id-type="pmid">18992071</pub-id></mixed-citation></ref>
<ref id="r15"><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name name-style="western"><surname>Ernst</surname>, <given-names>M. D.</given-names></string-name></person-group> (<year>2004</year>). <article-title>Permutation methods: A basis for exact inference.</article-title> <source>Statistical Science</source>, <volume>19</volume>(<issue>4</issue>), <fpage>676</fpage>–<lpage>685</lpage>. <pub-id pub-id-type="doi">10.1214/088342304000000396</pub-id></mixed-citation></ref>
<ref id="r16"><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name name-style="western"><surname>Fan</surname>, <given-names>J.</given-names></string-name>, <string-name name-style="western"><surname>McCandliss</surname>, <given-names>B. D.</given-names></string-name>, <string-name name-style="western"><surname>Sommer</surname>, <given-names>T.</given-names></string-name>, <string-name name-style="western"><surname>Raz</surname>, <given-names>A.</given-names></string-name>, &amp; <string-name name-style="western"><surname>Posner</surname>, <given-names>M. I.</given-names></string-name></person-group> (<year>2002</year>). <article-title>Testing the efficiency and independence of attentional networks.</article-title> <source>Journal of Cognitive Neuroscience</source>, <volume>14</volume>(<issue>3</issue>), <fpage>340</fpage>–<lpage>347</lpage>. <pub-id pub-id-type="doi">10.1162/089892902317361886</pub-id><pub-id pub-id-type="pmid">11970796</pub-id></mixed-citation></ref>
<ref id="r17"><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name name-style="western"><surname>Ferri</surname>, <given-names>J.</given-names></string-name>, <string-name name-style="western"><surname>Schmidt</surname>, <given-names>J.</given-names></string-name>, <string-name name-style="western"><surname>Hajcak</surname>, <given-names>G.</given-names></string-name>, &amp; <string-name name-style="western"><surname>Canli</surname>, <given-names>T.</given-names></string-name></person-group> (<year>2013</year>). <article-title>Neural correlates of attentional deployment within unpleasant pictures.</article-title> <source>NeuroImage</source>, <volume>70</volume>, <fpage>268</fpage>–<lpage>277</lpage>. <pub-id pub-id-type="doi">10.1016/j.neuroimage.2012.12.030</pub-id><pub-id pub-id-type="pmid">23270876</pub-id></mixed-citation></ref>
<ref id="r18"><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name name-style="western"><surname>Ferri</surname>, <given-names>J.</given-names></string-name>, <string-name name-style="western"><surname>Schmidt</surname>, <given-names>J.</given-names></string-name>, <string-name name-style="western"><surname>Hajcak</surname>, <given-names>G.</given-names></string-name>, &amp; <string-name name-style="western"><surname>Canli</surname>, <given-names>T.</given-names></string-name></person-group> (<year>2016</year>). <article-title>Emotion regulation and amygdala-precuneus connectivity: Focusing on attentional deployment.</article-title> <source>Cognitive, Affective &amp; Behavioral Neuroscience</source>, <volume>16</volume>(<issue>6</issue>), <fpage>991</fpage>–<lpage>1002</lpage>. <pub-id pub-id-type="doi">10.3758/s13415-016-0447-y</pub-id><pub-id pub-id-type="pmid">27444935</pub-id></mixed-citation></ref>
<ref id="r19"><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name name-style="western"><surname>Forgas</surname>, <given-names>J. P.</given-names></string-name></person-group> (<year>2008</year>). <article-title>Affect and cognition.</article-title> <source>Perspectives on Psychological Science</source>, <volume>3</volume>(<issue>2</issue>), <fpage>94</fpage>–<lpage>101</lpage>. <pub-id pub-id-type="doi">10.1111/j.1745-6916.2008.00067.x</pub-id></mixed-citation></ref>
<ref id="r20"><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name name-style="western"><surname>Gelman</surname>, <given-names>A.</given-names></string-name>, &amp; <string-name name-style="western"><surname>Carlin</surname>, <given-names>J.</given-names></string-name></person-group> (<year>2014</year>). <article-title>Beyond power calculations: Assessing Type S (Sign) and Type M (Magnitude) errors.</article-title> <source>Perspectives on Psychological Science</source>, <volume>9</volume>(<issue>6</issue>), <fpage>641</fpage>–<lpage>651</lpage>. <pub-id pub-id-type="doi">10.1177/1745691614551642</pub-id></mixed-citation></ref>
<ref id="r21"><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name name-style="western"><surname>Gronwall</surname>, <given-names>D. M.</given-names></string-name></person-group> (<year>1977</year>). <article-title>Paced auditory serial-addition task: A measure of recovery from concussion.</article-title> <source>Perceptual and Motor Skills</source>, <volume>44</volume>(<issue>2</issue>), <fpage>367</fpage>–<lpage>373</lpage>. <pub-id pub-id-type="doi">10.2466/pms.1977.44.2.367</pub-id><pub-id pub-id-type="pmid">866038</pub-id></mixed-citation></ref>
<ref id="r22"><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name name-style="western"><surname>Gross</surname>, <given-names>J. J.</given-names></string-name></person-group> (<year>1998</year>). <article-title>The emerging field of emotion regulation: An integrative review.</article-title> <source>Review of General Psychology</source>, <volume>2</volume>(<issue>3</issue>), <fpage>271</fpage>–<lpage>299</lpage>. <pub-id pub-id-type="doi">10.1037/1089-2680.2.3.271</pub-id></mixed-citation></ref>
<ref id="r23"><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name name-style="western"><surname>Gutchess</surname>, <given-names>A.</given-names></string-name>, &amp; <string-name name-style="western"><surname>Rajaram</surname>, <given-names>S.</given-names></string-name></person-group> (<year>2023</year>). <article-title>Consideration of culture in cognition: How we can enrich methodology and theory.</article-title> <source>Psychonomic Bulletin &amp; Review</source>, <volume>30</volume>(<issue>3</issue>), <fpage>914</fpage>–<lpage>931</lpage>. <pub-id pub-id-type="doi">10.3758/s13423-022-02227-5</pub-id><pub-id pub-id-type="pmid">36510095</pub-id></mixed-citation></ref>
<ref id="r24"><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name name-style="western"><surname>Hajcak</surname>, <given-names>G.</given-names></string-name>, <string-name name-style="western"><surname>Dunning</surname>, <given-names>J. P.</given-names></string-name>, &amp; <string-name name-style="western"><surname>Foti</surname>, <given-names>D.</given-names></string-name></person-group> (<year>2009</year>). <article-title>Motivated and controlled attention to emotion: Time-course of the late positive potential.</article-title> <source>Clinical Neurophysiology</source>, <volume>120</volume>(<issue>3</issue>), <fpage>505</fpage>–<lpage>510</lpage>. <pub-id pub-id-type="doi">10.1016/j.clinph.2008.11.028</pub-id><pub-id pub-id-type="pmid">19157974</pub-id></mixed-citation></ref>
<ref id="r25"><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name name-style="western"><surname>Hedges</surname>, <given-names>L. V.</given-names></string-name></person-group> (<year>1981</year>). <article-title>Distribution theory for Glass’s estimator of effect size and related estimators.</article-title> <source>Journal of Educational and Behavioral Statistics</source>, <volume>6</volume>(<issue>2</issue>), <fpage>107</fpage>–<lpage>128</lpage>. <pub-id pub-id-type="doi">10.3102/10769986006002107</pub-id></mixed-citation></ref>
<ref id="r26"><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name name-style="western"><surname>Hentschke</surname>, <given-names>H.</given-names></string-name>, &amp; <string-name name-style="western"><surname>Stüttgen</surname>, <given-names>M. C.</given-names></string-name></person-group> (<year>2011</year>). <article-title>Computation of measures of effect size for neuroscience data sets.</article-title> <source>European Journal of Neuroscience</source>, <volume>34</volume>(<issue>12</issue>), <fpage>1887</fpage>–<lpage>1894</lpage>. <pub-id pub-id-type="doi">10.1111/j.1460-9568.2011.07902.x</pub-id><pub-id pub-id-type="pmid">22082031</pub-id></mixed-citation></ref>
<ref id="r27"><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name name-style="western"><surname>Immordino-Yang</surname>, <given-names>M. H.</given-names></string-name>, <string-name name-style="western"><surname>Yang</surname>, <given-names>X.-F.</given-names></string-name>, &amp; <string-name name-style="western"><surname>Damasio</surname>, <given-names>H.</given-names></string-name></person-group> (<year>2016</year>). <article-title>Cultural modes of expressing emotions influence how emotions are experienced.</article-title> <source>Emotion</source>, <volume>16</volume>(<issue>7</issue>), <fpage>1033</fpage>–<lpage>1039</lpage>. <pub-id pub-id-type="doi">10.1037/emo0000201</pub-id><pub-id pub-id-type="pmid">27270077</pub-id></mixed-citation></ref>
<ref id="r28"><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name name-style="western"><surname>Isaacowitz</surname>, <given-names>D. M.</given-names></string-name>, <string-name name-style="western"><surname>Toner</surname>, <given-names>K.</given-names></string-name>, &amp; <string-name name-style="western"><surname>Neupert</surname>, <given-names>S. D.</given-names></string-name></person-group> (<year>2009</year>). <article-title>Use of gaze for real-time mood regulation: Effects of age and attentional functioning.</article-title> <source>Psychology and Aging</source>, <volume>24</volume>(<issue>4</issue>), <fpage>989</fpage>–<lpage>994</lpage>. <pub-id pub-id-type="doi">10.1037/a0017706</pub-id><pub-id pub-id-type="pmid">20025412</pub-id></mixed-citation></ref>
<ref id="r29"><mixed-citation publication-type="book">Isen, A. M. (1984). Toward understanding the role of affect in cognition. In R. S. Wyer, Jr. &amp; T. K. Srull (Eds.), <italic>Handbook of social cognition</italic> (Vol. 3, pp. 179–236). Lawrence Erlbaum Associates Publishers.</mixed-citation></ref>
<ref id="r30"><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name name-style="western"><surname>Kleiner</surname>, <given-names>M.</given-names></string-name>, <string-name name-style="western"><surname>Brainard</surname>, <given-names>D.</given-names></string-name>, <string-name name-style="western"><surname>Pelli</surname>, <given-names>D.</given-names></string-name>, <string-name name-style="western"><surname>Ingling</surname>, <given-names>A.</given-names></string-name>, <string-name name-style="western"><surname>Murray</surname>, <given-names>R.</given-names></string-name>, &amp; <string-name name-style="western"><surname>Broussard</surname>, <given-names>C.</given-names></string-name></person-group> (<year>2007</year>). <article-title>What’s new in Psychtoolbox-3?</article-title> <source>Perception</source>, <volume>36</volume>(<issue>14</issue>). <ext-link ext-link-type="uri" xlink:href="https://nyuscholars.nyu.edu/en/publications/whats-new-in-psychtoolbox-3">https://nyuscholars.nyu.edu/en/publications/whats-new-in-psychtoolbox-3</ext-link></mixed-citation></ref>
<ref id="r31"><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name name-style="western"><surname>Koch</surname>, <given-names>I.</given-names></string-name>, <string-name name-style="western"><surname>Ruge</surname>, <given-names>H.</given-names></string-name>, <string-name name-style="western"><surname>Brass</surname>, <given-names>M.</given-names></string-name>, <string-name name-style="western"><surname>Rubin</surname>, <given-names>O.</given-names></string-name>, <string-name name-style="western"><surname>Meiran</surname>, <given-names>N.</given-names></string-name>, &amp; <string-name name-style="western"><surname>Prinz</surname>, <given-names>W.</given-names></string-name></person-group> (<year>2003</year>). <article-title>Equivalence of cognitive processes in brain imaging and behavioral studies: Evidence from task switching.</article-title> <source>NeuroImage</source>, <volume>20</volume>(<issue>1</issue>), <fpage>572</fpage>–<lpage>577</lpage>. <pub-id pub-id-type="doi">10.1016/S1053-8119(03)00206-4</pub-id><pub-id pub-id-type="pmid">14527617</pub-id></mixed-citation></ref>
<ref id="r32"><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name name-style="western"><surname>Kolodny</surname>, <given-names>T.</given-names></string-name>, <string-name name-style="western"><surname>Mevorach</surname>, <given-names>C.</given-names></string-name>, <string-name name-style="western"><surname>Stern</surname>, <given-names>P.</given-names></string-name>, <string-name name-style="western"><surname>Ankaoua</surname>, <given-names>M.</given-names></string-name>, <string-name name-style="western"><surname>Dankner</surname>, <given-names>Y.</given-names></string-name>, <string-name name-style="western"><surname>Tsafrir</surname>, <given-names>S.</given-names></string-name>, &amp; <string-name name-style="western"><surname>Shalev</surname>, <given-names>L.</given-names></string-name></person-group> (<year>2022</year>). <article-title>Are attention and cognitive control altered by fMRI scanner environment? Evidence from Go/No-go tasks in ADHD.</article-title> <source>Brain Imaging and Behavior</source>, <volume>16</volume>(<issue>3</issue>), <fpage>1003</fpage>–<lpage>1013</lpage>. <pub-id pub-id-type="doi">10.1007/s11682-021-00557-x</pub-id><pub-id pub-id-type="pmid">34705186</pub-id></mixed-citation></ref>
<ref id="r33"><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name name-style="western"><surname>Kopal</surname>, <given-names>J.</given-names></string-name>, <string-name name-style="western"><surname>Uddin</surname>, <given-names>L. Q.</given-names></string-name>, &amp; <string-name name-style="western"><surname>Bzdok</surname>, <given-names>D.</given-names></string-name></person-group> (<year>2023</year>). <article-title>The end game: Respecting major sources of population diversity.</article-title> <source>Nature Methods</source>, <volume>20</volume>(<issue>8</issue>), <fpage>1122</fpage>–<lpage>1128</lpage>. <pub-id pub-id-type="doi">10.1038/s41592-023-01812-3</pub-id><pub-id pub-id-type="pmid">36869122</pub-id></mixed-citation></ref>
<ref id="r34"><mixed-citation publication-type="web">Lang, P. J., Bradley, M. M., &amp; Cuthbert, B. N. (2008). <italic>International Affective Picture System (IAPS): Affective ratings of pictures and instruction manual (Technical Report A-8)</italic>. University of Florida, Gainesville. <ext-link ext-link-type="uri" xlink:href="https://gitlab.pavlovia.org/rsaitov/experimental-psycholoy-ltu-final/raw/d3b3ec5364d25179c983d82c944baf04e06fd7ee/IAPS.TechManual.1-20.2008.pdf">https://gitlab.pavlovia.org/rsaitov/experimental-psycholoy-ltu-final/raw/d3b3ec5364d25179c983d82c944baf04e06fd7ee/IAPS.TechManual.1-20.2008.pdf</ext-link></mixed-citation></ref>
<ref id="r35"><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name name-style="western"><surname>Loken</surname>, <given-names>E.</given-names></string-name>, &amp; <string-name name-style="western"><surname>Gelman</surname>, <given-names>A.</given-names></string-name></person-group> (<year>2017</year>). <article-title>Measurement error and the replication crisis.</article-title> <source>Science</source>, <volume>355</volume>(<issue>6325</issue>), <fpage>584</fpage>–<lpage>585</lpage>. <pub-id pub-id-type="doi">10.1126/science.aal3618</pub-id><pub-id pub-id-type="pmid">28183939</pub-id></mixed-citation></ref>
<ref id="r36"><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name name-style="western"><surname>Maris</surname>, <given-names>E.</given-names></string-name>, &amp; <string-name name-style="western"><surname>Oostenveld</surname>, <given-names>R.</given-names></string-name></person-group> (<year>2007</year>). <article-title>Nonparametric statistical testing of EEG- and MEG-data.</article-title> <source>Journal of Neuroscience Methods</source>, <volume>164</volume>(<issue>1</issue>), <fpage>177</fpage>–<lpage>190</lpage>. <pub-id pub-id-type="doi">10.1016/j.jneumeth.2007.03.024</pub-id><pub-id pub-id-type="pmid">17517438</pub-id></mixed-citation></ref>
<ref id="r37"><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name name-style="western"><surname>McRae</surname>, <given-names>K.</given-names></string-name>, &amp; <string-name name-style="western"><surname>Gross</surname>, <given-names>J. J.</given-names></string-name></person-group> (<year>2020</year>). <article-title>Emotion regulation.</article-title> <source>Emotion</source>, <volume>20</volume>(<issue>1</issue>), <fpage>1</fpage>–<lpage>9</lpage>. <pub-id pub-id-type="doi">10.1037/emo0000703</pub-id><pub-id pub-id-type="pmid">31961170</pub-id></mixed-citation></ref>
<ref id="r38"><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name name-style="western"><surname>Min</surname>, <given-names>J.</given-names></string-name>, <string-name name-style="western"><surname>Koenig</surname>, <given-names>J.</given-names></string-name>, <string-name name-style="western"><surname>Nashiro</surname>, <given-names>K.</given-names></string-name>, <string-name name-style="western"><surname>Yoo</surname>, <given-names>H. J.</given-names></string-name>, <string-name name-style="western"><surname>Cho</surname>, <given-names>C.</given-names></string-name>, <string-name name-style="western"><surname>Thayer</surname>, <given-names>J. F.</given-names></string-name>, &amp; <string-name name-style="western"><surname>Mather</surname>, <given-names>M.</given-names></string-name></person-group> (<year>2023</year>). <article-title>Sex differences in neural correlates of emotion regulation in relation to resting heart rate variability.</article-title> <source>Brain Topography</source>, <volume>36</volume>(<issue>5</issue>), <fpage>698</fpage>–<lpage>709</lpage>. <pub-id pub-id-type="doi">10.1007/s10548-023-00974-9</pub-id><pub-id pub-id-type="pmid">37353651</pub-id></mixed-citation></ref>
<ref id="r39"><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name name-style="western"><surname>Ormel</surname>, <given-names>J.</given-names></string-name>, <string-name name-style="western"><surname>Jeronimus</surname>, <given-names>B. F.</given-names></string-name>, <string-name name-style="western"><surname>Kotov</surname>, <given-names>R.</given-names></string-name>, <string-name name-style="western"><surname>Riese</surname>, <given-names>H.</given-names></string-name>, <string-name name-style="western"><surname>Bos</surname>, <given-names>E. H.</given-names></string-name>, <string-name name-style="western"><surname>Hankin</surname>, <given-names>B.</given-names></string-name>, <string-name name-style="western"><surname>Rosmalen</surname>, <given-names>J. G. M.</given-names></string-name>, &amp; <string-name name-style="western"><surname>Oldehinkel</surname>, <given-names>A. J.</given-names></string-name></person-group> (<year>2013</year>). <article-title>Neuroticism and common mental disorders: Meaning and utility of a complex relationship.</article-title> <source>Clinical Psychology Review</source>, <volume>33</volume>(<issue>5</issue>), <fpage>686</fpage>–<lpage>697</lpage>. <pub-id pub-id-type="doi">10.1016/j.cpr.2013.04.003</pub-id><pub-id pub-id-type="pmid">23702592</pub-id></mixed-citation></ref>
<ref id="r40"><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name name-style="western"><surname>Petersen</surname>, <given-names>S. E.</given-names></string-name>, &amp; <string-name name-style="western"><surname>Posner</surname>, <given-names>M. I.</given-names></string-name></person-group> (<year>2012</year>). <article-title>The attention system of the human brain: 20 years after.</article-title> <source>Annual Review of Neuroscience</source>, <volume>35</volume>, <fpage>73</fpage>–<lpage>89</lpage>. <pub-id pub-id-type="doi">10.1146/annurev-neuro-062111-150525</pub-id></mixed-citation></ref>
<ref id="r41"><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name name-style="western"><surname>Posner</surname>, <given-names>M. I.</given-names></string-name>, <string-name name-style="western"><surname>Rothbart</surname>, <given-names>M. K.</given-names></string-name>, <string-name name-style="western"><surname>Sheese</surname>, <given-names>B. E.</given-names></string-name>, &amp; <string-name name-style="western"><surname>Voelker</surname>, <given-names>P.</given-names></string-name></person-group> (<year>2014</year>). <article-title>Developing attention: Behavioral and brain mechanisms.</article-title> <source>Advances in Neuroscience</source>, <volume>2014</volume>, <elocation-id>405094</elocation-id>. <pub-id pub-id-type="doi">10.1155/2014/405094</pub-id><pub-id pub-id-type="pmid">25110757</pub-id></mixed-citation></ref>
<ref id="r42"><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name name-style="western"><surname>Richards</surname>, <given-names>H. J.</given-names></string-name>, <string-name name-style="western"><surname>Benson</surname>, <given-names>V.</given-names></string-name>, <string-name name-style="western"><surname>Donnelly</surname>, <given-names>N.</given-names></string-name>, &amp; <string-name name-style="western"><surname>Hadwin</surname>, <given-names>J. A.</given-names></string-name></person-group> (<year>2014</year>). <article-title>Exploring the function of selective attention and hypervigilance for threat in anxiety.</article-title> <source>Clinical Psychology Review</source>, <volume>34</volume>(<issue>1</issue>), <fpage>1</fpage>–<lpage>13</lpage>. <pub-id pub-id-type="doi">10.1016/j.cpr.2013.10.006</pub-id><pub-id pub-id-type="pmid">24286750</pub-id></mixed-citation></ref>
	<ref id="r43"><mixed-citation publication-type="web">Rojas Libano, D. (2025). <italic>Attentional-Deployment-Data-2018-2021</italic> [Figshare project page containing behavioral and eye-tracking data]. Figshare. <pub-id pub-id-type="doi">10.6084/m9.figshare.26406529</pub-id></mixed-citation></ref>
<ref id="r44"><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name name-style="western"><surname>Salas</surname>, <given-names>C. E.</given-names></string-name>, <string-name name-style="western"><surname>Castro</surname>, <given-names>O.</given-names></string-name>, <string-name name-style="western"><surname>Yuen</surname>, <given-names>K. S.</given-names></string-name>, <string-name name-style="western"><surname>Radovic</surname>, <given-names>D.</given-names></string-name>, <string-name name-style="western"><surname>d’Avossa</surname>, <given-names>G.</given-names></string-name>, &amp; <string-name name-style="western"><surname>Turnbull</surname>, <given-names>O. H.</given-names></string-name></person-group> (<year>2016</year>). <article-title>“Just can’t hide it”: A behavioral and lesion study on emotional response modulation after right prefrontal damage.</article-title> <source>Social Cognitive and Affective Neuroscience</source>, <volume>11</volume>(<issue>10</issue>), <fpage>1528</fpage>–<lpage>1540</lpage>. <pub-id pub-id-type="doi">10.1093/scan/nsw075</pub-id><pub-id pub-id-type="pmid">27317928</pub-id></mixed-citation></ref>
	<ref id="r45"><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name name-style="western"><surname>Salas</surname>, <given-names>C. E.</given-names></string-name>, <string-name name-style="western"><surname>Gross</surname>, <given-names>J. J.</given-names></string-name>, &amp; <string-name name-style="western"><surname>Turnbull</surname>, <given-names>O. H.</given-names></string-name></person-group> (<year>2014</year>). <article-title>Reappraisal generation after acquired brain damage: The role of laterality and cognitive control.</article-title> <source>Frontiers in Psychology</source>, <volume>5</volume>, <elocation-id>242</elocation-id>. <pub-id pub-id-type="doi">10.3389/fpsyg.2014.00242</pub-id><pub-id pub-id-type="pmid">24711799</pub-id></mixed-citation></ref>
<ref id="r46"><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name name-style="western"><surname>Salas</surname>, <given-names>C. E.</given-names></string-name>, <string-name name-style="western"><surname>Gross</surname>, <given-names>J. J.</given-names></string-name>, &amp; <string-name name-style="western"><surname>Turnbull</surname>, <given-names>O. H.</given-names></string-name></person-group> (<year>2019</year>). <article-title>Using the process model to understand emotion regulation changes after brain injury.</article-title> <source>Psychology &amp; Neuroscience</source>, <volume>12</volume>(<issue>4</issue>), <fpage>430</fpage>–<lpage>450</lpage>. <pub-id pub-id-type="doi">10.1037/pne0000174</pub-id></mixed-citation></ref>
<ref id="r47"><mixed-citation publication-type="data">Salas, C., Núñez, N., Pozo, L. M., Bremer, M., &amp; Rojas-Líbano, D. (2025a). <italic>Attentional-Deployment-Task</italic> [GitHub project pages containing Matlab code for controlling hardware and implementing task; and the code used to analyze the data, produce plots, and compute statistics]. GitHub. <ext-link ext-link-type="uri" xlink:href="https://github.com/dirl75/Attentional-Deployment-Task">https://github.com/dirl75/Attentional-Deployment-Task</ext-link></mixed-citation></ref>
	<ref id="r47.5"><mixed-citation publication-type="data">Salas, C., Núñez, N., Pozo, L. M., Bremer, M., &amp; Rojas-Líbano, D. (2025b). <italic>Attentional-Deployment-Analysis</italic> [GitHub project page containing the image files used as stimuli for task trials]. GitHub. <ext-link ext-link-type="uri" xlink:href="https://github.com/dirl75/Attentional-Deployment-Analysis">https://github.com/dirl75/Attentional-Deployment-Analysis</ext-link></mixed-citation></ref>
<ref id="r48"><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name name-style="western"><surname>Sato</surname>, <given-names>W.</given-names></string-name>, &amp; <string-name name-style="western"><surname>Yoshikawa</surname>, <given-names>S.</given-names></string-name></person-group> (<year>2010</year>). <article-title>Detection of emotional facial expressions and anti-expressions.</article-title> <source>Visual Cognition</source>, <volume>18</volume>(<issue>3</issue>), <fpage>369</fpage>–<lpage>388</lpage>. <pub-id pub-id-type="doi">10.1080/13506280902767763</pub-id></mixed-citation></ref>
<ref id="r49"><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name name-style="western"><surname>Storbeck</surname>, <given-names>J.</given-names></string-name>, &amp; <string-name name-style="western"><surname>Clore</surname>, <given-names>G. L.</given-names></string-name></person-group> (<year>2007</year>). <article-title>On the interdependence of cognition and emotion.</article-title> <source>Cognition and Emotion</source>, <volume>21</volume>(<issue>6</issue>), <fpage>1212</fpage>–<lpage>1237</lpage>. <pub-id pub-id-type="doi">10.1080/02699930701438020</pub-id><pub-id pub-id-type="pmid">18458789</pub-id></mixed-citation></ref>
<ref id="r50"><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name name-style="western"><surname>Urry</surname>, <given-names>H. L.</given-names></string-name>, &amp; <string-name name-style="western"><surname>Gross</surname>, <given-names>J. J.</given-names></string-name></person-group> (<year>2010</year>). <article-title>Emotion regulation in older age.</article-title> <source>Current Directions in Psychological Science</source>, <volume>19</volume>(<issue>6</issue>), <fpage>352</fpage>–<lpage>357</lpage>. <pub-id pub-id-type="doi">10.1177/0963721410388395</pub-id></mixed-citation></ref>
<ref id="r51"><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name name-style="western"><surname>van Maanen</surname>, <given-names>L.</given-names></string-name>, <string-name name-style="western"><surname>Forstmann</surname>, <given-names>B. U.</given-names></string-name>, <string-name name-style="western"><surname>Keuken</surname>, <given-names>M. C.</given-names></string-name>, <string-name name-style="western"><surname>Wagenmakers</surname>, <given-names>E.-J.</given-names></string-name>, &amp; <string-name name-style="western"><surname>Heathcote</surname>, <given-names>A.</given-names></string-name></person-group> (<year>2016</year>). <article-title>The impact of MRI scanner environment on perceptual decision-making.</article-title> <source>Behavior Research Methods</source>, <volume>48</volume>(<issue>1</issue>), <fpage>184</fpage>–<lpage>200</lpage>. <pub-id pub-id-type="doi">10.3758/s13428-015-0563-6</pub-id><pub-id pub-id-type="pmid">25701105</pub-id></mixed-citation></ref>
	<ref id="r52"><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name name-style="western"><surname>Xiu</surname>, <given-names>L.</given-names></string-name>, <string-name name-style="western"><surname>Wu</surname>, <given-names>J.</given-names></string-name>, <string-name name-style="western"><surname>Chang</surname>, <given-names>L.</given-names></string-name>, &amp; <string-name name-style="western"><surname>Zhou</surname>, <given-names>R.</given-names></string-name></person-group> (<year>2018</year>). <article-title>Working memory training improves emotion regulation ability.</article-title> <source>Scientific Reports</source>, <volume>8</volume>(<issue>1</issue>), <elocation-id>15012</elocation-id>. <pub-id pub-id-type="doi">10.1038/s41598-018-31495-2</pub-id><pub-id pub-id-type="pmid">30301906</pub-id></mixed-citation></ref>
</ref-list><fn-group><fn fn-type="financial-disclosure">
<p content-type="fn-title">Daniel Rojas Líbano receives funding from Agencia Nacional de Investigación y Desarrollo (ANID), Fondo Nacional de Desarrollo Científico y Tecnológico (Fondecyt), Chile, Project # 1230481. Christian Salas receives funding from Agencia Nacional de Investigación y Desarrollo (ANID), Fondo Nacional de Desarrollo Científico y Tecnológico (Fondecyt), Chile, Project # 1231200.</p></fn></fn-group><ack><title>Acknowledgements</title>
<p>The authors would like to express their heartfelt gratitude to their friends and colleagues at CENHN for enriching conversations and discussions, which make our center an inspiring place for research.</p></ack>
<bio id="bio1">
<p><bold>Christian Salas</bold> is a clinical neuropsychologist and Director of the Clinical Neuropsychology Unit at Diego Portales University in Chile, where he also serves as an Associate Professor at the Center for Human Neuroscience and Neuropsychology (CENHN). His research explores changes in emotion regulation following acquired brain injury. He holds a PhD in Psychology from Bangor University, Wales, United Kingdom.</p>
</bio>
<bio id="bio2">
<p><bold>Nicolás Núñez</bold> is a clinical psychologist and associate professor at Andres Bello University in Chile. He co-founded Origamis, a therapy center in Santiago, Chile, for the treatment of mental disorders. His work focuses on the clinical care of patients, couples, and families with problems of emotional origin. He holds a Master’s in Social Neuroscience from Diego Portales University in Chile.</p>
</bio>
<bio id="bio3">
<p><bold>Luz María Pozo</bold> holds a BS in Biochemistry from Pontificia Universidad Católica de Chile and a Master’s in Social Neuroscience from Universidad Diego Portales, Chile. She currently works as a high school science teacher in Chile. She has a particular interest in bridging scientific knowledge with education to promote meaningful learning experiences in the classroom.</p>
</bio>
<bio id="bio4">
<p><bold>Marko Bremer</bold> is a clinical and educational psychologist who combines private practice with work as an educational consultant, developing projects related to education and childhood. His work focuses on clinical care and educational interventions within childhood contexts. He holds a Master’s in Social Neuroscience from Diego Portales University in Chile.</p>
</bio>
<bio id="bio5">
<p><bold>Daniel Rojas-Líbano</bold> is a biologist and researcher at the Centro de Estudios en Neurociencia Humana y Neuropsicología (CENHN) at Universidad Diego Portales, Chile. His work focuses on psychophysiological processes across a range of experimental tasks, with a strong emphasis on replication studies. He earned his PhD in Neurobiology from the University of Chicago.</p>
</bio>
	<sec sec-type="data-availability" id="das"><title>Data Availability</title>
		<p>All the data collected and used to compute the results presented in this article are available from <xref ref-type="bibr" rid="r43">Rojas-Líbano (2025)</xref>.</p></sec>
	<sec sec-type="supplementary-material" id="sp1"><title>Supplementary Materials</title>
		<table-wrap position="anchor">
			<table frame="void" style="background-color:#f3f3f3">
				<col width="60%" align="left"/>
				<col width="40%" align="left"/>
				<thead>
					<tr>
						<th>Type of supplementary materials</th>
						<th>Availability/Access</th>
					</tr>
				</thead>
				<tbody>
					<tr>
						<th colspan="2">Code</th>						
					</tr>
					<tr>
						<td>Matlab code for controlling hardware and implementing the task, and the code used to analyze the data, produce plots, and compute statistics.</td>
						<td><xref ref-type="bibr" rid="r47">Salas et al. (2025a)</xref>, <xref ref-type="bibr" rid="r47.5">Salas et al. (2025b)</xref></td>
					</tr>
					<tr>
						<th colspan="2">Data</th>						
					</tr>
					<tr>
						<td>Behavioral and eye-tracking data from attentional deployment task.</td>
						<td><xref ref-type="bibr" rid="r43">Rojas-Libano (2025)</xref></td>
					</tr>
					<tr>
						<th colspan="2">Material</th>						
					</tr>
					<tr>
						<td>Image files used as stimuli for task trials.</td>
						<td><xref ref-type="bibr" rid="r47">Salas et al. (2025a)</xref></td>
					</tr>					
				</tbody>
			</table>
		</table-wrap>		
	</sec>
<fn-group>
<fn fn-type="conflict"><p>The authors have declared that no competing interests exist.</p></fn>
</fn-group>
</back>
</article>