<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE article
  PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Publishing DTD with MathML3 v1.2 20190208//EN" "JATS-journalpublishing1-mathml3.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:ali="http://www.niso.org/schemas/ali/1.0/" article-type="research-article" dtd-version="1.2" xml:lang="en">
<front>
<journal-meta><journal-id journal-id-type="publisher-id">EJOP</journal-id><journal-id journal-id-type="nlm-ta">Eur J Psychol</journal-id>
<journal-title-group>
<journal-title>Europe's Journal of Psychology</journal-title><abbrev-journal-title abbrev-type="pubmed">Eur. J. Psychol.</abbrev-journal-title>
</journal-title-group>
<issn pub-type="epub">1841-0413</issn>
<publisher><publisher-name>PsychOpen</publisher-name></publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="publisher-id">ejop.14685</article-id>
<article-id pub-id-type="doi">10.5964/ejop.14685</article-id>
<article-categories>
<subj-group subj-group-type="heading"><subject>Research Reports</subject></subj-group>
<subj-group subj-group-type="badge">
<subject>Data</subject>
</subj-group>
</article-categories>
<title-group>
<article-title>One-Minute Silent Video Clips: A Database of Valence and Arousal</article-title>
<alt-title alt-title-type="right-running">Silent Video Clips: A Database of Valence and Arousal</alt-title>
<alt-title specific-use="APA-reference-style" xml:lang="en">One-minute silent video clips: A database of valence and arousal</alt-title>
</title-group>
<contrib-group>
<contrib contrib-type="author" corresp="yes"><name name-style="western"><surname>Kosonogov</surname><given-names>Vladimir</given-names></name><xref ref-type="corresp" rid="cor1">*</xref><xref ref-type="aff" rid="aff1"><sup>1</sup></xref></contrib>
<contrib contrib-type="author"><name name-style="western"><surname>Efimov</surname><given-names>Kirill</given-names></name><xref ref-type="aff" rid="aff2"><sup>2</sup></xref></contrib>
<contrib contrib-type="author"><name name-style="western"><surname>Kuskova</surname><given-names>Olga</given-names></name><xref ref-type="aff" rid="aff2"><sup>2</sup></xref></contrib>
<contrib contrib-type="author"><name name-style="western"><surname>Blank</surname><given-names>Isak B.</given-names></name><xref ref-type="aff" rid="aff2"><sup>2</sup></xref></contrib>
<contrib contrib-type="editor">
<name>
	<surname>Cancer</surname>
	<given-names>Alice</given-names>
</name>
<xref ref-type="aff" rid="aff3"/>
</contrib>
<aff id="aff1"><label>1</label><institution content-type="dept">Affective Psychophysiology Laboratory, Institute of Health Psychology</institution>, <institution>HSE University</institution>, <addr-line><city>Saint Petersburg</city></addr-line>, <country country="RU">Russian Federation</country></aff>
<aff id="aff2"><label>2</label><institution content-type="dept">Institute for Cognitive Neuroscience</institution>, <institution>HSE University</institution>, <addr-line><city>Moscow</city></addr-line>, <country country="RU">Russian Federation</country></aff>
	<aff id="aff3">Università Cattolica del Sacro Cuore, Milan, <country>Italy</country></aff>
</contrib-group>
<author-notes>
<corresp id="cor1"><label>*</label>190068, Griboyedova 123, Saint Petersburg, Russian Federation. <email xlink:href="vkosonogov@hse.ru">vkosonogov@hse.ru</email></corresp>
</author-notes>
<pub-date date-type="pub" publication-format="electronic"><day>29</day><month>08</month><year>2025</year></pub-date>
	<pub-date pub-type="collection" publication-format="electronic"><year>2025</year></pub-date>
<volume>21</volume>
<issue>3</issue>
<fpage>152</fpage>
<lpage>159</lpage>
<history>
<date date-type="received">
<day>20</day>
<month>05</month>
<year>2024</year>
</date>
<date date-type="accepted">
<day>11</day>
<month>04</month>
<year>2025</year>
</date>
</history>
<permissions><copyright-year>2025</copyright-year><copyright-holder>Kosonogov, Efimov, Kuskova, &amp; Blank</copyright-holder><license license-type="open-access" specific-use="CC BY 4.0" xlink:href="https://creativecommons.org/licenses/by/4.0/"><ali:license_ref>https://creativecommons.org/licenses/by/4.0/</ali:license_ref><license-p>This is an open-access article distributed under the terms of the Creative Commons Attribution (CC BY) 4.0 License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.</license-p></license></permissions>
	
	<abstract>
		<p>The article introduces a dataset consisting of 160 one-minute affective video clips with normative values of valence and arousal. Each video was evaluated by 30 subjects, while each subject evaluated at least 20 videos. Compared to previous attempts to collect affective videos, the dataset has several advantages. Firstly, the high number of videos in different valence categories allows researchers to compile appropriate subsets for their studies. Secondly, the approximately equal and conventional duration of the videos makes it possible to use them in psychophysiological studies applying EEG, fMRI, peripheral polygraphy, posturography, TMS, etc. Thirdly, the exclusion of sound or speech that might provoke culture-dependent interpretation makes the dataset useful in different cultures. The relationship between valence and arousal showed a typical quadratic pattern, with very negative and very positive videos receiving higher levels of arousal. Several negative videos received greater arousal scores than the most positive ones, reflecting the negativity bias. The dataset encompasses more than 50 videos of each valence category (negative, neutral, and positive). We believe that this will permit researchers to select corresponding subsamples of videos from different categories for their studies.</p>
	</abstract>
	<kwd-group kwd-group-type="author"><kwd>emotion</kwd><kwd>valence</kwd><kwd>arousal</kwd><kwd>video</kwd><kwd>database</kwd></kwd-group>
	
</article-meta>
</front>
<body>
	<sec sec-type="intro" id="intro"><title/>
<p>In the field of emotion research, affective stimuli in the form of images, words, and video clips are frequently employed. The availability of such stimulus sets is constantly increasing, giving researchers a vast number of options to choose from, which can pose a challenge for those seeking the most suitable stimuli for their studies. Ecologically valid media-based stimuli, such as video clips, can offer meaningful insights into the natural progression of mental processes, including emotion regulation and speech perception (<xref ref-type="bibr" rid="r14">Jääskeläinen &amp; Kosonogov, 2023</xref>). Additionally, research indicates that naturalistic paradigms, such as the presentation of video clips, music, and spoken stories, may be more effective in capturing participants’ attention than event-related paradigms (<xref ref-type="bibr" rid="r3">Bezdek et al., 2017</xref>) and are preferable for emotion induction (<xref ref-type="bibr" rid="r16">Joseph et al., 2020</xref>; <xref ref-type="bibr" rid="r31">Siedlecka &amp; Denson, 2019</xref>).</p>
		<p>To our knowledge, the first affective video databases were introduced by <xref ref-type="bibr" rid="r28">Philippot (1993)</xref> and <xref ref-type="bibr" rid="r10">Gross and Levenson (1995)</xref>. Since then, many researchers have compiled their own sets in order to enlarge the pool of such stimuli. <xref ref-type="bibr" rid="r30">Schaefer et al. (2010)</xref> introduced FilmStim, a collection of 70 movie clips designed to induce emotional responses in experimental psychology studies. <xref ref-type="bibr" rid="r15">Jenkins and Andrewes (2012)</xref> presented a dataset of verbal and non-verbal contemporary films. The DEAP database (<xref ref-type="bibr" rid="r17">Koelstra et al., 2012</xref>) is a publicly available collection of 120 one-minute-long musical video excerpts; each excerpt was rated by at least 14 volunteers with an online self-assessment tool measuring the levels of arousal, valence, and dominance it induced. Another noteworthy database is MAHNOB-HCI, released by <xref ref-type="bibr" rid="r33">Soleymani et al. (2012)</xref>. This multimodal database consists of 20 short emotional excerpts taken from commercially produced movies and video websites. The clips were carefully selected to represent a wide range of emotions and are accompanied by physiological data, including facial expressions and heart rate variability. <xref ref-type="bibr" rid="r5">Carvalho et al. (2012)</xref> also contributed to the field by developing the Emotional Movie Database (EMDB), composed of 52 non-auditory film clips of 40 seconds each, extracted from movies. The clips were selected to cover the entire affective space and were rated by 113 participants in terms of induced valence, arousal, and dominance on a nine-point scale. LIRIS-ACCEDE consists of 9,800 good-quality video excerpts with large content diversity (<xref ref-type="bibr" rid="r2">Baveye et al., 2015</xref>). Although it contains a large number of high-arousal negative videos, its positive videos induced largely “passive” rather than “active” arousal. In other words, this database showed a negative correlation between valence and arousal, which contradicts previous literature on the curvilinear relationship between these scales. Chieti Affective Action Videos is another database designed for the experimental study of emotions (<xref ref-type="bibr" rid="r6">Di Crosta et al., 2020</xref>). It consists of 360 fifteen-second videos that depict only human actions, albeit covering a good continuum of valence and arousal. BioVid Emo DB (<xref ref-type="bibr" rid="r37">Zhang et al., 2016</xref>) contains not only self-reported data, but also the skin conductance level, electrocardiogram, and <italic>trapezius</italic> electromyogram of 86 subjects in response to affective videos.</p>
<p>In order to systematize the properties of and discrepancies between affective databases, <xref ref-type="bibr" rid="r7">Diconne et al. (2022)</xref> recently presented KAPODI, a searchable database of emotional stimulus sets in tabular form. They found 24 databases of affective videos. However, valence and arousal ratings were collected in only nine of them, and only six contained neutral videos. In two of the nine databases, the stimuli were evaluated by only three or four raters. The duration of videos was fixed in only one database (2 s), while in the others it ranged greatly (10–60 s or 25–161 s).</p>
<p>Overall, existing databases of emotion-eliciting video stimuli have various drawbacks. In many of them, the video duration varies, which does not permit researchers to select a sufficient number of equivalent videos (cropping videos is not appropriate, since raters evaluated the whole videos in the original databases). As for cross-cultural studies, many videos include voices, which prevents their use by scientists from other cultures. For the current database, we therefore decided to select silent (muted) video clips, in order to avoid cultural differences in the perception of voices and screams in different languages, situational nuances, faux pas, and other language- or culture-specific affective factors. We admit, though, that presenting the videos without audio can reduce the intensity of the elicited emotions.</p>
<p>We chose one-minute excerpts in order to enrich the stimulus material available to neuroscientists. Taking into account many central and peripheral variables, a one-minute epoch seems to fall into a range appropriate for many physiological systems. To begin with electroencephalography (EEG), the engagement index has been studied over epochs of one minute (<xref ref-type="bibr" rid="r24">Libert &amp; Van Hulle, 2019</xref>), as have valence and arousal indices of EEG (<xref ref-type="bibr" rid="r36">Xu et al., 2023</xref>). Many attempts at emotion recognition via EEG have been made using one-minute excerpts (<xref ref-type="bibr" rid="r17">Koelstra et al., 2012</xref>). Different metrics of subject synchronization, for example, inter-subject correlation of EEG, can be calculated for an epoch lasting one minute (<xref ref-type="bibr" rid="r8">Dmochowski et al., 2014</xref>). Likewise, one-minute stimuli can be used for inter-subject correlation calculations in fMRI paradigms within a standard range of temporal resolution (<xref ref-type="bibr" rid="r11">Imhof et al., 2017</xref>). In a neuroimaging study, it is essential to have repeated emotional experiences, but, at the same time, the procedure is generally constrained by time. Researchers therefore require video clips long enough to evoke and assess emotions on each trial, while leaving enough time for multiple samples of the same emotion. That is why, we believe, one-minute video clips could resolve this duration/repetition trade-off (for example, a one-hour study can include about 30–45 video clips, that is, 10–15 of each valence category, which may be enough for many designs).</p>
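<p>For illustration, the following minimal <italic>Python</italic> sketch works through this trade-off arithmetic; the per-trial rating time is our assumption here (the 12-s inter-clip break is taken from the Method section), so the exact counts are indicative only.</p>
<code language="python">
# Back-of-the-envelope check of the duration/repetition trade-off.
# RATING_S is an assumed time needed to rate valence and arousal.

CLIP_S = 60          # clip duration, s
BREAK_S = 12         # pause between clips, s (see Method)
RATING_S = 10        # assumed rating time per trial, s
SESSION_S = 60 * 60  # one-hour session

trial_s = CLIP_S + BREAK_S + RATING_S
n_clips = SESSION_S // trial_s
print(f"{n_clips} clips fit into one hour "
      f"(about {n_clips // 3} per valence category)")
</code>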
<p>As for the autonomic nervous system, <xref ref-type="bibr" rid="r21">Kreibig (2010)</xref> found that one-minute epochs are the most popular segment type in studies of emotional responses. For instance, one-minute intervals were found to be long enough to measure cardiac activity. Thus, <xref ref-type="bibr" rid="r34">Takahashi et al. (2017)</xref> found that the low and high frequencies of heart rate variability could be extracted from one-minute epochs. In a study by <xref ref-type="bibr" rid="r27">Nussinovitch et al. (2011)</xref>, RR intervals (intervals between two heartbeats) and the root mean square of successive differences in RR intervals proved reliable over one-minute intervals. However, for such short periods, the authors did not recommend calculating the standard deviation of the RR intervals or the proportion of intervals differing by 50 msec from the preceding interval. Skin temperature and skin conductance can also be studied over intervals of around one minute (<xref ref-type="bibr" rid="r19">Kosonogov et al., 2017</xref>). As for postural control, <xref ref-type="bibr" rid="r4">Carpenter et al. (2001)</xref> suggested that a sample duration of at least 60 s should be used to obtain stable and reliable center-of-pressure characteristics.</p>
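<p>As a minimal sketch of these ultra-short heart rate variability indices, the <italic>Python</italic> snippet below computes the three measures named above from a synthetic one-minute RR series; the data are simulated, not taken from any cited study.</p>
<code language="python">
# Ultra-short HRV indices over a 1-minute epoch. Per Nussinovitch
# et al. (2011), RMSSD is reliable for such epochs, whereas SDNN and
# pNN50 are not recommended. The RR series below is synthetic.

import numpy as np

rng = np.random.default_rng(0)
rr_ms = 800 + rng.normal(0, 40, size=70)  # ~70 beats in one minute

diff = np.diff(rr_ms)
rmssd = np.sqrt(np.mean(diff ** 2))       # root mean square of successive differences
sdnn = np.std(rr_ms, ddof=1)              # SD of RR intervals (not advised for 1 min)
pnn50 = np.mean(np.abs(diff) > 50) * 100  # % of successive differences > 50 ms

print(f"RMSSD = {rmssd:.1f} ms, SDNN = {sdnn:.1f} ms, pNN50 = {pnn50:.0f}%")
</code></sec>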
<sec sec-type="methods"><title>Method</title>
<p>First, six professional psychologists browsed the Internet (youtube.com, vk.com, videezy.com), each independently searching for 45 video clips (15 negative, 15 neutral, and 15 positive) of approximately one minute (they watched the videos without sound). The video clips were muted one-minute excerpts from fiction, educational, and documentary films or home movies. They represented a broad spectrum of plots, such as surgeries, suffering animals, starvation, and fights (negative); household, street, and industrial scenes (neutral); and landscapes, joyful children, sports, and romantic comedies (positive). The resulting set of 270 videos was then evaluated by the same six raters on a three-point scale (negative/neutral/positive). Of these, 175 videos received 100% inter-rater agreement (all six raters assigned the same valence). We then randomly selected 160 video clips (55 negative, 53 neutral, and 52 positive), which comprised the final database. The mean duration was 60.7 s (<italic>SD</italic> = 1.9 s), ranging from 46 to 65 s; 87% of the videos lasted from 58 to 62 s and 74% from 59 to 61 s. As in many affective video databases, such as DEAP, MAHNOB-HCI, and OPEN_EmoRec_II (<xref ref-type="bibr" rid="r29">Rukavina et al., 2015</xref>), and also in a moral video database (<xref ref-type="bibr" rid="r26">McCurrie et al., 2018</xref>), we intended to obtain at least 30 ratings (subjects) for each video clip. Therefore, we split the 160 video clips into eight separate samples of 20 clips each to present to the study subjects. In other words, each rater evaluated 20 video clips, while each video clip was assessed by 30 raters. Each of the eight samples contained supposedly pleasant, unpleasant, and neutral clips in approximately equal proportion (from 6 to 8 of each valence, 20 in total). The order of video clip presentation was chosen at random and corrected to avoid the consecutive presentation of more than two clips of the same emotion category (<xref ref-type="bibr" rid="r20">Kosonogov, 2020</xref>); a sketch of this constraint follows below. Participants were asked to evaluate each video clip in terms of valence (where 1 meant “<italic>Very negative</italic>” and 9 meant “<italic>Very pleasant</italic>”) and arousal (where 1 meant “<italic>Very calm</italic>” and 9 meant “<italic>Very arousing</italic>”) immediately after its presentation. There was a break of 12 seconds between video clip presentations to minimize carry-over effects. The experimental paradigm was created with <italic>PsychoPy</italic> software and conducted on the <italic>Pavlovia</italic> platform (pavlovia.org).</p>
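<p>A minimal sketch of that order constraint follows, assuming hypothetical clip labels and rejection sampling; the original experiment was implemented in <italic>PsychoPy</italic>, not with this exact code.</p>
<code language="python">
# Draw a random order of 20 clips (6-8 per valence category) in which
# no category appears more than twice in a row. Rejection sampling:
# reshuffle until the run-length constraint holds.

import random

def constrained_order(clips, max_run=2, seed=None):
    rng = random.Random(seed)
    order = clips[:]
    while True:
        rng.shuffle(order)
        # every window of max_run + 1 clips must mix categories
        ok = all(
            len({cat for cat, _ in order[i:i + max_run + 1]}) > 1
            for i in range(len(order) - max_run)
        )
        if ok:
            return order

# Hypothetical 20-clip sample: 7 negative, 7 neutral, 6 positive
clips = ([("negative", i) for i in range(7)]
         + [("neutral", i) for i in range(7)]
         + [("positive", i) for i in range(6)])
print(constrained_order(clips, seed=1))
</code>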
<p>From 20/09/2023 to 6/02/2024, 185 raters (74% female, <italic>M</italic><sub>age</sub> = 24.8 years, <italic>SD</italic> = 8.0 years) participated in the study. Participants were recruited through electronic advertisements on the <italic>VK</italic> social network (they read the instructions in Russian and reported living in Russia). Fifty-three participants took part in the study twice, and one participant took part three times; all recurring participants evaluated different video clip samples to avoid reassessments. A between-subjects design was implemented: each person was randomly assigned to a group evaluating one 20-clip sample. All gave informed consent, were warned about violent content and blood in some scenes, and were paid an equivalent of 5.5 USD (at purchasing power parity). The university ethical committee approved the study (Nº 92, 19.09.2022).</p></sec>
<sec sec-type="results"><title>Results</title>
<p>The mean valence was 4.99, <italic>SD</italic> = 1.60, min = 1.33, max = 7.80, while the mean arousal was 5.13, <italic>SD</italic> = 0.98, min = 2.63, max = 8.0. Fifty-five videos turned out to be negative, mean valence = 3.09, <italic>SD</italic> = 0.65, mean arousal = 5.60, <italic>SD</italic> = 0.90; 53 were neutral, mean valence = 5.25, <italic>SD</italic> = 0.50, mean arousal = 4.39, <italic>SD</italic> = 0.83; and 52 were positive, mean valence = 6.72, <italic>SD</italic> = 0.48, mean arousal = 5.38, <italic>SD</italic> = 0.74. As expected, the analysis of variance revealed that positive videos were rated more positively than neutral ones, and neutral ones more positively than negative ones, <italic>F</italic>(2,104) = 542.6, <italic>p</italic> &lt; .001 (all post hoc test <italic>p</italic>s &lt; .001). More importantly, negative and positive videos provoked greater arousal than neutral ones, <italic>F</italic>(2,104) = 31.8, <italic>p</italic> &lt; .001, both post hoc test <italic>p</italic>s &lt; .001, but arousal did not differ between negative and positive videos, post hoc test <italic>p</italic> = .37.</p>
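<p>For readers who wish to reproduce such comparisons on the published ratings, a sketch follows; the file and column names are assumptions about the layout of S-Table 1 (see Data Availability), and Tukey HSD is shown as one common post hoc choice, not necessarily the test used here.</p>
<code language="python">
# One-way ANOVA of arousal across the three valence categories,
# plus pairwise post hoc comparisons. Column names are hypothetical.

import pandas as pd
from scipy import stats

df = pd.read_csv("s_table1.csv")  # assumed export of S-Table 1
groups = [g["arousal"].to_numpy() for _, g in df.groupby("category")]

f, p = stats.f_oneway(*groups)
print(f"F = {f:.1f}, p = {p:.3g}")

print(stats.tukey_hsd(*groups))   # pairwise post hoc comparisons
</code>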
<p>As for valence distribution, skewness (-0.27) and kurtosis (-1.05) were small, while a Kolmogorov-Smirnov test showed a non-normal distribution, <italic>W</italic> = .092, <italic>p</italic> &lt; .001. Arousal turned out to be distributed normally, skewness = -0.03, kurtosis = -0.13, <italic>d =</italic> .057, <italic>p =</italic> .53 (<xref ref-type="fig" rid="f1">Figure 1</xref>).</p><fig id="f1" position="anchor" fig-type="figure" orientation="portrait"><label>Figure 1</label><caption>
<title>The Distribution of (A) Valence and (B) Arousal of Affective Videos</title></caption><graphic xlink:href="ejop.14685-f1" position="anchor" orientation="portrait"/></fig>
<p>The relationship between valence and arousal showed a typical quadratic pattern, <italic>F</italic>(2,157) = 87.66, <italic>p</italic> &lt; .001, <italic>R</italic><sup>2</sup> = .52 (<xref ref-type="fig" rid="f2">Figure 2</xref>). The same pattern was observed for the male, <italic>F</italic>(2,157) = 57.31, <italic>p</italic> &lt; .001, <italic>R</italic><sup>2</sup> = .42, and the female subsamples, <italic>F</italic>(2,157) = 87.13, <italic>p</italic> &lt; .001, <italic>R</italic><sup>2</sup> = .52. The internal consistency was questionable for valence (.60), while it was very good for arousal (.90). Correlations between means and standard deviations were significant: means and <italic>SD</italic>s of valence correlated negatively, <italic>r</italic> = -.19, <italic>p</italic> = .017, while in the case of arousal the correlation was positive, <italic>r</italic> = .45, <italic>p</italic> &lt; .001.</p><fig id="f2" position="anchor" fig-type="figure" orientation="portrait"><label>Figure 2</label><caption>
<title>The Relationship Between Valence and Arousal of Affective Videos for the Three Categories of All Subjects, Males, and Females</title></caption><graphic xlink:href="ejop.14685-f2" position="anchor" orientation="portrait"/></fig>
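<p>A sketch of this quadratic fit and of the internal consistency computation is given below; the file and column names are again assumptions about the supplementary tables, and the alpha function is a generic implementation rather than our exact analysis code.</p>
<code language="python">
# Quadratic fit of arousal on valence over per-video means, and a
# generic Cronbach's alpha over a videos-by-raters rating matrix.

import numpy as np
import pandas as pd

df = pd.read_csv("s_table1.csv")  # assumed export of S-Table 1
v = df["valence"].to_numpy()
a = df["arousal"].to_numpy()

b2, b1, b0 = np.polyfit(v, a, deg=2)  # arousal ~ b0 + b1*v + b2*v**2
pred = np.polyval([b2, b1, b0], v)
r2 = 1 - np.sum((a - pred) ** 2) / np.sum((a - a.mean()) ** 2)
print(f"arousal = {b0:.2f} + {b1:.2f}*v + {b2:.2f}*v**2, R**2 = {r2:.2f}")

def cronbach_alpha(ratings):
    """ratings: videos x raters matrix of scores."""
    ratings = np.asarray(ratings, dtype=float)
    k = ratings.shape[1]
    item_var = ratings.var(axis=0, ddof=1).sum()
    total_var = ratings.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)
</code></sec>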
<sec sec-type="discussion"><title>Discussion</title>
<p>The presented dataset provides normative values of valence and arousal for 160 one-minute affective video clips. Each video was evaluated by 30 subjects, while each subject evaluated at least 20 videos. The advantages of our dataset, in comparison to previous attempts to collect affective videos, are the following. First, the high number of videos in each valence category allows researchers to compile appropriate subsets for their studies. Second, the approximately equal and conventional duration of the videos makes it possible to use them in psychophysiological studies of many types (EEG, fMRI, peripheral polygraphy, posturography, TMS, etc.). Third, we excluded any sound or speech that might provoke culture-dependent interpretation (e.g., language jokes), thus making our database useful in different countries.</p>
<p>As in many previous studies, the relationship between valence and arousal showed a typical quadratic pattern; that is, very negative and very positive videos were assessed with higher levels of arousal. Neutral videos received the lowest arousal scores. However, we admit that the lowest arousal value was 2.63 (1 being the theoretical minimum on the scale from 1 to 9), while, for example, in the E-MOVIE database the lowest arousal value was 1.9 (<xref ref-type="bibr" rid="r25">Maffei &amp; Angrilli, 2019</xref>). We suppose that the mere situation of watching videos from different categories maintains the arousal level above 1.</p>
<p>Our videos were distributed along a quadratic curve, typical of the valence-arousal relationship. Many other video databases have found this pattern as well (e.g., <xref ref-type="bibr" rid="r1">Ack Baraly et al., 2020</xref>). Curiously, some databases did not reveal a quadratic relationship: in LIRIS-ACCEDE (<xref ref-type="bibr" rid="r2">Baveye et al., 2015</xref>), as well as in <xref ref-type="bibr" rid="r9">Gabert-Quillen et al. (2015)</xref>, valence and arousal correlated negatively, meaning that pleasant videos were perceived as less arousing. In line with a broad spectrum of studies conducted with other types of affective stimuli, such as pictures (<xref ref-type="bibr" rid="r23">Lang et al., 2008</xref>), sounds (<xref ref-type="bibr" rid="r32">Soares et al., 2013</xref>), and odors (<xref ref-type="bibr" rid="r35">Toet et al., 2020</xref>), we believe that the quadratic pattern found in our study reflects the nature of the relationship between the valence and arousal of affective videos.</p>
<p>It is worth noting that the internal consistency (Cronbach’s alpha) was questionable for valence, while it was very good for arousal. This may indicate the high ambivalence associated with some highly affective stimuli. Additionally, we highlight that the standard deviation for valence was higher than for arousal, contrary to findings from E-MOVIE (<xref ref-type="bibr" rid="r25">Maffei &amp; Angrilli, 2019</xref>), where arousal displayed less variability than valence. In simpler terms, the videos in our dataset were generally seen as arousing stimuli, but their valence ratings were less consistent on average. Some neutral video clips received high arousal ratings, which may reflect not the neutral nature of these stimuli, but rather a mixture of both negative and positive emotions. Future studies could employ not a single valence scale (from negative to positive), but two distinct scales for the negativity and positivity of each stimulus (<xref ref-type="bibr" rid="r12">Ito &amp; Cacioppo, 2005</xref>). Such an evaluation could identify ambivalent stimuli, which researchers may then select or avoid, depending on their purposes. We also examined the correlation between means and standard deviations. There was a weak negative correlation between the means and <italic>SD</italic>s of valence; that is, <italic>SD</italic>s were higher for negative videos. The means and <italic>SD</italic>s of arousal displayed a moderate positive correlation, which aligns with the OASIS database (<xref ref-type="bibr" rid="r22">Kurdi et al., 2017</xref>).</p>
<p>Curiously, several negative videos received greater arousal scores than the most positive ones. Three negative videos provoked arousal scores greater than 7, while no positive video received an arousal score higher than 7. Also, the most negative videos were very close to the negative pole (valence &lt; 2), while no video received a valence score greater than 8. This seems to reflect an effect called the negativity bias (<xref ref-type="bibr" rid="r13">Ito et al., 1998</xref>), whereby the most unpleasant stimuli evoke greater emotional reactions than the most pleasant ones. In other words, threatening stimuli typically provoke faster and larger reactions because evolutionary processes favored behaviors related to survival over responses to neutral or pleasant stimuli.</p>
<p>Nevertheless, our database encompasses more than 50 videos of each category (negative, neutral, and positive). Taking into account that the duration of future experiments and the number of videos to be presented are limited, we believe our database will permit researchers to select corresponding subsamples of videos from different categories.</p>
</sec>
</body>
<back>
	
	<fn-group content-type="author-contribution">
		<fn fn-type="con">
			<p><italic>Vladimir Kosonogov</italic>: Conceptualisation, Methodology, Formal Analysis, Writing - original draft, Writing - review &amp; editing. <italic>Kirill Efimov</italic>: Software, Data Curation, Investigation, Writing - review &amp; editing. <italic>Olga Kuskova</italic>: Software, Data Curation, Investigation, Writing - review &amp; editing. <italic>Isak B. Blank</italic>: Conceptualisation, Methodology, Writing - review &amp; editing.
			</p>
		</fn>
	</fn-group>
	<sec sec-type="ethics-statement">
		<title>Ethics Statement</title>
		<p>The HSE University ethical committee approved the study (Nº 92, 19.09.2022). All participants expressed informed consent by launching the application and were warned before the beginning about violent content and blood in some videos.</p>
	</sec>
	
<ref-list><title>References</title>
	<ref id="r1"><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Ack Baraly</surname>, <given-names>K. T.</given-names></string-name>, <string-name><surname>Muyingo</surname>, <given-names>L.</given-names></string-name>, <string-name><surname>Beaudoin</surname>, <given-names>C.</given-names></string-name>, <string-name><surname>Karami</surname>, <given-names>S.</given-names></string-name>, <string-name><surname>Langevin</surname>, <given-names>M.</given-names></string-name>, &amp; <string-name><surname>Davidson</surname>, <given-names>P. S. R.</given-names></string-name></person-group> (<year>2020</year>). <article-title>Database of Emotional Videos from Ottawa (DEVO)</article-title>. <source>Collabra: Psychology</source>, <volume>6</volume>(<issue>1</issue>), <elocation-id>10</elocation-id>. <pub-id pub-id-type="doi">10.1525/collabra.180</pub-id></mixed-citation></ref>
	<ref id="r2"><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Baveye</surname>, <given-names>Y.</given-names></string-name>, <string-name><surname>Dellandr&#x00E9;a</surname>, <given-names>E.</given-names></string-name>, <string-name><surname>Chamaret</surname>, <given-names>C.</given-names></string-name>, &amp; <string-name><surname>Chen</surname>, <given-names>L.</given-names></string-name></person-group> (<year>2015</year>). <article-title>LIRIS-ACCEDE: A video database for affective content analysis.</article-title> <source>IEEE Transactions on Affective Computing</source>, <volume>6</volume>(<issue>1</issue>), <fpage>43</fpage>&#x2013;<lpage>55</lpage>. <pub-id pub-id-type="doi">10.1109/TAFFC.2015.2396531</pub-id></mixed-citation></ref>
	<ref id="r3"><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Bezdek</surname>, <given-names>M. A.</given-names></string-name>, <string-name><surname>Wenzel</surname>, <given-names>W. G.</given-names></string-name>, &amp; <string-name><surname>Schumacher</surname>, <given-names>E. H.</given-names></string-name></person-group> (<year>2017</year>). <article-title>The effect of visual and musical suspense on brain activation and memory during naturalistic viewing.</article-title> <source>Biological Psychology</source>, <volume>129</volume>, <fpage>73</fpage>&#x2013;<lpage>81</lpage>. <pub-id pub-id-type="doi">10.1016/j.biopsycho.2017.07.020</pub-id><pub-id pub-id-type="pmid">28764896</pub-id></mixed-citation></ref>
	<ref id="r4"><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Carpenter</surname>, <given-names>M. G.</given-names></string-name>, <string-name><surname>Frank</surname>, <given-names>J. S.</given-names></string-name>, <string-name><surname>Winter</surname>, <given-names>D. A.</given-names></string-name>, &amp; <string-name><surname>Peysar</surname>, <given-names>G. W.</given-names></string-name></person-group> (<year>2001</year>). <article-title>Sampling duration effects on centre of pressure summary measures.</article-title> <source>Gait &amp; Posture</source>, <volume>13</volume>(<issue>1</issue>), <fpage>35</fpage>&#x2013;<lpage>40</lpage>. <pub-id pub-id-type="doi">10.1016/S0966-6362(00)00093-X</pub-id></mixed-citation></ref>
	<ref id="r5"><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Carvalho</surname>, <given-names>S.</given-names></string-name>, <string-name><surname>Leite</surname>, <given-names>J.</given-names></string-name>, <string-name><surname>Galdo-&#x00C1;lvarez</surname>, <given-names>S.</given-names></string-name>, &amp; <string-name><surname>Gon&#x00E7;alves</surname>, <given-names>O. F.</given-names></string-name></person-group> (<year>2012</year>). <article-title>The Emotional Movie Database (EMDB): A self-report and psychophysiological study.</article-title> <source>Applied Psychophysiology and Biofeedback</source>, <volume>37</volume>(<issue>4</issue>), <fpage>279</fpage>&#x2013;<lpage>294</lpage>. <pub-id pub-id-type="doi">10.1007/s10484-012-9201-6</pub-id><pub-id pub-id-type="pmid">22767079</pub-id></mixed-citation></ref>
	<ref id="r6"><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Di Crosta</surname>, <given-names>A.</given-names></string-name>, <string-name><surname>La Malva</surname>, <given-names>P.</given-names></string-name>, <string-name><surname>Manna</surname>, <given-names>C.</given-names></string-name>, <string-name><surname>Marin</surname>, <given-names>A.</given-names></string-name>, <string-name><surname>Palumbo</surname>, <given-names>R.</given-names></string-name>, <string-name><surname>Verrocchio</surname>, <given-names>M. C.</given-names></string-name>, <string-name><surname>Cortini</surname>, <given-names>M.</given-names></string-name>, <string-name><surname>Mammarella</surname>, <given-names>N.</given-names></string-name>, &amp; <string-name><surname>Di Domenico</surname>, <given-names>A.</given-names></string-name></person-group> (<year>2020</year>). <article-title>The Chieti Affective Action Videos database, a resource for the study of emotions in psychology.</article-title> <source>Scientific Data</source>, <volume>7</volume>(<issue>1</issue>), <elocation-id>32</elocation-id>. <pub-id pub-id-type="doi">10.1038/s41597-020-0366-1</pub-id><pub-id pub-id-type="pmid">31964894</pub-id></mixed-citation></ref>
	<ref id="r7"><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Diconne</surname>, <given-names>K.</given-names></string-name>, <string-name><surname>Kountouriotis</surname>, <given-names>G. K.</given-names></string-name>, <string-name><surname>Paltoglou</surname>, <given-names>A. E.</given-names></string-name>, <string-name><surname>Parker</surname>, <given-names>A.</given-names></string-name>, &amp; <string-name><surname>Hostler</surname>, <given-names>T. J.</given-names></string-name></person-group> (<year>2022</year>). <article-title>Presenting KAPODI &#x2014; The searchable database of emotional stimuli sets.</article-title> <source>Emotion Review</source>, <volume>14</volume>(<issue>1</issue>), <fpage>84</fpage>&#x2013;<lpage>95</lpage>. <pub-id pub-id-type="doi">10.1177/17540739211072803</pub-id></mixed-citation></ref>
	<ref id="r8"><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Dmochowski</surname>, <given-names>J. P.</given-names></string-name>, <string-name><surname>Bezdek</surname>, <given-names>M.</given-names></string-name>, &amp; <string-name><surname>Abelson</surname>, <given-names>B.</given-names></string-name></person-group> (<year>2014</year>). <article-title>Audience preferences are predicted by temporal reliability of neural processing.</article-title> <source>Nature Communications</source>, <volume>5</volume>, <elocation-id>4567</elocation-id>. <pub-id pub-id-type="doi">10.1038/ncomms5567</pub-id><pub-id pub-id-type="pmid">25072833</pub-id></mixed-citation></ref>
	<ref id="r9"><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Gabert-Quillen</surname>, <given-names>C. A.</given-names></string-name>, <string-name><surname>Bartolini</surname>, <given-names>E. E.</given-names></string-name>, <string-name><surname>Abravanel</surname>, <given-names>B. T.</given-names></string-name>, &amp; <string-name><surname>Sanislow</surname>, <given-names>C. A.</given-names></string-name></person-group> (<year>2015</year>). <article-title>Ratings for emotion film clips.</article-title> <source>Behavior Research Methods</source>, <volume>47</volume>(<issue>3</issue>), <fpage>773</fpage>&#x2013;<lpage>787</lpage>. <pub-id pub-id-type="doi">10.3758/s13428-014-0504-z</pub-id></mixed-citation></ref>
	<ref id="r10"><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Gross</surname>, <given-names>J. J.</given-names></string-name>, &amp; <string-name><surname>Levenson</surname>, <given-names>R. W.</given-names></string-name></person-group> (<year>1995</year>). <article-title>Emotion elicitation using films.</article-title> <source>Cognition and Emotion</source>, <volume>9</volume>(<issue>1</issue>), <fpage>87</fpage>&#x2013;<lpage>108</lpage>. <pub-id pub-id-type="doi">10.1080/02699939508408966</pub-id></mixed-citation></ref>
	<ref id="r11"><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Imhof</surname>, <given-names>M. A.</given-names></string-name>, <string-name><surname>Schm&#x00E4;lzle</surname>, <given-names>R.</given-names></string-name>, <string-name><surname>Renner</surname>, <given-names>B.</given-names></string-name>, &amp; <string-name><surname>Schupp</surname>, <given-names>H. T.</given-names></string-name></person-group> (<year>2017</year>). <article-title>How real-life health messages engage our brains: Shared processing of effective anti-alcohol videos.</article-title> <source>Social Cognitive and Affective Neuroscience</source>, <volume>12</volume>(<issue>7</issue>), <fpage>1188</fpage>&#x2013;<lpage>1196</lpage>. <pub-id pub-id-type="doi">10.1093/scan/nsx044</pub-id><pub-id pub-id-type="pmid">28402568</pub-id></mixed-citation></ref>
	<ref id="r12"><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Ito</surname>, <given-names>T.</given-names></string-name>, &amp; <string-name><surname>Cacioppo</surname>, <given-names>J.</given-names></string-name></person-group> (<year>2005</year>). <article-title>Variations on a human universal: Individual differences in positivity offset and negativity bias.</article-title> <source>Cognition and Emotion</source>, <volume>19</volume>(<issue>1</issue>), <fpage>1</fpage>&#x2013;<lpage>26</lpage>. <pub-id pub-id-type="doi">10.1080/02699930441000120</pub-id></mixed-citation></ref>
	<ref id="r13"><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Ito</surname>, <given-names>T. A.</given-names></string-name>, <string-name><surname>Larsen</surname>, <given-names>J. T.</given-names></string-name>, <string-name><surname>Smith</surname>, <given-names>N. K.</given-names></string-name>, &amp; <string-name><surname>Cacioppo</surname>, <given-names>J. T.</given-names></string-name></person-group> (<year>1998</year>). <article-title>Negative information weighs more heavily on the brain: The negativity bias in evaluative categorizations.</article-title> <source>Journal of Personality and Social Psychology</source>, <volume>75</volume>(<issue>4</issue>), <fpage>887</fpage>&#x2013;<lpage>900</lpage>. <pub-id pub-id-type="doi">10.1037/0022-3514.75.4.887</pub-id><pub-id pub-id-type="pmid">9825526</pub-id></mixed-citation></ref>
	<ref id="r14"><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>J&#x00E4;&#x00E4;skel&#x00E4;inen</surname>, <given-names>I. P.</given-names></string-name>, &amp; <string-name><surname>Kosonogov</surname>, <given-names>V.</given-names></string-name></person-group> (<year>2023</year>). <article-title>Perspective taking in the human brain: Complementary evidence from neuroimaging studies with media-based naturalistic stimuli and artificial controlled paradigms.</article-title> <source>Frontiers in Human Neuroscience</source>, <volume>17</volume>, <elocation-id>1051934</elocation-id>. <pub-id pub-id-type="doi">10.3389/fnhum.2023.1051934</pub-id><pub-id pub-id-type="pmid">36875238</pub-id></mixed-citation></ref>
	<ref id="r15"><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Jenkins</surname>, <given-names>L. M.</given-names></string-name>, &amp; <string-name><surname>Andrewes</surname>, <given-names>D. G.</given-names></string-name></person-group> (<year>2012</year>). <article-title>A new set of standardised verbal and nonverbal contemporary film stimuli for the elicitation of emotions.</article-title> <source>Brain Impairment</source>, <volume>13</volume>(<issue>2</issue>), <fpage>212</fpage>&#x2013;<lpage>227</lpage>. <pub-id pub-id-type="doi">10.1017/BrImp.2012.18</pub-id></mixed-citation></ref>
	<ref id="r16"><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Joseph</surname>, <given-names>D. L.</given-names></string-name>, <string-name><surname>Chan</surname>, <given-names>M. Y.</given-names></string-name>, <string-name><surname>Heintzelman</surname>, <given-names>S. J.</given-names></string-name>, <string-name><surname>Tay</surname>, <given-names>L.</given-names></string-name>, <string-name><surname>Diener</surname>, <given-names>E.</given-names></string-name>, &amp; <string-name><surname>Scotney</surname>, <given-names>V. S.</given-names></string-name></person-group> (<year>2020</year>). <article-title>The manipulation of affect: A meta-analysis of affect induction procedures.</article-title> <source>Psychological Bulletin</source>, <volume>146</volume>(<issue>4</issue>), <fpage>355</fpage>&#x2013;<lpage>375</lpage>. <pub-id pub-id-type="doi">10.1037/bul0000224</pub-id><pub-id pub-id-type="pmid">31971408</pub-id></mixed-citation></ref>
	<ref id="r17"><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Koelstra</surname>, <given-names>S.</given-names></string-name>, <string-name><surname>Muhl</surname>, <given-names>C.</given-names></string-name>, <string-name><surname>Soleymani</surname>, <given-names>M.</given-names></string-name>, <string-name><surname>Lee</surname>, <given-names>J. S.</given-names></string-name>, <string-name><surname>Yazdani</surname>, <given-names>A.</given-names></string-name>, <string-name><surname>Ebrahimi</surname>, <given-names>T.</given-names></string-name>, <string-name><surname>Pun</surname>, <given-names>T.</given-names></string-name>, <string-name><surname>Nijholt</surname>, <given-names>A.</given-names></string-name>, &amp; <string-name><surname>Patras</surname>, <given-names>I.</given-names></string-name></person-group> (<year>2012</year>). <article-title>DEAP: A database for emotion analysis; using physiological signals.</article-title> <source>IEEE Transactions on Affective Computing</source>, <volume>3</volume>(<issue>1</issue>), <fpage>18</fpage>&#x2013;<lpage>31</lpage>. <pub-id pub-id-type="doi">10.1109/T-AFFC.2011.15</pub-id></mixed-citation></ref>
	<ref id="r20"><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name name-style="western"><surname>Kosonogov</surname>, <given-names>V.</given-names></string-name></person-group> (<year>2020</year>). <article-title>The effects of the order of picture presentation on the subjective emotional evaluation of pictures.</article-title> <source>Psicologia</source>, <volume>34</volume>(<issue>2</issue>), <fpage>171</fpage>–<lpage>178</lpage>. <pub-id pub-id-type="doi">10.17575/psicologia.v34i2.1608</pub-id></mixed-citation></ref>
	<ref id="r18"><mixed-citation publication-type="web">Kosonogov, V. V. (2024). <italic>One-minute silent video clips: A database of valence and arousal</italic> [OSF project page containing study data and supplementary tables]. OSF. <ext-link ext-link-type="uri" xlink:href="https://osf.io/ejvf4/">https://osf.io/ejvf4/</ext-link></mixed-citation></ref>
	<ref id="r19"><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name name-style="western"><surname>Kosonogov</surname>, <given-names>V.</given-names></string-name>, <string-name name-style="western"><surname>De Zorzi</surname>, <given-names>L.</given-names></string-name>, <string-name name-style="western"><surname>Honoré</surname>, <given-names>J.</given-names></string-name>, <string-name name-style="western"><surname>Martínez-Velázquez</surname>, <given-names>E. S.</given-names></string-name>, <string-name name-style="western"><surname>Nandrino</surname>, <given-names>J. L.</given-names></string-name>, <string-name name-style="western"><surname>Martinez-Selva</surname>, <given-names>J. M.</given-names></string-name>, &amp; <string-name name-style="western"><surname>Sequeira</surname>, <given-names>H.</given-names></string-name></person-group> (<year>2017</year>). <article-title>Facial thermal variations: A new marker of emotional arousal.</article-title> <source>PLoS One</source>, <volume>12</volume>(<issue>9</issue>), <elocation-id>e0183592</elocation-id>. <pub-id pub-id-type="doi">10.1371/journal.pone.0183592</pub-id><pub-id pub-id-type="pmid">28922392</pub-id></mixed-citation></ref>

<ref id="r21"><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name name-style="western"><surname>Kreibig</surname>, <given-names>S. D.</given-names></string-name></person-group> (<year>2010</year>). <article-title>Autonomic nervous system activity in emotion: A review.</article-title> <source>Biological Psychology</source>, <volume>84</volume>(<issue>3</issue>), <fpage>394</fpage>–<lpage>421</lpage>. <pub-id pub-id-type="doi">10.1016/j.biopsycho.2010.03.010</pub-id><pub-id pub-id-type="pmid">20371374</pub-id></mixed-citation></ref>
<ref id="r22"><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name name-style="western"><surname>Kurdi</surname>, <given-names>B.</given-names></string-name>, <string-name name-style="western"><surname>Lozano</surname>, <given-names>S.</given-names></string-name>, &amp; <string-name name-style="western"><surname>Banaji</surname>, <given-names>M. R.</given-names></string-name></person-group> (<year>2017</year>). <article-title>Introducing the Open Affective Standardized Image Set (OASIS).</article-title> <source>Behavior Research Methods</source>, <volume>49</volume>, <fpage>457</fpage>–<lpage>470</lpage>. <pub-id pub-id-type="doi">10.3758/s13428-016-0715-3</pub-id><pub-id pub-id-type="pmid">26907748</pub-id></mixed-citation></ref>
<ref id="r23"><mixed-citation publication-type="other">Lang, P. J., Bradley, M. M., &amp; Cuthbert, B. N. (2008). <italic>International Affective Picture System (IAPS): Instruction manual and affective ratings</italic> (Technical Report A-8). Center for Research in Psychophysiology, University of Florida.</mixed-citation></ref>
	<ref id="r24"><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name name-style="western"><surname>Libert</surname>, <given-names>A.</given-names></string-name>, &amp; <string-name name-style="western"><surname>Van Hulle</surname>, <given-names>M. M.</given-names></string-name></person-group> (<year>2019</year>). <article-title>Predicting premature video skipping and viewer interest from EEG recordings.</article-title> <source>Entropy</source>, <volume>21</volume>(<issue>10</issue>), <elocation-id>1014</elocation-id>. <pub-id pub-id-type="doi">10.3390/e21101014</pub-id></mixed-citation></ref>
	<ref id="r25"><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name name-style="western"><surname>Maffei</surname>, <given-names>A.</given-names></string-name>, &amp; <string-name name-style="western"><surname>Angrilli</surname>, <given-names>A.</given-names></string-name></person-group> (<year>2019</year>). <article-title>E-MOVIE — Experimental MOVies for induction of emotions in neuroscience: An innovative film database with normative data and sex differences.</article-title> <source>PLoS ONE</source>, <volume>14</volume>(<issue>10</issue>), <elocation-id>e0223124</elocation-id>. <pub-id pub-id-type="doi">10.1371/journal.pone.0223124</pub-id></mixed-citation></ref>
	<ref id="r26"><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name name-style="western"><surname>McCurrie</surname>, <given-names>C. H.</given-names></string-name>, <string-name name-style="western"><surname>Crone</surname>, <given-names>D. L.</given-names></string-name>, <string-name name-style="western"><surname>Bigelow</surname>, <given-names>F.</given-names></string-name>, &amp; <string-name name-style="western"><surname>Laham</surname>, <given-names>S. M.</given-names></string-name></person-group> (<year>2018</year>). <article-title>Moral and Affective Film Set (MAAFS): A normed moral video database.</article-title> <source>PLoS One</source>, <volume>13</volume>(<issue>11</issue>), <elocation-id>e0206604</elocation-id>. <pub-id pub-id-type="doi">10.1371/journal.pone.0206604</pub-id><pub-id pub-id-type="pmid">30427897</pub-id></mixed-citation></ref>
<ref id="r27"><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name name-style="western"><surname>Nussinovitch</surname>, <given-names>U.</given-names></string-name>, <string-name name-style="western"><surname>Elishkevitz</surname>, <given-names>K. P.</given-names></string-name>, <string-name name-style="western"><surname>Katz</surname>, <given-names>K.</given-names></string-name>, <string-name name-style="western"><surname>Nussinovitch</surname>, <given-names>M.</given-names></string-name>, <string-name name-style="western"><surname>Segev</surname>, <given-names>S.</given-names></string-name>, <string-name name-style="western"><surname>Volovitz</surname>, <given-names>B.</given-names></string-name>, &amp; <string-name name-style="western"><surname>Nussinovitch</surname>, <given-names>N.</given-names></string-name></person-group> (<year>2011</year>). <article-title>Reliability of ultra-short ECG indices for heart rate variability.</article-title> <source>Annals of Noninvasive Electrocardiology</source>, <volume>16</volume>(<issue>2</issue>), <fpage>117</fpage>–<lpage>122</lpage>. <pub-id pub-id-type="doi">10.1111/j.1542-474X.2011.00417.x</pub-id><pub-id pub-id-type="pmid">21496161</pub-id></mixed-citation></ref>
<ref id="r28"><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name name-style="western"><surname>Philippot</surname>, <given-names>P.</given-names></string-name></person-group> (<year>1993</year>). <article-title>Inducing and assessing differentiated emotion-feeling states in the laboratory.</article-title> <source>Cognition and Emotion</source>, <volume>7</volume>(<issue>2</issue>), <fpage>171</fpage>–<lpage>193</lpage>. <pub-id pub-id-type="doi">10.1080/02699939308409183</pub-id><pub-id pub-id-type="pmid">27102736</pub-id></mixed-citation></ref>
<ref id="r29"><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name name-style="western"><surname>Rukavina</surname>, <given-names>S.</given-names></string-name>, <string-name name-style="western"><surname>Gruss</surname>, <given-names>S.</given-names></string-name>, <string-name name-style="western"><surname>Walter</surname>, <given-names>S.</given-names></string-name>, <string-name name-style="western"><surname>Hoffmann</surname>, <given-names>H.</given-names></string-name>, &amp; <string-name name-style="western"><surname>Traue</surname>, <given-names>H. C.</given-names></string-name></person-group> (<year>2015</year>). <article-title>OPEN_EmoRec_II — A multimodal corpus of human-computer interaction.</article-title> <source>International Journal of Computer, Electrical, Automation, Control and Information Engineering</source>, <volume>9</volume>(<issue>5</issue>), <fpage>1068</fpage>–<lpage>1074</lpage>.</mixed-citation></ref>
<ref id="r30"><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name name-style="western"><surname>Schaefer</surname>, <given-names>A.</given-names></string-name>, <string-name name-style="western"><surname>Nils</surname>, <given-names>F.</given-names></string-name>, <string-name name-style="western"><surname>Sanchez</surname>, <given-names>X.</given-names></string-name>, &amp; <string-name name-style="western"><surname>Philippot</surname>, <given-names>P.</given-names></string-name></person-group> (<year>2010</year>). <article-title>Assessing the effectiveness of a large database of emotion-eliciting films: A new tool for emotion researchers.</article-title> <source>Cognition and Emotion</source>, <volume>24</volume>(<issue>7</issue>), <fpage>1153</fpage>–<lpage>1172</lpage>. <pub-id pub-id-type="doi">10.1080/02699930903274322</pub-id></mixed-citation></ref>
<ref id="r31"><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name name-style="western"><surname>Siedlecka</surname>, <given-names>E.</given-names></string-name>, &amp; <string-name name-style="western"><surname>Denson</surname>, <given-names>T. F.</given-names></string-name></person-group> (<year>2019</year>). <article-title>Experimental methods for inducing basic emotions: A qualitative review.</article-title> <source>Emotion Review</source>, <volume>11</volume>(<issue>1</issue>), <fpage>87</fpage>–<lpage>97</lpage>. <pub-id pub-id-type="doi">10.1177/1754073917749016</pub-id></mixed-citation></ref>
<ref id="r32"><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name name-style="western"><surname>Soares</surname>, <given-names>A. P.</given-names></string-name>, <string-name name-style="western"><surname>Pinheiro</surname>, <given-names>A. P.</given-names></string-name>, <string-name name-style="western"><surname>Costa</surname>, <given-names>A.</given-names></string-name>, <string-name name-style="western"><surname>Frade</surname>, <given-names>C. S.</given-names></string-name>, <string-name name-style="western"><surname>Comesaña</surname>, <given-names>M.</given-names></string-name>, &amp; <string-name name-style="western"><surname>Pureza</surname>, <given-names>R.</given-names></string-name></person-group> (<year>2013</year>). <article-title>Affective auditory stimuli: Adaptation of the International Affective Digitized Sounds (IADS-2) for European Portuguese.</article-title> <source>Behavior Research Methods</source>, <volume>45</volume>(<issue>4</issue>), <fpage>1168</fpage>–<lpage>1181</lpage>. <pub-id pub-id-type="doi">10.3758/s13428-012-0310-1</pub-id><pub-id pub-id-type="pmid">23526255</pub-id></mixed-citation></ref>
	<ref id="r33"><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name name-style="western"><surname>Soleymani</surname>, <given-names>M.</given-names></string-name>, <string-name name-style="western"><surname>Lichtenauer</surname>, <given-names>J.</given-names></string-name>, <string-name name-style="western"><surname>Pun</surname>, <given-names>T.</given-names></string-name>, &amp; <string-name name-style="western"><surname>Pantic</surname>, <given-names>M.</given-names></string-name></person-group> (<year>2012</year>). <article-title>A multimodal database for affect recognition and implicit tagging.</article-title> <source>IEEE Transactions on Affective Computing</source>, <volume>3</volume>(<issue>1</issue>), <fpage>42</fpage>–<lpage>55</lpage>. <pub-id pub-id-type="doi">10.1109/T-AFFC.2011.25</pub-id></mixed-citation></ref>
<ref id="r34"><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name name-style="western"><surname>Takahashi</surname>, <given-names>N.</given-names></string-name>, <string-name name-style="western"><surname>Kuriyama</surname>, <given-names>A.</given-names></string-name>, <string-name name-style="western"><surname>Kanazawa</surname>, <given-names>H.</given-names></string-name>, <string-name name-style="western"><surname>Takahashi</surname>, <given-names>Y.</given-names></string-name>, &amp; <string-name name-style="western"><surname>Nakayama</surname>, <given-names>T.</given-names></string-name></person-group> (<year>2017</year>). <article-title>Validity of spectral analysis based on heart rate variability from 1-minute or less ECG recordings.</article-title> <source>PACE — Pacing and Clinical Electrophysiology</source>, <volume>40</volume>(<issue>9</issue>), <fpage>1004</fpage>–<lpage>1009</lpage>. <pub-id pub-id-type="doi">10.1111/pace.13138</pub-id><pub-id pub-id-type="pmid">28594089</pub-id></mixed-citation></ref>
<ref id="r35"><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name name-style="western"><surname>Toet</surname>, <given-names>A.</given-names></string-name>, <string-name name-style="western"><surname>Eijsman</surname>, <given-names>S.</given-names></string-name>, <string-name name-style="western"><surname>Liu</surname>, <given-names>Y.</given-names></string-name>, <string-name name-style="western"><surname>Donker</surname>, <given-names>S.</given-names></string-name>, <string-name name-style="western"><surname>Kaneko</surname>, <given-names>D.</given-names></string-name>, <string-name name-style="western"><surname>Brouwer</surname>, <given-names>A.-M.</given-names></string-name>, &amp; <string-name name-style="western"><surname>van Erp</surname>, <given-names>J. B. F.</given-names></string-name></person-group> (<year>2020</year>). <article-title>The relation between valence and arousal in subjective odor experience.</article-title> <source>Chemosensory Perception</source>, <volume>13</volume>, <fpage>141</fpage>–<lpage>151</lpage>. <pub-id pub-id-type="doi">10.1007/s12078-019-09275-7</pub-id></mixed-citation></ref>
<ref id="r36"><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name name-style="western"><surname>Xu</surname>, <given-names>G.</given-names></string-name>, <string-name name-style="western"><surname>Guo</surname>, <given-names>W.</given-names></string-name>, &amp; <string-name name-style="western"><surname>Wang</surname>, <given-names>Y.</given-names></string-name></person-group> (<year>2023</year>). <article-title>Subject-independent EEG emotion recognition with hybrid spatio-temporal GRU-Conv architecture.</article-title> <source>Medical &amp; Biological Engineering &amp; Computing</source>, <volume>61</volume>(<issue>1</issue>), <fpage>61</fpage>–<lpage>73</lpage>. <pub-id pub-id-type="doi">10.1007/s11517-022-02686-x</pub-id><pub-id pub-id-type="pmid">36322243</pub-id></mixed-citation></ref>
<ref id="r37"><mixed-citation publication-type="confproc">Zhang, L., Walter, S., Ma, X., Werner, P., Al-Hamadi, A., Traue, H. C., &amp; Gruss, S. (2016). “BioVid Emo DB”: A multimodal database for emotion analyses validated by subjective ratings, <italic>2016 IEEE Symposium Series on Computational Intelligence (SSCI)</italic> (pp. 1–6). Athens, Greece. <pub-id pub-id-type="doi">10.1109/SSCI.2016.7849931</pub-id></mixed-citation></ref>
</ref-list>
<bio id="bio1">
<p><bold>Vladimir Kosonogov</bold>, PhD, is the head of Affective Psychophysiology Laboratory at the Institute of Health Psychology, working on a broad spectrum of topics in affective psychophysiology.</p>
</bio>
<bio id="bio2">
<p><bold>Kirill Efimov</bold> is a junior researcher at the Institute for Cognitive Neuroscience, working on neuroimaging problems.</p>
</bio>
<bio id="bio3">
<p><bold>Olga Kuskova</bold> is a research assistant at the Institute for Cognitive Neuroscience.</p>
</bio>
<bio id="bio4">
<p><bold>Isak B. Blank</bold>, PhD, is a full professor at the Institute for Cognitive Neuroscience, working on neuroimaging and psychophysiology of emotion and cognition.</p>
</bio><fn-group><fn fn-type="financial-disclosure">
<p content-type="fn-title">The article was prepared within the framework of the Basic Research Program at HSE University.</p></fn><fn fn-type="conflict">
	<p content-type="fn-title">The authors have declared that no competing interests exist.</p></fn></fn-group>
	<sec sec-type="data-availability" id="das"><title>Data Availability</title>
		<p>The dataset for this study can be found at <xref ref-type="bibr" rid="r18">Kosonogov (2024)</xref>. S-Table 1 contains the averaged valence and arousal for each video with its duration and source. S-Table 2 contains the raw data of each rater. The videos can be found via links or upon request.</p>
	</sec>	
	
	
	<sec sec-type="supplementary-material" id="sp1"><title>Supplementary Materials</title>
		<table-wrap position="anchor">
			<table frame="void" style="background-color: #f3f3f3">
				<col width="60%" align="left"/>
				<col width="40%" align="left"/>
				<thead>
					<tr>
						<th>Type of supplementary materials</th>
						<th>Availability/Access</th>
					</tr>
				</thead>
				<tbody>
					<tr>
						<th colspan="2">Data</th>						
					</tr>
					<tr>
						<td>a. Silent video clips.</td>
						<td><xref ref-type="bibr" rid="r18">Kosonogov (2024)</xref></td>
					</tr>
					<tr>
						<td>b. S-Table 1 contains each video's averaged valence and arousal with duration and source.</td>
						<td><xref ref-type="bibr" rid="r18">Kosonogov (2024)</xref></td>
					</tr>
					<tr>
						<td>c. S-Table 2 contains raw data of each rater.</td>
						<td><xref ref-type="bibr" rid="r18">Kosonogov (2024)</xref></td>
					</tr>					
				</tbody>
			</table>
		</table-wrap>		
	</sec>		

<ack>
<p>The authors have no additional (i.e., non-financial) support to report.</p>
</ack>
</back>
</article>
