Research Reports

One-Minute Silent Video Clips: A Database of Valence and Arousal

Vladimir Kosonogov*1, Kirill Efimov2, Olga Kuskova2, Isak B. Blank2

Europe's Journal of Psychology, 2025, Vol. 21(3), 152–159, https://doi.org/10.5964/ejop.14685

Received: 2024-05-20. Accepted: 2025-04-11. Published (VoR): 2025-08-29.

Handling Editor: Alice Cancer, Università Cattolica del Sacro Cuore, Milan, Italy

*Corresponding author at: 190068, Griboyedova 123, Saint Petersburg, Russian Federation. E-mail: vkosonogov@hse.ru

This is an open access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

The article introduces a dataset consisting of 160 one-minute affective video clips with normative values of valence and arousal. Each video was evaluated by 30 subjects, while each subject evaluated at least 20 videos. Compared to previous attempts to collect affective videos, the dataset has several advantages. Firstly, the high number of videos in different valence categories allows researchers to compile appropriate subsets for their studies. Secondly, the approximately equal and conventional duration of videos makes it possible to use them in psychophysiological studies applying EEG, fMRI, peripheral polygraphy, posturography, TMS, etc. Thirdly, the exclusion of sound or speech that might provoke culture-dependent interpretation makes the dataset useful in different cultures. The relationship between valence and arousal showed a typical quadratic pattern, with very negative and very positive videos receiving higher levels of arousal. Several negative videos received greater arousal scores than the most positive ones, reflecting negativity bias. The dataset encompasses more than 50 videos of different valence (negative, neutral, and positive ones). We believe that it will permit researchers to select corresponding subsamples of videos from different categories for their studies.

Keywords: emotion, valence, arousal, video, database

In the field of emotion research, affective stimuli in the form of images, words, and video clips are frequently employed. The availability of these stimuli sets is constantly increasing, leading to a vast number of options for researchers to choose from. This can pose a challenge for those seeking the most suitable stimuli for their studies. However, ecologically valid media-based stimuli, like video clips, can offer meaningful insights into the natural progression of mental processes, including emotion regulation and speech perception (Jääskeläinen & Kosonogov, 2023). Additionally, research indicates that naturalistic paradigms, like presentation of video clips, music and spoken stories, may be more effective in capturing the attention of participants compared to event-related paradigms (Bezdek et al., 2017), and are preferable for emotion induction (Joseph et al., 2020; Siedlecka & Denson, 2019).

To our knowledge, the first affective video databases were introduced by Philippot (1993) and Gross and Levenson (1995). After them, in order to enlarge such stimulus sets, many researchers began to compile their own. Schaefer et al. (2010) introduced FilmStim, a collection of 70 movie clips designed to induce emotional responses in experimental psychology studies. Jenkins and Andrewes (2012) presented a dataset of verbal and non-verbal contemporary films. The DEAP database, created by Koelstra et al. (2012), is a publicly available database consisting of 120 one-minute-long musical video excerpts. These excerpts were rated by at least 14 volunteers using an online self-assessment tool measuring the levels of arousal, valence, and dominance induced by each clip. Another noteworthy database is the MAHNOB-HCI, released by Soleymani et al. (2012). This multimodal database consists of 20 short emotional excerpts taken from commercially produced movies and video websites. The clips were carefully selected to represent a wide range of emotions and are accompanied by physiological data, including facial expressions and heart rate variability. Carvalho et al. (2012) also contributed to the field by developing the Emotional Movie Database (EMDB), composed of 52 non-auditory film clips, each lasting 40 seconds, extracted from movies. The clips were selected to cover the entire affective space and were rated by 113 participants in terms of induced valence, arousal, and dominance on a nine-point scale. LIRIS-ACCEDE consists of 9,800 good-quality video excerpts with large content diversity (Baveye et al., 2015). Although its authors found a large number of high-arousal negative videos, their positive videos induced largely "passive", but not "active", arousal.
In other words, this database showed a negative correlation between valence and arousal, which contradicts previous literature on the curvilinear relationship between these scales. Chieti Affective Action Videos is another database designed for the experimental study of emotions (Di Crosta et al., 2020). It consists of 360 fifteen-second videos that depict only human actions, albeit spanning a good continuum of valence and arousal. "BioVid Emo DB" (Zhang et al., 2016) is another database, which contains not only self-reported data but also skin conductance level, electrocardiogram, and trapezius electromyogram of 86 subjects in response to affective videos.

In order to systematize all these properties and discrepancies between affective databases, Diconne et al. (2022) recently presented KAPODI, a searchable database of emotional stimulus sets in tabular form. They found 24 databases of affective videos. However, valence and arousal were collected in only nine of them, and only six contained neutral videos. In two of the nine databases, only 3 or 4 raters evaluated the stimuli. Only in one database was the duration of videos fixed (2 s), while in others it varied greatly (10 – 60 s or 25 – 161 s).

Overall, existing databases of emotion-eliciting video stimuli have various drawbacks. In many of them, the video duration varies, which does not permit researchers to select a sufficient number of equivalent videos (cropping videos is not appropriate, since raters evaluated the whole videos in the original databases). As for cross-cultural studies, many videos include voices, which prevents their use by scientists from other cultures, though we admit that presenting videos without audio can reduce the intensity of the elicited emotions. In the current study, we decided to select silent (muted) video clips when preparing the database. This was done in order to avoid cultural differences in the perception of voices and screams in different languages, situational nuances, faux pas, and other language- or culture-specific affective factors.

We chose 1-minute excerpts in order to enrich the stimulus material available to neuroscientists. Taking into account many central and peripheral variables, a one-minute epoch falls into a range appropriate for many physiological systems. To begin with electroencephalography (EEG), the engagement index has been studied in one-minute epochs (Libert & Van Hulle, 2019), as have EEG indices of valence and arousal (Xu et al., 2023). Many attempts at emotion recognition via EEG have used one-minute excerpts (Koelstra et al., 2012). Different metrics of subject synchronization, for example, inter-subject correlation of EEG, can be calculated for an epoch lasting one minute (Dmochowski et al., 2014). Likewise, 1-minute stimuli can be used for inter-subject correlation calculations in fMRI paradigms within a standard range of temporal resolution (Imhof et al., 2017). In a neuroimaging study, it is essential to have repeated emotional experiences, but, at the same time, the procedure is generally constrained by time. Therefore, researchers require video clips long enough to evoke and assess emotions each time, while still allowing multiple samples of the same emotion. That is why, we believe, one-minute video clips resolve the duration/repetition trade-off (for example, a one-hour study can include about 30 – 45 video clips, that is, 10 – 15 of each valence category, which may be enough for many designs).

As for the autonomic nervous system, Kreibig (2010) found that 1-minute epochs are the most popular segment type in studies of emotional responses. For instance, 1-minute intervals were found to be long enough to measure cardiac activity. Thus, Takahashi et al. (2017) found that low and high frequencies of heart rate variability could be extracted from one-minute epochs. In a study by Nussinovitch et al. (2011), RR intervals (intervals between two heartbeats) and the root mean square of successive differences in RR intervals (RMSSD) proved reliable over 1-minute intervals. However, for such short periods, the authors did not recommend calculating the standard deviation of RR intervals or the proportion of intervals differing by 50 msec from the preceding interval. Skin temperature and skin conductance can also be studied over intervals of around 1 minute (Kosonogov et al., 2017). As for postural control, Carpenter et al. (2001) suggested that a sample duration of at least 60 s should be used to obtain stable and reliable center-of-pressure characteristics.
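
The root mean square of successive differences (RMSSD) discussed above has a simple closed form; a minimal stdlib-only sketch, where the 1-minute RR series is invented for illustration:

```python
import math

def rmssd(rr_ms):
    """Root mean square of successive differences between RR intervals (ms),
    one of the short-term HRV indices reported reliable over 1-minute epochs."""
    diffs = [b - a for a, b in zip(rr_ms, rr_ms[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

# Hypothetical RR intervals (ms) from a 1-minute recording.
rr = [800, 810, 790, 805, 795, 820, 800]
value = rmssd(rr)  # about 17.6 ms for this toy series
```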

Method

First, six professional psychologists browsed the Internet (youtube.com, vk.com, videezy.com), each independently, in order to find 45 video clips (15 negative, 15 neutral, and 15 positive) of approximately one minute (they watched the videos without sound). Video clips were muted one-minute excerpts from fiction, educational, and documentary films or home movies. They represented a broad spectrum of plots, such as surgeries, suffering animals, starvation, and fights (negative); household, street, and industrial scenes (neutral); and landscapes, joyful children, sports, and romantic comedies (positive). This set of 270 videos was then evaluated by the same psychologists on a three-point scale (negative/neutral/positive). Of them, 175 videos received 100% inter-rater agreement (the same valence was assigned by all six raters). We then randomly selected 160 video clips (55 negative, 53 neutral, and 52 positive) which comprised the final database. The mean duration was 60.7 s, SD = 1.9 s, ranging from 46 to 65 seconds; 87% of videos lasted from 58 to 62 s and 74% from 59 to 61 s. As in many affective video databases, such as DEAP, MAHNOB-HCI, and OPEN_EmoRec_II (Rukavina et al., 2015), and also in a moral video database (McCurrie et al., 2018), we intended to obtain at least 30 ratings (subjects) for each video clip. Therefore, from these 160 video clips, we built eight separate samples of 20 clips each to present to the study subjects. In other words, each rater evaluated 20 video clips, while each video clip was assessed by 30 raters. Each of the eight samples contained supposedly pleasant, unpleasant, and neutral clips in approximately equal proportion (from 6 to 8 of each valence, 20 in total). The order of video clip presentation was chosen at random and corrected to avoid consecutive presentation of more than two clips of the same emotion category (Kosonogov, 2020).
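
The ordering constraint described above (no more than two consecutive clips of the same valence category) can be implemented by rejection sampling; a hypothetical sketch of such a procedure, not necessarily the one used in the study:

```python
import random

def constrained_order(clips, max_run=2, seed=None):
    """Shuffle (category, clip_id) pairs until no more than `max_run`
    clips of the same category appear consecutively."""
    rng = random.Random(seed)
    order = clips[:]

    def ok(seq):
        run = 1
        for prev, cur in zip(seq, seq[1:]):
            run = run + 1 if cur[0] == prev[0] else 1
            if run > max_run:
                return False
        return True

    while not ok(order):
        rng.shuffle(order)
    return order

# Hypothetical 20-clip sample: 7 negative, 7 neutral, 6 positive.
sample = ([("neg", i) for i in range(7)]
          + [("neu", i) for i in range(7)]
          + [("pos", i) for i in range(6)])
order = constrained_order(sample, seed=1)
```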
Participants were asked to evaluate each video clip in terms of valence (where 1 meant "Very negative" and 9 meant "Very pleasant") and arousal (where 1 meant "Very calm" and 9 meant "Very arousing") immediately after the presentation of each clip. There was a break of 12 seconds between video clip presentations to minimize carry-over effects. The experimental paradigm was created with PsychoPy software and run on the Pavlovia platform (pavlovia.org).

From 20/09/2023 to 06/02/2024, 185 raters (74% female, Mage = 24.8 years, SD = 8.0 years) participated in the study. Participants were recruited through electronic advertisements on the VK social network (they read instructions in Russian and reported living in Russia). Fifty-three participants took part in the study twice and one participant three times; all recurring participants evaluated different video clip samples to avoid reassessment. A between-subjects design was implemented: each person was randomly assigned to a group evaluating one 20-clip sample. All gave informed consent, were warned about violent content and blood in the scenes, and were granted an equivalent of 5.5 USD (at purchasing power parity). The university ethical committee approved the study (Nº 92, 19.09.2022).

Results

The mean valence was 4.99, SD = 1.60, min = 1.33, max = 7.80, while the mean arousal was 5.13, SD = 0.98, min = 2.63, max = 8.0. Fifty-five videos turned out to be negative (mean valence = 3.09, SD = 0.65; mean arousal = 5.60, SD = 0.90); 53 were neutral (mean valence = 5.25, SD = 0.50; mean arousal = 4.39, SD = 0.83); and 52 were positive (mean valence = 6.72, SD = 0.48; mean arousal = 5.38, SD = 0.74). As expected, the analysis of variance revealed that positive videos were rated as more positive than neutral ones, and neutral as more positive than negative ones, F(2,104) = 542.6, p < .001 (all post hoc test ps < .001). More importantly, negative and positive videos provoked greater arousal than neutral ones, F(2,104) = 31.8, p < .001, both post hoc test ps < .001, but arousal did not differ between negative and positive videos, post hoc test p = .37.
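
The one-way ANOVA logic behind these comparisons can be illustrated from first principles; a stdlib-only sketch with invented toy ratings, not the database's actual data:

```python
def one_way_anova_F(*groups):
    """F statistic for a one-way ANOVA: between-group mean square
    divided by within-group mean square."""
    all_vals = [x for g in groups for x in g]
    grand = sum(all_vals) / len(all_vals)
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    ss_within = sum((x - sum(g) / len(g)) ** 2 for g in groups for x in g)
    df_b = len(groups) - 1
    df_w = len(all_vals) - len(groups)
    return (ss_between / df_b) / (ss_within / df_w)

# Toy valence ratings for three hypothetical videos per category.
neg, neu, pos = [3.1, 2.8, 3.4], [5.2, 5.0, 5.5], [6.8, 6.5, 6.9]
F = one_way_anova_F(neg, neu, pos)  # large F: group means differ strongly
```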

As for the valence distribution, skewness (-0.27) and kurtosis (-1.05) were small, while a Kolmogorov-Smirnov test showed a non-normal distribution, W = .092, p < .001. Arousal turned out to be normally distributed, skewness = -0.03, kurtosis = -0.13, d = .057, p = .53 (Figure 1).
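
The shape descriptives reported here (sample skewness and excess kurtosis) can be computed directly from the moments; a stdlib-only sketch on illustrative data:

```python
def moments(xs):
    """Sample skewness and excess kurtosis (0 for a normal curve)."""
    n = len(xs)
    m = sum(xs) / n
    s2 = sum((x - m) ** 2 for x in xs) / n
    m3 = sum((x - m) ** 3 for x in xs) / n
    m4 = sum((x - m) ** 4 for x in xs) / n
    skew = m3 / s2 ** 1.5
    kurt = m4 / s2 ** 2 - 3  # subtracting 3 gives excess kurtosis
    return skew, kurt

# A symmetric toy sample: zero skewness, platykurtic (negative kurtosis).
skew, kurt = moments([1, 2, 3, 4, 5])
```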

Figure 1

The Distribution of (A) Valence and (B) Arousal of Affective Videos

The relationship between valence and arousal showed a typical quadratic pattern, F(2,157) = 87.66, p < .001, R2 = .52 (Figure 2). The same pattern was observed for the male, F(2,157) = 57.31, p < .001, R2 = .42, and the female subsamples, F(2,157) = 87.13, p < .001, R2 = .52. The internal consistency was questionable for valence (.60) but very good for arousal (.90). Correlations between means and standard deviations were significant: means and SDs of valence correlated negatively, r = -.19, p = .017, while for arousal the correlation was positive, r = .45, p < .001.
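
A quadratic model of arousal as a function of valence can be fit by ordinary least squares via the normal equations; a self-contained sketch with invented U-shaped toy data, not the database's ratings:

```python
def quadratic_fit(x, y):
    """Least-squares fit of y = a + b*x + c*x**2, solved with Cramer's rule."""
    n = len(x)
    Sx = sum(x); Sx2 = sum(v ** 2 for v in x)
    Sx3 = sum(v ** 3 for v in x); Sx4 = sum(v ** 4 for v in x)
    Sy = sum(y)
    Sxy = sum(a * b for a, b in zip(x, y))
    Sx2y = sum(a * a * b for a, b in zip(x, y))
    A = [[n, Sx, Sx2], [Sx, Sx2, Sx3], [Sx2, Sx3, Sx4]]
    B = [Sy, Sxy, Sx2y]

    def det3(M):
        return (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
                - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
                + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))

    D = det3(A)
    coeffs = []
    for i in range(3):
        Ai = [row[:] for row in A]
        for r in range(3):
            Ai[r][i] = B[r]
        coeffs.append(det3(Ai) / D)
    return coeffs  # a, b, c

# Toy ratings: arousal highest at both valence extremes of the 1-9 scale.
val = [1, 2, 3, 4, 5, 6, 7, 8, 9]
aro = [7, 6, 5, 4.5, 4, 4.5, 5, 6, 7]
a, b, c = quadratic_fit(val, aro)  # c > 0: upward (U-shaped) curvature
```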

Figure 2

The Relationship Between Valence and Arousal of Affective Videos for the Three Categories of All Subjects, Males, and Females

Discussion

The presented dataset provides normative values of valence and arousal for 160 one-minute affective video clips. Each video was evaluated by 30 subjects, while each subject evaluated at least 20 videos. The advantages of our dataset, in comparison to previous attempts to collect affective videos, are the following. First, the high number of videos in different valence categories allows researchers to compile appropriate subsets for their studies. Second, the equal and conventional duration of the videos makes it possible to use them in psychophysiological studies of many types (EEG, fMRI, peripheral polygraphy, posturography, TMS, etc.). Third, we excluded any sound or speech that might provoke culture-dependent interpretation (e.g., language jokes), thus making our database useful in different countries.

As in many previous studies, the relationship between valence and arousal showed a typical quadratic pattern; that is, very negative and very positive videos received higher arousal ratings. Neutral videos received the lowest arousal scores. However, we admit that the lowest arousal value was 2.63 (1 being the theoretical minimum on the scale from 1 to 9), while, for example, in the E-MOVIE database the lowest arousal value was 1.9 (Maffei & Angrilli, 2019). We suppose that the mere situation of watching videos from different categories keeps the arousal level higher than 1.

Our videos were distributed along a quadratic curve, typical for the valence-arousal relationship. Many other video databases found the same pattern (e.g., Ack Baraly et al., 2020). Curiously, some did not: in LIRIS-ACCEDE (Baveye et al., 2015), as well as in Gabert-Quillen et al. (2015), valence and arousal correlated negatively, meaning that pleasant videos were perceived as less arousing. In line with a broad spectrum of studies conducted with other types of affective stimuli, such as pictures (Lang et al., 2008), sounds (Soares et al., 2013), and odors (Toet et al., 2020), we believe that the quadratic pattern found in our study reflects the nature of the relationship between valence and arousal of affective videos.

It is worth noting that the internal consistency (Cronbach's alpha) for valence was questionable, while it was very good for arousal. This may indicate high ambivalence associated with some highly affective stimuli. Additionally, we highlight that the standard deviation for valence was higher than for arousal, contrary to findings from E-MOVIE (Maffei & Angrilli, 2019), where arousal displayed less variability than valence. In simpler terms, the videos in our dataset were generally seen as arousing stimuli, but their valence ratings were, on average, less consistent. Some neutral video clips received high arousal ratings, which may reflect not the neutral nature of these stimuli but rather a mixture of both negative and positive emotions. Future studies could employ not a single valence scale (from negative to positive) but two distinct scales of negativity and positivity for each stimulus (Ito & Cacioppo, 2005). Such an evaluation could identify ambivalent stimuli, which researchers may want to select or avoid, depending on their purposes. We also examined the correlation between means and standard deviations. There was a weak negative correlation between the means and SDs of valence; that is, SDs were higher for negative videos. The means and SDs of arousal displayed a moderate positive correlation, which aligns with the OASIS database (Kurdi et al., 2017).
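
Cronbach's alpha across raters, as used for the internal-consistency figures discussed above, can be sketched as follows; the toy rating matrix is illustrative, not the study's data:

```python
def cronbach_alpha(ratings):
    """Cronbach's alpha for a videos x raters matrix
    (rows = videos, columns = raters treated as 'items')."""
    k = len(ratings[0])          # number of raters
    cols = list(zip(*ratings))   # one column per rater

    def var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    totals = [sum(row) for row in ratings]
    return k / (k - 1) * (1 - sum(var(c) for c in cols) / var(totals))

# Two perfectly consistent hypothetical raters (constant offset): alpha = 1.
alpha = cronbach_alpha([[1, 2], [2, 3], [3, 4], [4, 5]])
```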

Curiously, several negative videos received greater arousal scores than the most positive ones. Three negative videos provoked arousal scores greater than 7, while no positive video received an arousal score higher than 7. Also, the most negative videos were very close to the negative pole (valence < 2), while no video received a valence score greater than 8. This seems to reflect the negativity bias (Ito et al., 1998): the most unpleasant pictures evoke greater emotional reactions than the most pleasant ones. In other words, threatening stimuli typically provoke faster and larger reactions because evolution favored behaviors related to survival, in comparison to responses to neutral or pleasant stimuli.

Nevertheless, our database encompasses more than 50 videos of each category (negative, neutral, and positive). Taking into account that the duration of future experiments and the number of videos to be presented are limited, we believe our database will permit researchers to select corresponding subsamples of videos from different categories.

Funding

The article was prepared within the framework of the Basic Research Program at HSE University.

Acknowledgments

The authors have no additional (i.e., non-financial) support to report.

Competing Interests

The authors have declared that no competing interests exist.

Author Contributions

Vladimir Kosonogov: Conceptualisation, Methodology, Formal Analysis, Writing - original draft, Writing - review & editing. Kirill Efimov: Software, Data Curation, Investigation, Writing - review & editing. Olga Kuskova: Software, Data Curation, Investigation, Writing - review & editing. Isak B. Blank: Conceptualisation, Methodology, Writing - review & editing.

Ethics Statement

The HSE University ethical committee approved the study (Nº 92, 19.09.2022). All participants expressed informed consent by launching the application and were warned before the beginning about violent content and blood in some videos.

Data Availability

The dataset for this study can be found at Kosonogov (2024). S-Table 1 contains the averaged valence and arousal for each video with its duration and source. S-Table 2 contains the raw data of each rater. The videos can be found via links or upon request.
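
Once S-Table 1 is downloaded, a subset of a given valence category can be selected programmatically; a sketch assuming hypothetical column names (`video`, `valence`, `arousal`), which may differ in the actual table:

```python
import csv
import io

# Inline stand-in for S-Table 1; in practice, open the downloaded CSV file.
stable1 = io.StringIO(
    "video,valence,arousal\n"
    "v001,3.1,5.8\n"
    "v002,5.2,4.1\n"
    "v003,6.9,5.5\n"
    "v004,2.4,6.3\n"
)

rows = list(csv.DictReader(stable1))
# Select videos below a valence threshold as a 'negative' subset.
negative = [r["video"] for r in rows if float(r["valence"]) < 4]
# → ['v001', 'v004']
```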

Supplementary Materials

Data (deposited at Kosonogov, 2024):
a. Silent video clips.
b. S-Table 1: each video's averaged valence and arousal with duration and source.
c. S-Table 2: raw data of each rater.

References

  • Ack Baraly, K. T., Muyingo, L., Beaudoin, C., Karami, S., Langevin, M., & Davidson, P. S. R. (2020). Database of Emotional Videos from Ottawa (DEVO). Collabra: Psychology, 6(1), Article 10. https://doi.org/10.1525/collabra.180

  • Baveye, Y., Dellandréa, E., Chamaret, C., & Chen, L. (2015). LIRIS-ACCEDE: A video database for affective content analysis. IEEE Transactions on Affective Computing, 6(1), 43-55. https://doi.org/10.1109/TAFFC.2015.2396531

  • Bezdek, M. A., Wenzel, W. G., & Schumacher, E. H. (2017). The effect of visual and musical suspense on brain activation and memory during naturalistic viewing. Biological Psychology, 129, 73-81. https://doi.org/10.1016/j.biopsycho.2017.07.020

  • Carpenter, M. G., Frank, J. S., Winter, D. A., & Peysar, G. W. (2001). Sampling duration effects on centre of pressure summary measures. Gait & Posture, 13(1), 35-40. https://doi.org/10.1016/S0966-6362(00)00093-X

  • Carvalho, S., Leite, J., Galdo-Álvarez, S., & Gonçalves, O. F. (2012). The Emotional Movie Database (EMDB): A self-report and psychophysiological study. Applied Psychophysiology and Biofeedback, 37(4), 279-294. https://doi.org/10.1007/s10484-012-9201-6

  • Di Crosta, A., La Malva, P., Manna, C., Marin, A., Palumbo, R., Verrocchio, M. C., Cortini, M., Mammarella, N., & Di Domenico, A. (2020). The Chieti Affective Action Videos database, a resource for the study of emotions in psychology. Scientific Data, 7(1), Article 32. https://doi.org/10.1038/s41597-020-0366-1

  • Diconne, K., Kountouriotis, G. K., Paltoglou, A. E., Parker, A., & Hostler, T. J. (2022). Presenting KAPODI — The searchable database of emotional stimuli sets. Emotion Review, 14(1), 84-95. https://doi.org/10.1177/17540739211072803

  • Dmochowski, J. P., Bezdek, M., & Abelson, B. (2014). Audience preferences are predicted by temporal reliability of neural processing. Nature Communications, 5, Article 4567. https://doi.org/10.1038/ncomms5567

  • Gabert-Quillen, C. A., Bartolini, E. E., Abravanel, B. T., & Sanislow, C. A. (2015). Ratings for emotion film clips. Behavior Research Methods, 47(3), 773-787. https://doi.org/10.3758/s13428-014-0504-z

  • Gross, J. J., & Levenson, R. W. (1995). Emotion elicitation using films. Cognition and Emotion, 9(1), 87-108. https://doi.org/10.1080/02699939508408966

  • Imhof, M. A., Schmälzle, R., Renner, B., & Schupp, H. T. (2017). How real-life health messages engage our brains: Shared processing of effective anti-alcohol videos. Social Cognitive and Affective Neuroscience, 12(7), 1188-1196. https://doi.org/10.1093/scan/nsx044

  • Ito, T., & Cacioppo, J. (2005). Variations on a human universal: Individual differences in positivity offset and negativity bias. Cognition and Emotion, 19(1), 1-26. https://doi.org/10.1080/02699930441000120

  • Ito, T. A., Larsen, J. T., Smith, N. K., & Cacioppo, J. T. (1998). Negative information weighs more heavily on the brain: The negativity bias in evaluative categorizations. Journal of Personality and Social Psychology, 75(4), 887-900. https://doi.org/10.1037/0022-3514.75.4.887

  • Jääskeläinen, I. P., & Kosonogov, V. (2023). Perspective taking in the human brain: Complementary evidence from neuroimaging studies with media-based naturalistic stimuli and artificial controlled paradigms. Frontiers in Human Neuroscience, 17, Article 1051934. https://doi.org/10.3389/fnhum.2023.1051934

  • Jenkins, L. M., & Andrewes, D. G. (2012). A new set of standardised verbal and nonverbal contemporary film stimuli for the elicitation of emotions. Brain Impairment, 13(2), 212-227. https://doi.org/10.1017/BrImp.2012.18

  • Joseph, D. L., Chan, M. Y., Heintzelman, S. J., Tay, L., Diener, E., & Scotney, V. S. (2020). The manipulation of affect: A meta-analysis of affect induction procedures. Psychological Bulletin, 146(4), 355-375. https://doi.org/10.1037/bul0000224

  • Koelstra, S., Muhl, C., Soleymani, M., Lee, J. S., Yazdani, A., Ebrahimi, T., Pun, T., Nijholt, A., & Patras, I. (2012). DEAP: A database for emotion analysis; using physiological signals. IEEE Transactions on Affective Computing, 3(1), 18-31. https://doi.org/10.1109/T-AFFC.2011.15

  • Kosonogov, V. (2020). The effects of the order of picture presentation on the subjective emotional evaluation of pictures. Psicologia, 34(2), 171-178. https://doi.org/10.17575/psicologia.v34i2.1608

  • Kosonogov, V. V. (2024). One-minute silent video clips: A database of valence and arousal [OSF project page containing study data and supplementary tables]. OSF. https://osf.io/ejvf4/

  • Kosonogov, V., De Zorzi, L., Honoré, J., Martínez-Velázquez, E. S., Nandrino, J. L., Martinez-Selva, J. M., & Sequeira, H. (2017). Facial thermal variations: A new marker of emotional arousal. PLoS One, 12(9), Article e0183592. https://doi.org/10.1371/journal.pone.0183592

  • Kreibig, S. D. (2010). Autonomic nervous system activity in emotion: A review. Biological Psychology, 84(3), 394-421. https://doi.org/10.1016/j.biopsycho.2010.03.010

  • Kurdi, B., Lozano, S., & Banaji, M. R. (2017). Introducing the Open Affective Standardized Image Set (OASIS). Behavior Research Methods, 49, 457-470. https://doi.org/10.3758/s13428-016-0715-3

  • Lang, P. J., Bradley, M. M., & Cuthbert, B. N. (2008). International Affective Picture System (IAPS): Instruction manual and affective ratings (Technical Report A-8). Center for Research in Psychophysiology, University of Florida.

  • Libert, A., & Van Hulle, M. M. (2019). Predicting premature video skipping and viewer interest from EEG recordings. Entropy, 21(10), Article 1014. https://doi.org/10.3390/e21101014

  • Maffei, A., & Angrilli, A. (2019). E-MOVIE — Experimental MOVies for induction of emotions in neuroscience: An innovative film database with normative data and sex differences. PLoS ONE, 14(10), Article e0223124. https://doi.org/10.1371/journal.pone.0223124

  • McCurrie, C. H., Crone, D. L., Bigelow, F., & Laham, S. M. (2018). Moral and Affective Film Set (MAAFS): A normed moral video database. PLoS One, 13(11), Article e0206604. https://doi.org/10.1371/journal.pone.0206604

  • Nussinovitch, U., Elishkevitz, K. P., Katz, K., Nussinovitch, M., Segev, S., Volovitz, B., & Nussinovitch, N. (2011). Reliability of ultra-short ECG indices for heart rate variability. Annals of Noninvasive Electrocardiology, 16(2), 117-122. https://doi.org/10.1111/j.1542-474X.2011.00417.x

  • Philippot, P. (1993). Inducing and assessing differentiated emotion-feeling states in the laboratory. Cognition and Emotion, 7(2), 171-193. https://doi.org/10.1080/02699939308409183

  • Rukavina, S., Gruss, S., Walter, S., Hoffmann, H., & Traue, H. C. (2015). OPEN_EmoRec_II — A multimodal corpus of human-computer interaction. International Journal of Computer, Electrical, Automation, Control and Information Engineering, 9(5), 1068-1074.

  • Schaefer, A., Nils, F., Sanchez, X., & Philippot, P. (2010). Assessing the effectiveness of a large database of emotion-eliciting films: A new tool for emotion researchers. Cognition and Emotion, 24(7), 1153-1172. https://doi.org/10.1080/02699930903274322

  • Siedlecka, E., & Denson, T. F. (2019). Experimental methods for inducing basic emotions: A qualitative review. Emotion Review, 11(1), 87-97. https://doi.org/10.1177/1754073917749016

  • Soares, A. P., Pinheiro, A. P., Costa, A., Frade, C. S., Comesaña, M., & Pureza, R. (2013). Affective auditory stimuli: Adaptation of the International Affective Digitized Sounds (IADS-2) for European Portuguese. Behavior Research Methods, 45(4), 1168-1181. https://doi.org/10.3758/s13428-012-0310-1

  • Soleymani, M., Lichtenauer, J., Pun, T., & Pantic, M. (2012). A multimodal database for affect recognition and implicit tagging. IEEE Transactions on Affective Computing, 3(1), 42-55. https://doi.org/10.1109/T-AFFC.2011.25

  • Takahashi, N., Kuriyama, A., Kanazawa, H., Takahashi, Y., & Nakayama, T. (2017). Validity of spectral analysis based on heart rate variability from 1-minute or less ECG recordings. PACE — Pacing and Clinical Electrophysiology, 40(9), 1004-1009. https://doi.org/10.1111/pace.13138

  • Toet, A., Eijsman, S., Liu, Y., Donker, S., Kaneko, D., Brouwer, A.-M., & van Erp, J. B. F. (2020). The relation between valence and arousal in subjective odor experience. Chemosensory Perception, 13, 141-151. https://doi.org/10.1007/s12078-019-09275-7

  • Xu, G., Guo, W., & Wang, Y. (2023). Subject-independent EEG emotion recognition with hybrid spatio-temporal GRU-Conv architecture. Medical & Biological Engineering & Computing, 61(1), 61-73. https://doi.org/10.1007/s11517-022-02686-x

  • Zhang, L., Walter, S., Ma, X., Werner, P., Al-Hamadi, A., Traue, H. C., & Gruss, S. (2016). "BioVid Emo DB": A multimodal database for emotion analyses validated by subjective ratings. In 2016 IEEE Symposium Series on Computational Intelligence (SSCI) (pp. 1-6). IEEE. https://doi.org/10.1109/SSCI.2016.7849931

About the Authors

Vladimir Kosonogov, PhD, is the head of Affective Psychophysiology Laboratory at the Institute of Health Psychology, working on a broad spectrum of topics in affective psychophysiology.

Kirill Efimov is a junior researcher at the Institute for Cognitive Neuroscience, working on neuroimaging problems.

Olga Kuskova is a research assistant at the Institute for Cognitive Neuroscience.

Isak B. Blank, PhD, is a full professor at the Institute for Cognitive Neuroscience, working on neuroimaging and psychophysiology of emotion and cognition.