<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE article
  PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Publishing DTD with MathML3 v1.2 20190208//EN" "JATS-journalpublishing1-mathml3.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:ali="http://www.niso.org/schemas/ali/1.0/" article-type="research-article" dtd-version="1.2" xml:lang="en">
<front>
<journal-meta><journal-id journal-id-type="publisher-id">EJOP</journal-id><journal-id journal-id-type="nlm-ta">Eur J Psychol</journal-id>
<journal-title-group>
<journal-title>Europe's Journal of Psychology</journal-title><abbrev-journal-title abbrev-type="pubmed">Eur. J. Psychol.</abbrev-journal-title>
</journal-title-group>
<issn pub-type="epub">1841-0413</issn>
<publisher><publisher-name>PsychOpen</publisher-name></publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="publisher-id">ejop.14957</article-id>
<article-id pub-id-type="doi">10.5964/ejop.14957</article-id>
<article-categories>
<subj-group subj-group-type="heading"><subject>Theoretical Contributions</subject></subj-group>

<subj-group subj-group-type="badge">
<subject>Code</subject>
<subject>Materials</subject>
</subj-group>

</article-categories>
<title-group>
<article-title>In Search of the Lost Interaction: A Theoretical and Methodological Framework for Researching Interactions</article-title>
<alt-title alt-title-type="right-running">Interactions and PPV</alt-title>
<alt-title specific-use="APA-reference-style" xml:lang="en">In search of the lost interaction: A theoretical and methodological framework for researching interactions</alt-title>
</title-group>
<contrib-group>
	<contrib contrib-type="author" corresp="yes"><name name-style="western"><surname>Schweizer</surname><given-names>Geoffrey</given-names></name><xref ref-type="corresp" rid="cor1">*</xref><xref ref-type="aff" rid="aff1"><sup>1</sup></xref></contrib>
<contrib contrib-type="author"><name name-style="western"><surname>Köppel</surname><given-names>Maximilian</given-names></name><xref ref-type="aff" rid="aff2"><sup>2</sup></xref></contrib>
<contrib contrib-type="editor">
<name>
<surname>Williams</surname>
<given-names>Matt</given-names>
</name>
<xref ref-type="aff" rid="aff3"/>
</contrib>
	<aff id="aff1"><label>1</label><institution content-type="dept">Department of Sport and Exercise Psychology</institution>, <institution>University of Heidelberg</institution>, <addr-line><city>Heidelberg</city></addr-line>, <country country="DE">Germany</country></aff>
	<aff id="aff2"><label>2</label><institution content-type="dept">Department of Medical Oncology</institution>, <institution>National Center for Tumor Diseases, Heidelberg University Hospital</institution>, <addr-line><city>Heidelberg</city></addr-line>, <country country="DE">Germany</country></aff>
	<aff id="aff3">Massey University, Auckland, <country>New Zealand</country></aff>
</contrib-group>
<author-notes>
	<corresp id="cor1"><label>*</label>Im Neuenheimer Feld 720, 69120 Heidelberg, Germany. Tel.: +49 6221 546033. <email xlink:href="geoffrey.schweizer@issw.uni-heidelberg.de">geoffrey.schweizer@issw.uni-heidelberg.de</email></corresp>
</author-notes>
<pub-date date-type="pub" publication-format="electronic"><day>29</day><month>08</month><year>2025</year></pub-date>
	<pub-date pub-type="collection" publication-format="electronic"><year>2025</year></pub-date>
<volume>21</volume>
<issue>3</issue>
<fpage>249</fpage>
<lpage>262</lpage>
<history>
<date date-type="received">
<day>02</day>
<month>07</month>
<year>2024</year>
</date>
<date date-type="accepted">
<day>18</day>
<month>06</month>
<year>2025</year>
</date>
</history>
<permissions><copyright-year>2025</copyright-year><copyright-holder>Schweizer &amp; Köppel</copyright-holder><license license-type="open-access" specific-use="CC BY 4.0" xlink:href="https://creativecommons.org/licenses/by/4.0/"><ali:license_ref>https://creativecommons.org/licenses/by/4.0/</ali:license_ref><license-p>This is an open-access article distributed under the terms of the Creative Commons Attribution (CC BY) 4.0 License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.</license-p></license></permissions>
<abstract>
<p>We suggest that psychological research into interaction effects might benefit from analyzing potential interactions from the perspective of the Positive Predictive Value (PPV). The PPV denotes the post-study probability that a claimed effect is true, based on the pre-study probability that said effect exists, the power of the respective test, and the significance level used for testing. We use the PPV to propose a framework structuring potential interaction effects based on their (theoretical) plausibility and their shape. Specifically, the position of a hypothesized interaction in the proposed framework may inform sample-size planning and the choice of alpha levels prior to a study, and it may inform confidence in results after a study. Finally, we present a heuristic approach for planning research on interactions based on R (the pre-study probability that an effect exists), the PPV (the post-study probability that a claimed effect is true) and α (the significance level used for significance testing). In doing so, we aim to provide a nuanced view on the feasibility of investigating interactional hypotheses, a view that is critical where needed but that at the same time does not discourage research on interactions.</p>
</abstract>
<kwd-group kwd-group-type="author"><kwd>replicability</kwd><kwd>positive predictive value</kwd><kwd>power</kwd><kwd>prior probability</kwd><kwd>pattern of means</kwd></kwd-group>

</article-meta>
</front>
<body>
	<sec sec-type="intro" id="intro"><title/>
<p>Recent replication studies suggest that interactions may have poorer replicability than main effects (<xref ref-type="bibr" rid="r2">Altmejd et al., 2019</xref>; <xref ref-type="bibr" rid="r28">OSC, 2015</xref>). This difference is so large that it emerges as one of the most important predictors of replicability (<xref ref-type="bibr" rid="r2">Altmejd et al., 2019</xref>). At the same time, several authors caution that testing at least certain kinds of interaction effects may be more difficult than researchers often assume, due to a need for large sample sizes (<xref ref-type="bibr" rid="r16">Gelman et al., 2021</xref>; <xref ref-type="bibr" rid="r21">Lakens, 2020</xref>; <xref ref-type="bibr" rid="r32">Simonsohn, 2014</xref>; <xref ref-type="bibr" rid="r34">Sommet et al., 2023</xref>). Given that interactions can be considered central to psychological research, these warnings are worrisome for at least two reasons: first, because they suggest that effects central to psychological theorizing may be harder to find and less robust than previously thought; second, because they might lead to the unintended consequence that researchers avoid examining interaction effects because they consider doing so to be economically infeasible.</p>
<p>The goal of the present paper is to investigate potential reasons for interactions’ low replicability, both reasons already presented in the existing literature and potentially novel ones. We propose that the Positive Predictive Value (PPV) constitutes a useful framework for assessing and integrating different factors contributing to interactions’ potentially low replicability, including, but not limited to statistical power when testing for interactions. The PPV denotes the post-study probability that a claimed effect is true, based on the pre-study probability that said effect exists, the power of the respective test and the significance level used for testing (<xref ref-type="bibr" rid="r5">Button et al., 2013</xref>; <xref ref-type="bibr" rid="r19">Ioannidis, 2005</xref>; <xref ref-type="bibr" rid="r37">Wilson &amp; Wixted, 2018</xref>). Furthermore, we aim to present a framework structuring potential interactions based on their (theoretical) plausibility and their shape, both of which may not only affect interactions’ replicability, but also be relevant when planning original studies and when interpreting their results. Finally, we present a heuristic approach that supports researchers in planning research on interactions based on R (the pre-study probability that an effect exists), the PPV (the post-study probability that a claimed effect is true) and α (the significance level used for significance testing). With this endeavor, we hope to provide a nuanced view on the feasibility of investigating interactional hypotheses, a view that is critical where needed but that at the same time does not discourage all research on interactions.<xref ref-type="fn" rid="fn1"><sup>1</sup></xref><fn id="fn1"><label>1</label>
<p>In a highly insightful paper, <xref ref-type="bibr" rid="r29">Rohrer and Arslan (2021)</xref> discuss several theoretical and methodological issues with interactions. However, they focus on “reliably mistaken conclusions” (<xref ref-type="bibr" rid="r29">Rohrer &amp; Arslan, 2021</xref>, p. 1). That is, in contrast to the issues discussed in the present paper, the issues discussed by Rohrer and Arslan are associated with effects that can be replicated, but that lead to the wrong conclusions regarding the research question.</p></fn></p></sec>
<sec sec-type="other1"><title>The Replicability of Interaction Effects</title>
<p>In 2019, Altmejd and colleagues investigated which study characteristics can be used to predict replication success via black-box statistical models. They employed data from four replication projects, namely the Reproducibility Project in Psychology (RPP; <xref ref-type="bibr" rid="r28">OSC, 2015</xref>), the Experimental Economics Replication Project (EERP; <xref ref-type="bibr" rid="r6">Camerer et al., 2016</xref>), Many Labs 1 (ML 1; <xref ref-type="bibr" rid="r20">Klein et al., 2014</xref>) and Many Labs 3 (ML 3; <xref ref-type="bibr" rid="r8">Ebersole et al., 2016</xref>), totaling 131 attempts at replicating empirical effects (for more details on the data used by <xref ref-type="bibr" rid="r2">Altmejd et al., 2019</xref> see Appendix A in <xref ref-type="bibr" rid="r30">Schweizer &amp; Köppel, 2025a</xref>). After training several models via machine-learning algorithms, Altmejd and colleagues identified variables predicting replication success. One of the variables most strongly predicting replication success (after <italic>p</italic>-values and effect sizes) is “whether central tests describe interactions between variables or (single-variable) main effects” (<xref ref-type="bibr" rid="r2">Altmejd et al., 2019</xref>, p. 11). In their data, “eight of 41 interaction effect studies replicated, while 48 of the 90 other studies did” (<xref ref-type="bibr" rid="r2">Altmejd et al., 2019</xref>, p. 11).<xref ref-type="fn" rid="fn2"><sup>2</sup></xref><fn id="fn2"><label>2</label>
<p>Of these 41 interaction effects, 37 come from the RPP, 3 from ML 3, and one from the SSRP (Social Sciences Replication Project; <xref ref-type="bibr" rid="r7">Camerer et al., 2018</xref>). Data from the SSRP were not used for setting up the model; however, they were used for validating the model via out-of-sample prediction.</p></fn> In other words, whereas roughly half of the investigated main effects replicated, only about one fifth of interactions did.</p>
	<p><xref ref-type="bibr" rid="r2">Altmejd and colleagues (2019)</xref> provide three tentative explanations for the lower replication success of interactions. First, they point out that interactions may be “slippery statistically” (<xref ref-type="bibr" rid="r2">Altmejd et al., 2019</xref>, p. 11), as they are subject to measurement error in more than one variable. Second, they speculate that interactions are particularly likely to be the result of <italic>p</italic>-hacking. For example, when researchers do not find a hypothesized main effect in the whole sample, they may look for the respective finding in different subsamples (e.g., based on gender or personality) until they find it in one. Third, they point out that underpowered research is less likely to replicate (e.g., <xref ref-type="bibr" rid="r13">Fraley &amp; Vazire, 2014</xref>) and that there is reason to believe that studies testing interactions may have on average lower power than studies testing main effects.</p>
<p>We are convinced that, in particular, the third explanation offered by Altmejd and colleagues (namely, that some interactions did not replicate because the original research was underpowered and thus the claimed effects were false) holds potential to explain why interactions are substantially less likely to replicate than main effects. This is because, in the meantime, several authors have shown that research on interaction effects may require larger sample sizes than researchers might have been aware of in order to achieve sufficient power (<xref ref-type="bibr" rid="r3">Blake &amp; Gangestad, 2020</xref>; <xref ref-type="bibr" rid="r21">Lakens, 2020</xref>; <xref ref-type="bibr" rid="r34">Sommet et al., 2023</xref>). Still, we agree with <xref ref-type="bibr" rid="r2">Altmejd and colleagues (2019</xref>, p. 11) when they conclude that “the replicability difference is striking and merits further study”. It is in this vein that we propose looking at the replicability of interactions from the perspective of the Positive Predictive Value (PPV). We consider the PPV helpful for this purpose because a) it offers a perspective that unifies several influential concepts in one equation, b) it can be linked to both planning studies and interpreting study results, and c) it can be utilized to predict and understand the replicability of study results. Furthermore, the PPV combines theoretical and methodological considerations, thus bridging a gap between theory of science and methodology.</p></sec>
<sec sec-type="other1"><title>The Positive Predictive Value (PPV)</title>
<p>The PPV is defined as the post-study probability that a claimed effect is true (<xref ref-type="bibr" rid="r5">Button et al., 2013</xref>; <xref ref-type="bibr" rid="r19">Ioannidis, 2005</xref>; <xref ref-type="bibr" rid="r37">Wilson &amp; Wixted, 2018</xref>). In the context of NHST this usually means the probability that a significant effect is true. The PPV is defined according to <xref ref-type="disp-formula" rid="e1">Equation 1</xref> (see below) and is a simple application of Bayes’ Theorem:</p><disp-formula id="e1">PPV = ([1 – β] × R) / ([1 – β] × R + α),<label>1</label></disp-formula>
<p><italic>R</italic> is the pre-study probability that an effect exists, (1 – β) is the study’s power, and α is the significance level used for significance testing. From a Bayesian perspective, the PPV equals the probability that a particular hypothesis H is true given that the statistical test reaches statistical significance (i.e., <italic>p</italic> &lt; α), <italic>p</italic>(H = True | <italic>p</italic> &lt; α). The power (1 – β) of the study equals the likelihood <italic>p</italic>(D|H = True), i.e., the probability of observing the data given that the hypothesis is true; <italic>R</italic> corresponds to the prior probability that the hypothesis is true, <italic>p</italic>(H = True); and α can be described as <italic>p</italic>(D|H0 = True), i.e., the probability of the data given that the effect does not exist. The denominator, [1 – β] × R + α, is therefore the marginal probability of observing any positive test result, regardless of whether it is a true or a false positive, <italic>p</italic>(D), which is necessary to normalize the PPV to a range between 0 and 1.</p>
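<p>Equation 1 is straightforward to compute. The following minimal Python sketch is our own illustration (the function and parameter names are not from the cited sources, and the example values are arbitrary):</p>

```python
def ppv(power, R, alpha):
    """Positive Predictive Value as in Equation 1.

    power -- 1 - beta, the probability of detecting a true effect
    R     -- pre-study probability that the effect exists
    alpha -- significance level used for testing
    """
    true_positive_rate = power * R  # significant results stemming from true effects
    return true_positive_rate / (true_positive_rate + alpha)

# A well-powered test of a plausible hypothesis (illustrative values):
print(round(ppv(power=0.80, R=0.50, alpha=0.05), 2))  # 0.89
# An underpowered test of a long-shot hypothesis:
print(round(ppv(power=0.20, R=0.10, alpha=0.05), 2))  # 0.29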
<p>Another way to illustrate the PPV is to consider statistical tests as analogous to diagnostic tests, where the existence of an effect resembles the existence of a disease. In this case, the PPV is the probability that a person who received a positive test result is actually sick. Statistical power (1 – β) would be the sensitivity of the test, i.e., the probability that the diagnostic identifies the disease if the tested person is indeed sick. The prior probability would equal the prevalence of the disease in the population, i.e., the unconditional probability that a random person has this particular disease. Finally, α is the probability of a false positive, i.e., that the diagnostic identifies a healthy person as sick (in diagnostic terms, 1 minus the specificity of the diagnostic). The denominator would therefore be the totality of positive test results.</p>
<p><italic>R</italic> is defined as the base rate of true effects among the population of investigated effects in a given field (<xref ref-type="bibr" rid="r37">Wilson &amp; Wixted, 2018</xref>). This parameter is also called “the unconditional probability of a true effect” (<xref ref-type="bibr" rid="r27">Miller &amp; Ulrich, 2016</xref>, p. 666). That is, when categorizing study outcomes as true positives, false positives, true negatives and false negatives, <italic>R</italic> equals the sum of the probabilities of true positives and false negatives (<xref ref-type="bibr" rid="r27">Miller &amp; Ulrich, 2016</xref>).</p>
<p><italic>Power</italic> is defined as the probability of finding an effect of a certain size or larger given that it exists, or as correctly rejecting a null hypothesis (<xref ref-type="bibr" rid="r5">Button et al., 2013</xref>). Power depends on effect sizes (the larger the effect size, the higher the power), sample sizes (the larger the sample, the higher the power), α (the lower alpha, the lower the power) and research designs (some research designs provide more power than others).</p>
	<p>From the formula presented above, it follows that for a given <italic>R</italic> and a given α, the lower the power, the lower the PPV (<xref ref-type="bibr" rid="r5">Button et al., 2013</xref>; <xref ref-type="bibr" rid="r37">Wilson &amp; Wixted, 2018</xref>). Likewise, it follows that for a given power and a given α, the lower <italic>R</italic>, the lower the PPV. Thus, two studies claiming an effect with the same power and the same level of significance may have different probabilities of said effects being true, given that their <italic>R</italic>s differed. In Appendix B (see <xref ref-type="bibr" rid="r30">Schweizer &amp; Köppel, 2025a</xref>) we present the results of simulations showing how the PPV changes depending on different values of <italic>R</italic> for given levels of (1 – β) and α; please see <xref ref-type="bibr" rid="r31">Schweizer and Köppel (2025b)</xref> for the code used for the simulations. These simulations demonstrate a well-known yet consequential effect, namely that the effect of <italic>R</italic> on the PPV is larger when power is lower (and vice versa).</p>
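<p>The full simulation code is available from <xref ref-type="bibr" rid="r31">Schweizer and Köppel (2025b)</xref>; the qualitative pattern can be sketched in a few lines of Python (our own sketch, not the authors’ code; all parameter values are arbitrary illustrations):</p>

```python
def ppv(power, R, alpha=0.05):
    # Equation 1: post-study probability that a significant effect is true
    return (power * R) / (power * R + alpha)

# How much does raising R from .10 to .50 improve the PPV,
# at low versus high power?
for power in (0.20, 0.80):
    gain = ppv(power, 0.50) - ppv(power, 0.10)
    print(f"power = {power:.2f}: PPV gain = {gain:.2f}")
```

<p>The printed gains (roughly .38 at power .20 versus .27 at power .80) mirror the pattern described above: the effect of <italic>R</italic> on the PPV is larger when power is lower.</p>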
<p>From the PPV it follows that fields with higher power and higher <italic>R</italic>s have more true findings than fields with lower power and lower <italic>R</italic>s (<xref ref-type="bibr" rid="r37">Wilson &amp; Wixted, 2018</xref>). As true effects are more likely to replicate than false effects (assuming that researchers employ dependable methodology), the former fields should have higher replicability than the latter ones (<xref ref-type="bibr" rid="r37">Wilson &amp; Wixted, 2018</xref>). In the next section we show how the PPV can be used to gain an improved understanding of interaction effects and their respective replicabilities. We will do so by focusing on the PPV’s different components as they relate to interaction effects. First, we will discuss power considerations and sample-size requirements regarding interaction effects. Second, we will discuss considerations regarding interaction effects’ prior probabilities of being true (i.e., their <italic>R</italic>s). In a third step, we will combine both considerations in a single framework. Next, we will offer some suggestions for researchers planning studies involving interactional hypotheses.</p></sec>
<sec sec-type="other1"><title>Interaction Effects From the Perspective of the PPV</title>
<sec><title>Power and Sample Size Requirements for Testing Interaction Effects</title>
<p>From the perspective of the PPV, power is not only defined as the probability of finding an effect of a certain size given that it exists, but it also influences a finding’s post-study probability of being true. However, what is known about power when testing for interaction effects? Recently, several authors have argued that depending on the nature of an interaction, large sample sizes may be required in order to achieve sufficient power (<xref ref-type="bibr" rid="r3">Blake &amp; Gangestad, 2020</xref>; <xref ref-type="bibr" rid="r21">Lakens, 2020</xref>; <xref ref-type="bibr" rid="r32">Simonsohn, 2014</xref>; <xref ref-type="bibr" rid="r34">Sommet et al., 2023</xref>). Furthermore, these authors have argued that many psychologists may not be aware of the real sample-size requirements when testing for interaction effects.</p>
<p>Generally, as with all kinds of effects, power, and thus sample size, depends on the size of an interaction’s effect.<xref ref-type="fn" rid="fn3"><sup>3</sup></xref><fn id="fn3"><label>3</label>
<p>Generally, the considerations presented in this manuscript apply to both categorical variables (e.g., an independent variable in an experimental design) and continuous variables (e.g., age as a moderator). For the sake of simplicity, many authors refer to categorical variables in a 2 x 2 design (e.g., <xref ref-type="bibr" rid="r21">Lakens, 2020</xref>; <xref ref-type="bibr" rid="r34">Sommet et al., 2023</xref>). Particularly when referring to their work, we adopted this practice.</p></fn> In order to understand sample-size requirements for testing for an interaction, it is necessary to distinguish between different kinds of interaction effects based on their shape, but also on their ‘function’ (<xref ref-type="bibr" rid="r3">Blake &amp; Gangestad, 2020</xref>; <xref ref-type="bibr" rid="r21">Lakens, 2020</xref>; <xref ref-type="bibr" rid="r32">Simonsohn, 2014</xref>; <xref ref-type="bibr" rid="r34">Sommet et al., 2023</xref>) (<xref ref-type="fig" rid="f1">Figure 1</xref>). Regarding their <italic>shape</italic>, one can distinguish between disordinal and ordinal interactions. A disordinal interaction (also called a cross-over interaction) “occurs when the group with the larger mean switches over” (<xref ref-type="bibr" rid="r21">Lakens, 2020</xref>, p. 3; see also <xref ref-type="bibr" rid="r34">Sommet et al., 2023</xref>). In ordinal interactions, “the mean of one group is always higher than the mean of the other group” (<xref ref-type="bibr" rid="r21">Lakens, 2020</xref>, p. 3; see also <xref ref-type="bibr" rid="r34">Sommet et al., 2023</xref>). A special kind of ordinal interaction is the attenuated interaction, which by definition serves a certain <italic>function</italic>: “Attenuated interactions […] characterize situations where the effect of a moderator is to reduce or eliminate, but not reverse, the main effect” (<xref ref-type="bibr" rid="r3">Blake &amp; Gangestad, 2020</xref>). Although attenuated interactions are ordinal in shape, their definition contains an additional element: they are defined in relation to a main effect that they are supposed to reduce (partially attenuated interaction) or eliminate (fully attenuated interaction).</p><fig id="f1" position="anchor" fig-type="figure" orientation="portrait"><label>Figure 1</label><caption>
<title>An Example of a Disordinal and an Example of a Partially Attenuated Interaction in a 2 x 2 Design</title></caption><graphic xlink:href="ejop.14957-f1" position="anchor" orientation="portrait"/></fig>
<p>Different authors have approached interaction effects’ power requirements from different perspectives. <xref ref-type="bibr" rid="r34">Sommet et al. (2023)</xref> aim at the most general recommendations, based on generic assumptions regarding the plausibility of different effect sizes for interactions. They assume that disordinal interactions typically have the same effect size as a median-sized main effect, that fully attenuated interactions typically have half the size of a median-sized main effect (and thus of a typical disordinal interaction), and that partially attenuated interactions typically have between one fourth and one fifth the size of a median-sized main effect (and thus of a typical disordinal interaction). Based on these assumptions they calculate the sample sizes required for finding interaction effects of the respective sizes.<xref ref-type="fn" rid="fn4"><sup>4</sup></xref><fn id="fn4"><label>4</label>
<p>For details on these calculations and the underlying formulas please see <xref ref-type="bibr" rid="r34">Sommet et al. (2023)</xref>.</p></fn> They show that (when accepting the assumptions regarding typical sizes), in order to obtain a power of .80, researchers need 256 participants to find a typical disordinal interaction, 1024 participants to find a typical fully attenuated interaction, and 5575 participants to find a typical partially attenuated interaction. They go on to conduct a metastudy suggesting that most psychological studies fall short of these sample sizes. One potential limitation of these conclusions is that Sommet et al. might underestimate the size of typical interactions (<xref ref-type="bibr" rid="r34">Sommet et al., 2023</xref>). The following two approaches circumvent this limitation by basing their recommendations not on general assumptions regarding effect sizes but on more specific comparisons.</p>
<p>Both <xref ref-type="bibr" rid="r32">Simonsohn (2014)</xref> and <xref ref-type="bibr" rid="r3">Blake and Gangestad (2020)</xref> avoid general assumptions regarding interactions’ effect sizes and instead compare interaction effects to main effects in the particular case of attenuated interactions. As attenuated interactions are defined in relation to a main effect, we can determine the sample size that is needed to find the attenuated interaction effect with the same power as the respective main effect, without making any assumptions about the effect sizes of (ordinal) interactions in general. Based on both mathematical derivations and simulations, <xref ref-type="bibr" rid="r32">Simonsohn (2014)</xref> and <xref ref-type="bibr" rid="r34">Sommet et al. (2023)</xref> show that when researchers need a specific sample size for finding a certain main effect with a certain power, they need four times that sample size for finding the respective <italic>fully attenuated</italic> interaction with the same power. When they are looking for a <italic>partially attenuated</italic> interaction, the required sample size is even larger than that needed for finding the main effect (<xref ref-type="bibr" rid="r32">Simonsohn, 2014</xref>; <xref ref-type="bibr" rid="r34">Sommet et al., 2023</xref>). To the extent that published original studies did not meet these rather challenging sample-size requirements, their effects were not adequately powered and are thus less likely to replicate. <xref ref-type="bibr" rid="r3">Blake and Gangestad (2020)</xref> provide further illustrations of attenuated interactions’ need for large sample sizes using real-world examples from published studies.</p>
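<p>The quadrupling rule follows from the fact that the required sample size scales with the inverse square of the effect size: halving the effect doubles the ratio (z<sub>α/2</sub> + z<sub>power</sub>)/d and thus quadruples n. A rough numerical sketch in Python (a normal approximation for a two-group comparison; our own illustration with an arbitrary hypothetical effect size, not the cited authors’ derivations):</p>

```python
from math import ceil
from statistics import NormalDist

def n_per_group(d, alpha=0.05, power=0.80):
    """Approximate per-group n needed to detect a standardized
    effect of size d in a two-group comparison (normal approximation)."""
    z = NormalDist().inv_cdf
    return ceil(2 * ((z(1 - alpha / 2) + z(power)) / d) ** 2)

d_main = 0.5                       # hypothetical main effect
n_main = n_per_group(d_main)
# A fully attenuated interaction contrast is half that size,
# so the required n is (about) four times larger:
n_interaction = n_per_group(d_main / 2)
print(n_main, n_interaction)
```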
<p><xref ref-type="bibr" rid="r21">Lakens (2020)</xref> compares ordinal and disordinal interactions “where the largest simple comparison has the same effect size”. Simple comparisons refer to the effects of one independent variable separately at each level of the second independent variable. Regarding the shape of interaction effects (or their “pattern of means”, <xref ref-type="bibr" rid="r21">Lakens, 2020</xref>), Lakens shows that “<italic>in two studies where the largest simple comparison has the same effect size</italic>, a study with a disordinal interaction has much higher power than a study with an ordinal interaction”. In a specific example, <xref ref-type="bibr" rid="r21">Lakens (2020)</xref> compares two hypothetical studies with the same sample sizes, one with an ordinal and one with a disordinal interaction. In both studies, the largest simple comparison has the same size. In this example, the power for finding the disordinal interaction is nearly three times as high as the power for finding the ordinal interaction. It follows that when planning sample sizes for finding interaction effects, researchers are well advised to take the shape of the respective interaction into account.</p>
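<p>This contrast can be illustrated with a small Monte Carlo simulation (our own sketch, not Lakens’s code; the cell means, cell size, and number of simulation runs are arbitrary choices). Both patterns below share a largest simple comparison of d = 0.5, but the cross-over pattern produces a larger interaction contrast and therefore markedly higher power:</p>

```python
import random
from statistics import fmean

def interaction_power(cell_means, n=50, sims=2000, sd=1.0, seed=1):
    """Monte Carlo power of the 2 x 2 interaction contrast
    L = (m11 - m12) - (m21 - m22), tested with a z-test (sd known)."""
    rng = random.Random(seed)
    se = sd * (4 / n) ** 0.5  # standard error of the contrast
    hits = 0
    for _ in range(sims):
        m11, m12, m21, m22 = (
            fmean(rng.gauss(mu, sd) for _ in range(n)) for mu in cell_means
        )
        if abs(((m11 - m12) - (m21 - m22)) / se) > 1.96:
            hits += 1
    return hits / sims

d = 0.5  # largest simple comparison, identical in both designs
power_disordinal = interaction_power((0, d, d, 0))  # cross-over pattern
power_ordinal = interaction_power((0, 0, 0, d))     # only one cell deviates
print(power_disordinal, power_ordinal)
```

<p>With these (arbitrary) settings the disordinal pattern is detected far more often than the ordinal one, in line with the conclusion that the shape of the hypothesized interaction should enter sample-size planning.</p>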
<p>However, it would be wrong to conclude that interactions always require large sample sizes in order to be detected with sufficient power. First, as noted above, power, and thus sample size depends on the size of the respective effect. Thus, if researchers can reasonably assume a large effect size, a correspondingly small sample size will be required. Second, as shown by <xref ref-type="bibr" rid="r21">Lakens (2020)</xref>, effect sizes for disordinal interactions are larger than effect sizes for ordinal interactions, given a largest simple comparison of the same size. Thus, when a theory allows researchers to hypothesize a disordinal interaction, they need a smaller sample size in order to obtain a certain power than when the theory allows hypothesizing an ordinal interaction, again given a largest simple comparison of the same size.</p>
<p>In past publications, thinking about these issues has sometimes been obscured because a) researchers referred to interactions without specifying their size or their type, b) researchers explicitly or implicitly assumed that interaction effects can never be larger than main effects, or c) researchers only referred to ordinal interactions in their considerations or examples (<xref ref-type="bibr" rid="r21">Lakens, 2020</xref>). For example, <xref ref-type="bibr" rid="r16">Gelman and colleagues (2021</xref>, p. 301) use an ordinal interaction of either the same size or half the size of a main effect when discussing why “[i]nteractions are harder to find than main effects”. Likewise, Maxwell and colleagues assume that interactions are often small and ordinal (<xref ref-type="bibr" rid="r26">Maxwell et al., 2018</xref>). Thus, it is sometimes assumed that researchers always need large samples to detect interaction effects, because interaction effects must be smaller than main effects, either per se or because they are attenuated interactions. Having looked at interactions from the perspective of power and sample-size planning, we will now turn to the next element of the PPV, namely interactions’ pre-study probabilities of being true.</p></sec>
<sec><title>Interactions’ Pre-Study Probabilities of Being True (R)</title>
<p>Hypothesized interaction effects, like all other hypothesized effects, have different pre-study probabilities of being true (e.g., <xref ref-type="bibr" rid="r5">Button et al., 2013</xref>; <xref ref-type="bibr" rid="r37">Wilson &amp; Wixted, 2018</xref>). In other words, they vary continuously on a dimension from zero (low <italic>R</italic>) to one (high <italic>R</italic>). Whereas <italic>R</italic> as a theoretical parameter has a clear definition (i.e., the base rate of true effects among all effects tested in a field), it is somewhat less clear which factors affect <italic>R</italic> and how researchers are supposed to estimate <italic>R</italic>. In a very general way, <italic>R</italic> depends on “established knowledge” (<xref ref-type="bibr" rid="r37">Wilson &amp; Wixted, 2018</xref>, p. 191): The more prior knowledge we have in a specific field, the fewer false effects are subjected to empirical tests, and thus the base rate of true effects among all effects tested in this field increases. In a more specific way, <italic>R</italic> is supposed to be higher to the extent that a hypothesis, a) “is guided by detailed, quantitative and well-supported theories” (<xref ref-type="bibr" rid="r27">Miller &amp; Ulrich, 2016</xref>, p. 685), b) is more strongly supported by previous evidence, and c) is based on common experience (<xref ref-type="bibr" rid="r27">Miller &amp; Ulrich, 2016</xref>; <xref ref-type="bibr" rid="r37">Wilson &amp; Wixted, 2018</xref>).</p>
<p>Thus, interactions can be considered to be towards the low-<italic>R</italic> end of the dimension when they are, a) based on weaker theorizing, b) less strongly supported by previous research, and c) less in line with common experience. Interactions can be considered to be towards the high-<italic>R</italic> end of the dimension when they are, a) based on stronger theorizing, b) more strongly supported by previous research, and c) more in line with common experience. For example, when researchers find a main effect and then test for an interaction of their main effect with gender, without there being a theoretical reason to do so or prior research reporting such an interaction, their hypothesized effect would have a low prior probability of being true (i.e., it constitutes a low-<italic>R</italic> interaction). However, when researchers have a theoretical reason to expect an interaction effect, or when substantial prior research has reported similar effects, then their hypothesized effect would have a higher prior probability of being true (i.e., it constitutes a high-<italic>R</italic> interaction).</p>
<p>To the extent that low-<italic>R</italic> interactions make up a sizeable proportion of all interactions tested in a field, interaction effects in this field will on average have low PPVs and thus poor replicability. Thus, one reason for interactions’ poorer replicability may be that interactions tested in recent psychological research <italic>on average</italic> had lower <italic>R</italic>s than main effects. More precisely, we will argue that some interactions are particularly prone to having low <italic>R</italic>s (whereas others may have high <italic>R</italic>s).</p></sec>
<sec><title>The Replicability of Interaction Effects: The Current Perspective</title>
<p>From the current perspective, interaction effects should have a lower replicability than main effects when, a) reported interaction effects in a field have <italic>on average</italic> lower <italic>R</italic>s than main effects in this field, b) reported interaction effects in a field are <italic>on average</italic> based on studies with lower power than main effects in this field, or c) a combination of the previous factors applies. We are convinced that all of these factors were at play in past research on interaction effects: As described above, sample size requirements based on interactions’ shape have only been described recently (<xref ref-type="bibr" rid="r21">Lakens, 2020</xref>; <xref ref-type="bibr" rid="r32">Simonsohn, 2014</xref>; <xref ref-type="bibr" rid="r34">Sommet et al., 2023</xref>). Thus, many studies were probably strongly underpowered, even more so than they were for finding main effects (<xref ref-type="bibr" rid="r13">Fraley &amp; Vazire, 2014</xref>). Furthermore, we assume that more interactions than main effects had a low pre-study probability of being true (in other words, that interaction effects on average had a lower <italic>R</italic> than main effects) due to common research strategies outlined below.</p>
<p>First, so-called ‘control variables’ (e.g., gender, age) may be included as moderators in research designs based on sparse theoretical foundations and few previous findings, making them low <italic>R</italic> by default. Second, researchers may first establish a main effect on theoretical or empirical grounds and then start looking for an interaction qualifying (or moderating) that main effect. In this case, the resulting effect will be low <italic>R</italic> and have an attenuating shape. This tendency may have been reinforced by the ‘hype’ around moderation and mediation in at least some subfields of psychological research, where finding evidence for mediation greatly increased a paper’s chances of being published in top journals (<xref ref-type="bibr" rid="r11">Fiedler et al., 2018</xref>). Although mediational analysis itself does not involve testing for interactions, interactions still play a role in mediational analysis (<xref ref-type="bibr" rid="r25">MacKinnon et al., 2007</xref>): First, an assumption underlying tests of pure mediation is that the independent variable and the mediator variable do not interact; thus, any test for mediation should include a test for an interaction between the mediator and the independent variable (<xref ref-type="bibr" rid="r25">MacKinnon et al., 2007</xref>). Second, researchers may test for moderated mediation or mediated moderation, both of which again require testing for interactions (as moderation is statistically modelled as an interaction in moderator analysis) (<xref ref-type="bibr" rid="r25">MacKinnon et al., 2007</xref>). Furthermore, it seems safe to say that many researchers regarded interactions as more interesting and more newsworthy than main effects, again increasing the incentive to search for these interactions (see <xref ref-type="bibr" rid="r29">Rohrer &amp; Arslan, 2021</xref>, for a similar point). Third, another common research strategy may have been to try to combine two (more or less established) theories in a novel way. The resulting hypotheses were often interactional in nature and, due to their novelty, low <italic>R</italic>.</p>
<p>We would like to emphasize strongly that we do not object to any of the above-described research strategies <italic>per se</italic>. Quite the contrary, some of them may be essential for progress in psychology; however, <italic>only</italic> if performed correctly. Thus, both when planning studies and when interpreting them, researchers should be aware of the role of interactions’ shape and interactions’ <italic>R</italic>, and they should proceed accordingly. To the extent that this was not the case in past studies, interactions may have had both lower power and lower <italic>R</italic>s than main effects and should thus be less likely to replicate. For future research, we hope that researchers take the respective factors into account, starting with the choice of research strategy, continuing through study planning, and ending with the interpretation of results. The following framework might be helpful to this end.</p>
<sec><title>Combining Power and R for Interaction Effects</title>
<p>It seems possible to combine the shape of hypothesized interaction effects (with its implications for sample size planning) with their prior probability of being true, leading to a two-dimensional grid (<xref ref-type="fig" rid="f2">Figure 2</xref>). The y-axis represents the shape of interaction effects and depicts a continuum from ordinal to disordinal (or crossover). The x-axis represents the prior probability that interaction effects are true and depicts a continuum from low-<italic>R</italic> to high-<italic>R</italic>.</p><fig id="f2" position="anchor" fig-type="figure" orientation="portrait"><label>Figure 2</label><caption>
<title>Combination of Interaction Effects’ Shape With Their <italic>R</italic> in a Two-Dimensional Grid</title><p><italic>Note</italic>. The y-axis represents the shape of interaction effects from ordinal to disordinal. The x-axis represents their prior probability of being true from low to high.</p></caption><graphic xlink:href="ejop.14957-f2" position="anchor" orientation="portrait"/></fig>
<p>Sector A appears to combine the best of both worlds: A high prior probability of being true with a shape that requires comparatively small sample sizes in order to achieve sufficient power. Researchers who can predict a disordinal interaction on theoretical grounds can thus hope to find an interaction effect that has a high likelihood of being true, and thus a high likelihood of being replicable, based on comparatively small sample sizes. Consequently, it is precisely these kinds of interactions in which readers should place the most trust.</p>
<p>Sector B contains potential interaction effects that appear to be important for theoretical and practical reasons but that are hard to detect statistically: Both from a theoretical and an applied perspective, it might be important to know whether a main effect is attenuated (e.g., when an intervention works better for women than for men). Thus, research on these effects seems worthwhile. However, researchers are well advised to utilize appropriate sample sizes and appropriate sample size planning. Sample size planning should be based on expected patterns of means (<xref ref-type="bibr" rid="r21">Lakens, 2020</xref>) and the exact nature of the expected attenuation (<xref ref-type="bibr" rid="r32">Simonsohn, 2014</xref>; see also <xref ref-type="bibr" rid="r3">Blake &amp; Gangestad, 2020</xref>, for research on attenuated interactions). To this end, it might be helpful to use software that allows entering hypothesized effects directly via their means (e.g., Superpower; <xref ref-type="bibr" rid="r23">Lakens &amp; Caldwell, 2021</xref>; see <xref ref-type="bibr" rid="r23">Lakens &amp; Caldwell, 2021</xref>, for more suggestions) or that allows drawing the shape of the expected interaction (INTxPower; <xref ref-type="bibr" rid="r34">Sommet et al., 2023</xref>). Depending on underlying effect sizes and the exact nature of the assumed attenuation, sample sizes may need to be very large in order to achieve appropriate power. Attenuated interaction effects found in small-sample studies should be treated with care.</p>
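The role of an interaction’s shape in sample size planning can be illustrated with a short Monte Carlo sketch. This is our own minimal illustration, not the Superpower or INTxPower implementations cited above; the cell means, the common standard deviation, and the known-variance z-test for the interaction contrast are all simplifying assumptions.

```python
import random
from math import sqrt
from statistics import NormalDist

def interaction_power(means, sd=1.0, n_per_cell=50, alpha=0.05,
                      n_sims=4000, seed=1):
    """Approximate power to detect the interaction in a 2 x 2
    between-participants design by simulating the hypothesized
    pattern of cell means (cf. Lakens, 2020)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_sims):
        # sample mean of each cell under the hypothesized population
        xbar = [[sum(rng.gauss(m, sd) for _ in range(n_per_cell)) / n_per_cell
                 for m in row] for row in means]
        # interaction contrast: difference of the two simple effects
        contrast = (xbar[0][0] - xbar[0][1]) - (xbar[1][0] - xbar[1][1])
        se = sd * sqrt(4 / n_per_cell)  # known-sigma (z-test) simplification
        p = 2 * (1 - NormalDist().cdf(abs(contrast) / se))
        hits += p < alpha
    return hits / n_sims

# fully attenuated ordinal interaction: the effect of A exists only at B1
ordinal = ((0.5, 0.0), (0.0, 0.0))
# disordinal (crossover) interaction with the same simple-effect size
crossover = ((0.5, 0.0), (0.0, 0.5))
```

With 50 participants per cell and these hypothetical means, the crossover pattern reaches roughly .94 power while the fully attenuated pattern reaches only about .42, mirroring the point that ordinal interactions of the same simple-effect size demand much larger samples.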
<p>Sector C can be considered the ‘danger zone’ of research on interaction effects, as it combines the worst of both worlds: A low prior probability of being true with a shape that requires large sample sizes in order to achieve sufficient power. In fact, depending on underlying effect sizes and the exact nature of the assumed attenuation, these requirements may not only be challenging but sometimes prohibitive (<xref ref-type="bibr" rid="r16">Gelman et al., 2021</xref>; <xref ref-type="bibr" rid="r32">Simonsohn, 2014</xref>). Interaction effects fall in this sector when researchers test for attenuated moderation without having strong theoretical reason to do so. An example of this might be ordinal interactions with gender that arise as a result of including gender as a control variable. If researchers insist on testing for such interactions, they are well advised to base their conclusions on sufficiently large sample sizes, taking both the low <italic>R</italic> and the interaction’s shape into account. Again, it might be helpful to use software that allows entering hypothesized effects directly via their means (<xref ref-type="bibr" rid="r23">Lakens &amp; Caldwell, 2021</xref>; <xref ref-type="bibr" rid="r34">Sommet et al., 2023</xref>). Furthermore, when interpreting such interactions, both authors and readers of the literature are well advised to consider that these interactions have a higher likelihood of being false positives and a lower likelihood of replicating, particularly when they are based on small sample sizes. Thus, neither theoretical conclusions, consequential real-world decisions, nor the allocation of future research resources should easily be based on them.</p>
<p>Sector D contains potential interaction effects that seem to be less problematic than the ones in Sector C, but more problematic than the ones in Sector A. Although these effects also have a comparatively low prior probability of being true, they are less challenging from a statistical perspective, as they require smaller sample sizes due to their disordinal shape. Still, researchers might be well advised to treat them with care after they have been reported.</p>
<p>As the sample sizes needed to investigate predictions from Sectors C and D can quickly become very large, researchers might want to ask themselves whether the potential benefits are worth the costs (e.g., <xref ref-type="bibr" rid="r27">Miller &amp; Ulrich, 2016</xref>). For example, if investigating one potential interaction effect from Sector C requires a number of participants that would suffice for investigating several hypotheses from Sectors A or B, researchers need to decide whether the potential increase in knowledge regarding the effect is worth the effort.</p>
<sec><title>General Recommendations for Risky Predictions</title>
<p><xref ref-type="bibr" rid="r21">Lakens (2020)</xref> cautions researchers not to shy away from “risky predictions”, just because these seem effortful to investigate. We fully agree. In the present framework, risky predictions can be found in Sectors C and D (i.e., risky predictions are predictions with a low prior probability of being true). The present framework allows deriving some recommendations on how to investigate risky predictions (see also <xref ref-type="bibr" rid="r37">Wilson &amp; Wixted, 2018</xref>, for recommendations on low-<italic>R</italic> research; and <xref ref-type="bibr" rid="r3">Blake &amp; Gangestad, 2020</xref>, for research on attenuated interactions). Generally, for given <italic>R</italic>s, PPVs can be improved by increasing power and by lowering alpha. Thus, when investigating low-<italic>R</italic> hypotheses, researchers can, a) increase power by increasing sample sizes (while holding alpha constant), b) increase power by utilizing suitable experimental designs (e.g., within-participants designs have higher power than between-participants designs), <italic>if possible</italic>, c) increase power by increasing the reliability of the dependent variable<xref ref-type="fn" rid="fn5"><sup>5</sup></xref><fn id="fn5"><label>5</label>
<p>Whereas most of the literature on improving power in psychological research focuses on sample sizes, improving reliability seems somewhat overlooked, potentially because researchers assume that their variables are reliable anyway. However, if there is room for improving reliability, the effects on power are substantial (see <xref ref-type="bibr" rid="r34">Sommet et al., 2023</xref>, for an illustration).</p></fn> (<xref ref-type="bibr" rid="r34">Sommet et al., 2023</xref>), and d) use lower alpha levels (while holding power constant) (see also <xref ref-type="bibr" rid="r4">Benjamin et al., 2018</xref>, for the benefits of lowering alpha levels). These recommendations raise the question: Increase compared to what, or lower to what extent? Unfortunately, no accepted guidelines exist yet. However, one possibility would be to define a desired PPV for an assumed <italic>R</italic> and then to solve <xref ref-type="disp-formula" rid="e1">Equation 1</xref> for power and alpha. This, of course, requires a precise assumption regarding <italic>R</italic>. <xref ref-type="bibr" rid="r27">Miller and Ulrich (2016</xref>, p. 685) discuss different methods of estimating <italic>R</italic>. These methods range from “researchers’ guesses” through “record keeping, and discussion with colleagues” to analyzing available evidence with specific statistical techniques (e.g., <italic>p</italic>-curves; <xref ref-type="bibr" rid="r33">Simonsohn et al., 2014</xref>). In addition to these general considerations, in the next paragraph we offer a heuristic approach for approximating the strength of <italic>R</italic> when direct evidence is unavailable, i.e., when studies have not yet tested the hypothesis in question.</p></sec>
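Defining a desired PPV and solving Equation 1 for the remaining parameter can be sketched in a few lines. We assume here that Equation 1 has the familiar form PPV = power × R / (power × R + α), with R as pre-study odds and no bias term; the function names are ours.

```python
def ppv(r, power, alpha):
    """Positive predictive value, assuming Equation 1 has the form
    PPV = power * R / (power * R + alpha), with R as pre-study odds
    and no bias term."""
    return power * r / (power * r + alpha)

def alpha_for_ppv(r, power, target_ppv):
    """Solve the same equation for the alpha level required to reach
    a desired PPV at a given R and power."""
    return power * r * (1 - target_ppv) / target_ppv

# a low-R hypothesis tested with 80% power: which alpha yields PPV = .9?
required_alpha = alpha_for_ppv(r=0.10, power=0.8, target_ppv=0.9)
print(round(required_alpha, 4))  # 0.0089 -- far below the conventional .05
```

Under these assumptions, lowering alpha well below .05 is exactly what a low-<italic>R</italic> hypothesis requires to yield a trustworthy significant result.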
<sec><title>A Heuristic Approach for Planning Interaction Research Based on R and Power</title>
<p>Researchers interested in planning their research project based on the PPV, <italic>R</italic>, power and α can use the algorithm depicted in <xref ref-type="fig" rid="f3">Figure 3</xref> for orientation. The left-hand side of <xref ref-type="fig" rid="f3">Figure 3</xref> presents a heuristic approach for approximating the strength of <italic>R</italic>. The right-hand side of the figure shows what levels of power and α are needed in order to obtain a specific PPV, based on the approximations of <italic>R</italic> made on the left-hand side. Thus, instead of solving <xref ref-type="disp-formula" rid="e1">Equation 1</xref> for individual values of <italic>R</italic> and power, <xref ref-type="fig" rid="f3">Figure 3</xref> presents a simplification for selected levels of <italic>R</italic> and power, showing the PPV for different combinations of <italic>R</italic>, power (1 – β) and α. We combine three values of <italic>R</italic> (.5, .25, and .1) with three values of power (.1, .5, and .8) and two values of α (.05 and .01). The impact of these estimates on the PPV is shown on the right-hand side of the flow chart. For example, for hypotheses with low <italic>R</italic>, a statistically significant result — even from high-powered studies — still leaves considerable uncertainty about the truth of the result. When α is lowered to .01, the PPV increases substantially, and particularly so for either low-power or low-<italic>R</italic> research, or for combinations thereof.</p><fig id="f3" position="anchor" fig-type="figure" orientation="portrait"><label>Figure 3</label><caption>
<title>A Heuristic Approach for Planning Research Based on <italic>R</italic>, the PPV and α</title></caption><graphic xlink:href="ejop.14957-f3" position="anchor" orientation="portrait"/></fig>
<p>However, low levels of power mean that researchers only have a small chance of finding an effect given that it exists, even when the corresponding PPVs can become large. For example, even when the combination of <italic>R</italic> = .25, power = .5 and α = .01 means that a significant result represents a true effect in 93% of cases, still only 50% of true effects become significant in the first place.</p>
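The trade-off between a high PPV and a low discovery rate can be made concrete by tallying expected outcomes in a hypothetical field. This is a sketch under the assumption that Equation 1 has the standard form PPV = power × R / (power × R + α), with R as the pre-study odds of true to false hypotheses and no bias; the function is our own illustration.

```python
def study_outcomes(r, power, alpha, n_hypotheses=1000):
    """Expected outcomes when a field tests n hypotheses with pre-study
    odds R (true : false) at the given power and alpha (no bias assumed)."""
    n_true = n_hypotheses * r / (1 + r)   # odds -> number of true effects
    n_false = n_hypotheses - n_true
    true_pos = power * n_true             # true effects found significant
    false_neg = n_true - true_pos         # true effects missed
    false_pos = alpha * n_false           # false effects found significant
    return true_pos, false_neg, false_pos, true_pos / (true_pos + false_pos)

tp, fn, fp, ppv = study_outcomes(r=0.25, power=0.5, alpha=0.01)
# of 200 true effects, 100 are found and 100 are missed; only about 8
# false positives occur, so PPV is approximately .93 despite 50% power
```

This reproduces the example in the text: a high PPV coexists with half of all true effects going undetected.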
<p>In this approach, <italic>R</italic> is influenced by two key factors (left-hand side of <xref ref-type="fig" rid="f3">Figure 3</xref>): (1) the plausibility of the hypothesis, and (2) indirect supporting evidence. Plausibility can be further divided into mechanistic plausibility and theoretical plausibility.</p>
<p><italic>Mechanistic plausibility</italic> assesses whether we know of a mechanism supporting the hypothesis. For example, in medical research, a physiological mechanism may be established in mice, before respective hypotheses are tested in humans. Likewise, in psychological research, a hypothesis that is based on an established process may seem more plausible than a hypothesis which is not based on an established process. <italic>Theoretical plausibility</italic> assesses whether a hypothesis is aligned with a well-established theoretical framework. For example, in line with the Theory of Planned Behavior, it seems more plausible that subjective norms have a positive association with intentions than that they have a negative association with intentions (<xref ref-type="bibr" rid="r1">Ajzen, 1991</xref>). Plausibility is related to the definition of <italic>R</italic> as the base rate of true effects tested in a field, because a field that tests more plausible effects will on average have a higher proportion of true effects than a field that tests less plausible effects.</p>
<p>Besides the plausibility of the hypothesis, assessing the extent of indirect evidence in support of the hypothesis can serve to inform estimates of <italic>R</italic>. Indirectness is a well-established concept in evidence-based medicine (EBM) (<xref ref-type="bibr" rid="r18">Guyatt et al., 2011</xref>). Indirectness can refer to <italic>Indirectness of Population</italic> (e.g., when gender moderates the effectiveness of a psychological intervention in a specific population, it is plausible to expect a respective interaction in a related population), <italic>Indirectness of Intervention</italic> (e.g., when gender moderates the effectiveness of a specific intervention, it is plausible to expect a respective interaction for a related intervention), and <italic>Indirectness of Outcome</italic> (e.g., when gender moderates the effectiveness of a specific intervention with regard to a specific dependent variable, it is plausible to expect a respective interaction with regard to a related dependent variable).<xref ref-type="fn" rid="fn6"><sup>6</sup></xref><fn id="fn6"><label>6</label>
<p>In EBM, the Indirectness of the Comparator can also be assessed. However, it seems difficult to find a psychological equivalent.</p></fn> Again, the relation to the definition of <italic>R</italic> is that a field that tests effects that are more strongly supported by indirect evidence will on average have a higher proportion of true effects than a field that tests effects that are less supported by indirect evidence.</p>
<p>Researchers interested in planning their research project based on the PPV, <italic>R</italic>, power and α can use the algorithm in <xref ref-type="fig" rid="f3">Figure 3</xref> as follows: First, they need to arrive at an estimate of <italic>R</italic> based on the mechanistic and theoretical plausibility of their hypothesis and indirect evidence in support of their hypothesis. We suggest adopting some generic levels of <italic>R</italic> depending on the plausibility and the availability of indirect evidence: When a hypothesis seems implausible and there is no indirect evidence, we suggest adopting a generic <italic>R</italic> of .10. When a hypothesis is either plausible, or there is indirect evidence, we suggest adopting a generic <italic>R</italic> of .25. When there is both plausibility and indirect evidence, we suggest adopting a generic <italic>R</italic> of .5. Of course, these values do not represent the “real <italic>R</italic>s” of the respective hypothesis, but they offer some orientation.</p>
<p>Second, researchers need to decide what level of power they want to achieve, both for optimizing power itself and with regard to the PPV. For this decision, it is helpful to consider what kind of interaction they expect (disordinal, ordinal, fully attenuated or partially attenuated), what kind of design they aim to realize (within- or between-participants), and the reliability of the dependent variable. Then, they can calculate the corresponding sample size. When the necessary sample size seems too large, researchers can try to switch from a between-participants to a within-participants design, or they can try to improve the dependent variable’s reliability. Third, they need to decide on an a priori α level in order to arrive at a PPV that they consider appropriate. The algorithm can also be used in reverse: When researchers read a paper reporting a significant result, they can try to estimate the relevant parameters and then decide how much trust to place in the result.</p>
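The steps above, including the reverse use, can be condensed into a small helper. The generic R values are those suggested in the text; the function names and the assumption that Equation 1 has the form PPV = power × R / (power × R + α) (R as pre-study odds, no bias) are ours.

```python
def generic_r(plausible, indirect_evidence):
    """Generic pre-study R suggested in the text: .10 with neither
    plausibility nor indirect evidence, .25 with exactly one of the
    two, and .50 with both."""
    return {0: 0.10, 1: 0.25, 2: 0.50}[int(plausible) + int(indirect_evidence)]

def trust_in_result(plausible, indirect_evidence, power, alpha):
    """Reverse use of the algorithm: estimate the PPV of a reported
    significant result from the reader's estimates of R, power and alpha."""
    r = generic_r(plausible, indirect_evidence)
    return power * r / (power * r + alpha)

# a significant interaction from a plausible hypothesis with indirect
# evidence, but from a study with an estimated 30% power at alpha = .05:
print(round(trust_in_result(True, True, power=0.3, alpha=0.05), 2))  # 0.75
```

Even under the most favorable generic <italic>R</italic>, low power leaves a one-in-four chance that the significant result is a false positive in this sketch.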
<p>Finally, researchers might take into account the recommendations made by <xref ref-type="bibr" rid="r29">Rohrer and Arslan (2021)</xref>, who show that researching interactions may often require more specific research questions than researchers may be aware of.</p></sec></sec>
<sec sec-type="other"><title>Potential Objections, Potential Misunderstandings, Limitations and Unintended Consequences</title>
<p>As we have tried to make clear throughout the manuscript, we do not intend to discourage researchers from researching interactions, from investigating risky predictions, or from testing novel hypotheses. Quite the contrary, we hope to contribute a framework that is able to support and structure researchers’ work in these areas; and we hope to correct an overly pessimistic and somewhat diffuse assessment of research into interaction effects (e.g., <xref ref-type="bibr" rid="r16">Gelman et al., 2021</xref>). Interactions are at the heart of psychological theorizing, research and application, not least because behavior has fundamentally been understood as an interaction between person and situation (<xref ref-type="bibr" rid="r14">Furr &amp; Funder, 2021</xref>; <xref ref-type="bibr" rid="r24">Lewin, 1951</xref>). Consequently, the very idea that psychologists should refrain from research on interactions seems nonsensical and potentially disastrous to us. Thus, we agree with Rohrer and Arslan when they write “instead of putting interaction research on hiatus, we should strive for improved interaction research” (<xref ref-type="bibr" rid="r29">Rohrer &amp; Arslan, 2021</xref>, p. 14). In the remainder of the manuscript, we aim to anticipate potential objections to the current approach and to discuss some limitations.</p>
<p>Some authors have criticized the discussion on methodological practices in psychology in general, and the discussion on psychology’s replication rates in particular, for overly focusing on false-positive findings while neglecting the costs of false-negative ones (<xref ref-type="bibr" rid="r12">Fiedler et al., 2012</xref>). We would like to point out that the present considerations address both concerns. First, high power is a remedy against both false negatives and false positives. That is, fields with higher power have on average fewer false-positive and fewer false-negative effects than fields with lower power (<xref ref-type="bibr" rid="r5">Button et al., 2013</xref>; <xref ref-type="bibr" rid="r37">Wilson &amp; Wixted, 2018</xref>). Furthermore, considering assumed effects’ <italic>R</italic> can be helpful for avoiding false negatives and false positives (<xref ref-type="bibr" rid="r5">Button et al., 2013</xref>; <xref ref-type="bibr" rid="r37">Wilson &amp; Wixted, 2018</xref>).</p>
<p>Even more generally, some authors have criticized the discussion on methodological practices in psychology for neglecting theoretical aspects (e.g., <xref ref-type="bibr" rid="r10">Fiedler, 2018</xref>). In this context, we would like to emphasize that we do not perceive the present article to be primarily or even purely methodological in nature. Instead, with this article we attempt to demonstrate how theory and methodology are inextricably linked when researching interactions. For example, it is impossible to plan a sample size based on a hypothesized interaction’s shape when there is no theory suitable for predicting that shape. Likewise, only properly formulated theories allow for considering an effect’s <italic>R</italic>. In both these examples, theoretical considerations are actually superordinate to methodological ones (<xref ref-type="bibr" rid="r11">Fiedler et al., 2018</xref>): First researchers need to think about their theory, then they can plan their study. However, only high-quality theories can be useful for making reliable predictions and deriving diagnostic hypotheses about an interaction’s shape or for informing <italic>R</italic>s (<xref ref-type="bibr" rid="r9">Fiedler, 2017</xref>). Thus, theory construction and formulation are of utmost importance in the present context (again, see <xref ref-type="bibr" rid="r29">Rohrer &amp; Arslan, 2021</xref>, for a similar point). Indeed, we agree with <xref ref-type="bibr" rid="r9">Fiedler (2017</xref>, p. 1) that only a “theory-driven cumulative science” can hope to successfully solve the apparent dilemma of how to deal with potentially important, but low-<italic>R</italic> research. With the present paper, we aim to contribute to this endeavor.</p>
<p>Furthermore, although the present paper mostly refers to confirmatory research, we do not aim to devalue exploratory research. Instead, we believe that exploratory research is highly relevant for progress in psychological research, and that it should play a more prominent role (<xref ref-type="bibr" rid="r9">Fiedler, 2017</xref>). Indeed, it seems possible to argue that the alleged crisis in psychological research is partly due to a disregard for exploratory research in psychology, forcing researchers who engaged in valuable exploratory research to disguise it as being confirmatory. Thus, clearly separating exploratory and confirmatory research while appreciating both seems to be crucial.</p>
<p>In the present paper, we sometimes oversimplify matters in order to facilitate the presentation of our arguments and thus make the manuscript more accessible. For example, in some instances, we employ binary reasoning instead of the more appropriate continuous reasoning. Thus, we refer to <italic>R</italic> and power as being low or high, although both <italic>R</italic> and power vary continuously from 0 to 1. Consequently, of course, the proposed grid does not represent four kinds of qualitatively different effects. Instead, it represents a simplification of the underlying continuous reasoning. The same applies to the nature of interactions. In order to simplify matters, recent papers on interactions often refer to categorical predictors in a 2 x 2 design (e.g., <xref ref-type="bibr" rid="r21">Lakens, 2020</xref>; <xref ref-type="bibr" rid="r34">Sommet et al., 2023</xref>). Accordingly, software for calculating sample sizes for interaction research may also be limited to categorical predictors in a 2 x 2 design. Future work might try to address these limitations by addressing more complex designs and continuous moderators.</p>
<p>Throughout the present paper, we refer to effects as either existing or not, and to findings being false-positive or false-negative. We are aware that some authors reject the concepts of false-positive and false-negative findings or Type-1 and Type-2 errors because they invite researchers to think about theories, hypotheses and effects in a binary manner (e.g., an effect is true or a hypothesis is rejected), although effects vary and the evidence for effects is continuous (e.g., <xref ref-type="bibr" rid="r15">Gelman, 2013</xref>). From this perspective, instead of determining whether an effect exists or not, it makes more sense to try to estimate the probabilities of varying sizes of an effect depending on different values of additional parameters (<xref ref-type="bibr" rid="r15">Gelman, 2013</xref>; <xref ref-type="bibr" rid="r17">Gelman et al., 2012</xref>). This does not render reflections about sample sizes and prior probabilities (in this case of different sizes of effects) obsolete; quite the contrary. For example, in the context of Bayesian modelling, choosing and justifying prior distributions of potential effect sizes are crucial steps (<xref ref-type="bibr" rid="r35">van de Schoot et al., 2021</xref>).</p>
<p>Indeed, this might be a substantial objection towards the present manuscript: Why retain the paradigm of NHST instead of abandoning it and turning to Bayesian modelling entirely? While some may consider this to be the most fruitful option for psychological research (e.g., <xref ref-type="bibr" rid="r36">Wagenmakers et al., 2018</xref>), we do not consider it likely that psychologists will abandon NHST entirely any time soon. Thus, just like other authors (e.g., <xref ref-type="bibr" rid="r4">Benjamin et al., 2018</xref>; <xref ref-type="bibr" rid="r22">Lakens, 2021</xref>), we consider it worthwhile to contribute to the methodological approach that will probably be around for quite some time. However, thinking about the pre-study probabilities of their effects being true might gently introduce psychologists to a more Bayesian way of thinking without them having to make a ‘hard transition’ from frequentist to Bayesian statistics in the first place. We hope that in the future, researchers in psychology will use the presented framework in order to combine theoretical and statistical considerations when conducting or interpreting research on interactions. This will hopefully contribute to high-quality research on interactions, a line of research that we consider to be essential in order to develop robust knowledge in both basic and applied psychology.</p>
</sec>
</body>
<back>
	
	<fn-group content-type="author-contribution">
		<fn fn-type="con">
			<p><italic>Geoffrey Schweizer</italic>: Conceptualisation, Writing - original draft, Supervision. <italic>Maximilian Köppel</italic>: Conceptualisation, Writing - original draft, Software.
			</p>
		</fn>
	</fn-group>
	
<ref-list><title>References</title>
<ref id="r1"><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name name-style="western"><surname>Ajzen</surname>, <given-names>I.</given-names></string-name></person-group> (<year>1991</year>). <article-title>The theory of planned behavior.</article-title> <source>Organizational Behavior and Human Decision Processes</source>, <volume>50</volume>(<issue>2</issue>), <fpage>179</fpage>–<lpage>211</lpage>. <pub-id pub-id-type="doi">10.1016/0749-5978(91)90020-T</pub-id></mixed-citation></ref>
<ref id="r2"><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name name-style="western"><surname>Altmejd</surname>, <given-names>A.</given-names></string-name>, <string-name name-style="western"><surname>Dreber</surname>, <given-names>A.</given-names></string-name>, <string-name name-style="western"><surname>Forsell</surname>, <given-names>E.</given-names></string-name>, <string-name name-style="western"><surname>Huber</surname>, <given-names>J.</given-names></string-name>, <string-name name-style="western"><surname>Imai</surname>, <given-names>T.</given-names></string-name>, <string-name name-style="western"><surname>Johannesson</surname>, <given-names>M.</given-names></string-name>, <string-name name-style="western"><surname>Kirchler</surname>, <given-names>M.</given-names></string-name>, <string-name name-style="western"><surname>Nave</surname>, <given-names>G.</given-names></string-name>, &amp; <string-name name-style="western"><surname>Camerer</surname>, <given-names>C.</given-names></string-name></person-group> (<year>2019</year>). <article-title>Predicting the replicability of social science lab experiments.</article-title> <source>PLoS One</source>, <volume>14</volume>(<issue>12</issue>), <elocation-id>e0225826</elocation-id>. <pub-id pub-id-type="doi">10.1371/journal.pone.0225826</pub-id><pub-id pub-id-type="pmid">31805105</pub-id></mixed-citation></ref>
<ref id="r3"><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name name-style="western"><surname>Blake</surname>, <given-names>K. R.</given-names></string-name>, &amp; <string-name name-style="western"><surname>Gangestad</surname>, <given-names>S.</given-names></string-name></person-group> (<year>2020</year>). <article-title>On attenuated interactions, measurement error, and statistical power: Guidelines for social and personality psychologists.</article-title> <source>Personality &amp; Social Psychology Bulletin</source>, <volume>46</volume>(<issue>12</issue>), <fpage>1702</fpage>–<lpage>1711</lpage>. <pub-id pub-id-type="doi">10.1177/0146167220913363</pub-id><pub-id pub-id-type="pmid">32208875</pub-id></mixed-citation></ref>
<ref id="r4"><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name name-style="western"><surname>Benjamin</surname>, <given-names>D. J.</given-names></string-name>, <string-name name-style="western"><surname>Berger</surname>, <given-names>J. O.</given-names></string-name>, <string-name name-style="western"><surname>Johannesson</surname>, <given-names>M.</given-names></string-name>, <string-name name-style="western"><surname>Nosek</surname>, <given-names>B. A.</given-names></string-name>, <string-name name-style="western"><surname>Wagenmakers</surname>, <given-names>E.-J.</given-names></string-name>, <string-name name-style="western"><surname>Berk</surname>, <given-names>R.</given-names></string-name>, <string-name name-style="western"><surname>Bollen</surname>, <given-names>K. A.</given-names></string-name>, <string-name name-style="western"><surname>Brembs</surname>, <given-names>B.</given-names></string-name>, <string-name name-style="western"><surname>Brown</surname>, <given-names>L.</given-names></string-name>, <string-name name-style="western"><surname>Camerer</surname>, <given-names>C.</given-names></string-name>, <string-name name-style="western"><surname>Cesarini</surname>, <given-names>D.</given-names></string-name>, <string-name name-style="western"><surname>Chambers</surname>, <given-names>C. D.</given-names></string-name>, <string-name name-style="western"><surname>Clyde</surname>, <given-names>M.</given-names></string-name>, <string-name name-style="western"><surname>Cook</surname>, <given-names>T. 
D.</given-names></string-name>, <string-name name-style="western"><surname>De Boeck</surname>, <given-names>P.</given-names></string-name>, <string-name name-style="western"><surname>Dienes</surname>, <given-names>Z.</given-names></string-name>, <string-name name-style="western"><surname>Dreber</surname>, <given-names>A.</given-names></string-name>, <string-name name-style="western"><surname>Easwaran</surname>, <given-names>K.</given-names></string-name>, <string-name name-style="western"><surname>Efferson</surname>, <given-names>C.</given-names></string-name>, <etal>. . .</etal> <string-name name-style="western"><surname>Johnson</surname>, <given-names>V. E.</given-names></string-name></person-group> (<year>2018</year>). <article-title>Redefine statistical significance.</article-title> <source>Nature Human Behaviour</source>, <volume>2</volume>, <fpage>6</fpage>–<lpage>10</lpage>. <pub-id pub-id-type="doi">10.1038/s41562-017-0189-z</pub-id><pub-id pub-id-type="pmid">30980045</pub-id></mixed-citation></ref>
<ref id="r5"><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name name-style="western"><surname>Button</surname>, <given-names>K. S.</given-names></string-name>, <string-name name-style="western"><surname>Ioannidis</surname>, <given-names>J. P. A.</given-names></string-name>, <string-name name-style="western"><surname>Mokrysz</surname>, <given-names>C.</given-names></string-name>, <string-name name-style="western"><surname>Nosek</surname>, <given-names>B. A.</given-names></string-name>, <string-name name-style="western"><surname>Flint</surname>, <given-names>J.</given-names></string-name>, <string-name name-style="western"><surname>Robinson</surname>, <given-names>E. S. J.</given-names></string-name>, &amp; <string-name name-style="western"><surname>Munafò</surname>, <given-names>M. R.</given-names></string-name></person-group> (<year>2013</year>). <article-title>Power failure: Why small sample sizes undermine the reliability of neuroscience.</article-title> <source>Nature Reviews Neuroscience</source>, <volume>14</volume>, <fpage>365</fpage>–<lpage>376</lpage>. <pub-id pub-id-type="doi">10.1038/nrn3475</pub-id><pub-id pub-id-type="pmid">23571845</pub-id></mixed-citation></ref>
<ref id="r6"><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name name-style="western"><surname>Camerer</surname>, <given-names>C. F.</given-names></string-name>, <string-name name-style="western"><surname>Dreber</surname>, <given-names>A.</given-names></string-name>, <string-name name-style="western"><surname>Forsell</surname>, <given-names>E.</given-names></string-name>, <string-name name-style="western"><surname>Ho</surname>, <given-names>T.-H.</given-names></string-name>, <string-name name-style="western"><surname>Huber</surname>, <given-names>J.</given-names></string-name>, <string-name name-style="western"><surname>Johannesson</surname>, <given-names>M.</given-names></string-name>, <string-name name-style="western"><surname>Kirchler</surname>, <given-names>M.</given-names></string-name>, <string-name name-style="western"><surname>Almenberg</surname>, <given-names>J.</given-names></string-name>, <string-name name-style="western"><surname>Altmejd</surname>, <given-names>A.</given-names></string-name>, <string-name name-style="western"><surname>Chan</surname>, <given-names>T.</given-names></string-name>, <string-name name-style="western"><surname>Heikensten</surname>, <given-names>E.</given-names></string-name>, <string-name name-style="western"><surname>Holzmeister</surname>, <given-names>F.</given-names></string-name>, <string-name name-style="western"><surname>Imai</surname>, <given-names>T.</given-names></string-name>, <string-name name-style="western"><surname>Isaksson</surname>, <given-names>S.</given-names></string-name>, <string-name name-style="western"><surname>Nave</surname>, <given-names>G.</given-names></string-name>, <string-name name-style="western"><surname>Pfeiffer</surname>, <given-names>T.</given-names></string-name>, <string-name name-style="western"><surname>Razen</surname>, <given-names>M.</given-names></string-name>, &amp; <string-name name-style="western"><surname>Wu</surname>,
<given-names>H.</given-names></string-name></person-group> (<year>2016</year>). <article-title>Evaluating replicability of laboratory experiments in economics.</article-title> <source>Science</source>, <volume>351</volume>, <fpage>1433</fpage>–<lpage>1436</lpage>. <pub-id pub-id-type="doi">10.1126/science.aaf0918</pub-id><pub-id pub-id-type="pmid">26940865</pub-id></mixed-citation></ref>
<ref id="r7"><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name name-style="western"><surname>Camerer</surname>, <given-names>C. F.</given-names></string-name>, <string-name name-style="western"><surname>Dreber</surname>, <given-names>A.</given-names></string-name>, <string-name name-style="western"><surname>Holzmeister</surname>, <given-names>F.</given-names></string-name>, <string-name name-style="western"><surname>Ho</surname>, <given-names>T.-H.</given-names></string-name>, <string-name name-style="western"><surname>Huber</surname>, <given-names>J.</given-names></string-name>, <string-name name-style="western"><surname>Johannesson</surname>, <given-names>M.</given-names></string-name>, <string-name name-style="western"><surname>Kirchler</surname>, <given-names>M.</given-names></string-name>, <string-name name-style="western"><surname>Nave</surname>, <given-names>G.</given-names></string-name>, <string-name name-style="western"><surname>Nosek</surname>, <given-names>B.
A.</given-names></string-name>, <string-name name-style="western"><surname>Pfeiffer</surname>, <given-names>T.</given-names></string-name>, <string-name name-style="western"><surname>Altmejd</surname>, <given-names>A.</given-names></string-name>, <string-name name-style="western"><surname>Buttrick</surname>, <given-names>N.</given-names></string-name>, <string-name name-style="western"><surname>Chan</surname>, <given-names>T.</given-names></string-name>, <string-name name-style="western"><surname>Chen</surname>, <given-names>Y.</given-names></string-name>, <string-name name-style="western"><surname>Forsell</surname>, <given-names>E.</given-names></string-name>, <string-name name-style="western"><surname>Gampa</surname>, <given-names>A.</given-names></string-name>, <string-name name-style="western"><surname>Heikensten</surname>, <given-names>E.</given-names></string-name>, <string-name name-style="western"><surname>Hummer</surname>, <given-names>L.</given-names></string-name>, <string-name name-style="western"><surname>Imai</surname>, <given-names>T.</given-names></string-name>, <etal>. . .</etal> <string-name name-style="western"><surname>Wu</surname>, <given-names>H.</given-names></string-name></person-group> (<year>2018</year>). <article-title>Evaluating the replicability of social science experiments in Nature and Science between 2010 and 2015.</article-title> <source>Nature Human Behaviour</source>, <volume>2</volume>, <fpage>637</fpage>–<lpage>644</lpage>. <pub-id pub-id-type="doi">10.1038/s41562-018-0399-z</pub-id><pub-id pub-id-type="pmid">31346273</pub-id></mixed-citation></ref>
<ref id="r8"><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name name-style="western"><surname>Ebersole</surname>, <given-names>C. R.</given-names></string-name>, <string-name name-style="western"><surname>Atherton</surname>, <given-names>O. E.</given-names></string-name>, <string-name name-style="western"><surname>Belanger</surname>, <given-names>A. L.</given-names></string-name>, <string-name name-style="western"><surname>Skulborstad</surname>, <given-names>H. M.</given-names></string-name>, <string-name name-style="western"><surname>Allen</surname>, <given-names>J. M.</given-names></string-name>, <string-name name-style="western"><surname>Banks</surname>, <given-names>J. B.</given-names></string-name>, <string-name name-style="western"><surname>Baranski</surname>, <given-names>E.</given-names></string-name>, <string-name name-style="western"><surname>Bernstein</surname>, <given-names>M. J.</given-names></string-name>, <string-name name-style="western"><surname>Bonfiglio</surname>, <given-names>D. B. V.</given-names></string-name>, <string-name name-style="western"><surname>Boucher</surname>, <given-names>L.</given-names></string-name>, <string-name name-style="western"><surname>Brown</surname>, <given-names>E. R.</given-names></string-name>, <string-name name-style="western"><surname>Budiman</surname>, <given-names>N. I.</given-names></string-name>, <string-name name-style="western"><surname>Cairo</surname>, <given-names>A. H.</given-names></string-name>, <string-name name-style="western"><surname>Capaldi</surname>, <given-names>C. A.</given-names></string-name>, <string-name name-style="western"><surname>Chartier</surname>, <given-names>C. R.</given-names></string-name>, <string-name name-style="western"><surname>Chung</surname>, <given-names>J. M.</given-names></string-name>, <string-name name-style="western"><surname>Cicero</surname>, <given-names>D. 
C.</given-names></string-name>, <string-name name-style="western"><surname>Coleman</surname>, <given-names>J. A.</given-names></string-name>, <string-name name-style="western"><surname>Conway</surname>, <given-names>J. G.</given-names></string-name>, <etal>. . .</etal> <string-name name-style="western"><surname>Nosek</surname>, <given-names>B. A.</given-names></string-name></person-group> (<year>2016</year>). <article-title>Many Labs 3: Evaluating participant pool quality across the academic semester via replication.</article-title> <source>Journal of Experimental Social Psychology</source>, <volume>67</volume>, <fpage>68</fpage>–<lpage>82</lpage>. <pub-id pub-id-type="doi">10.1016/j.jesp.2015.10.012</pub-id></mixed-citation></ref>
<ref id="r9"><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name name-style="western"><surname>Fiedler</surname>, <given-names>K.</given-names></string-name></person-group> (<year>2017</year>). <article-title>What constitutes strong psychological science? The (neglected) role of diagnosticity and a priori theorizing.</article-title> <source>Perspectives on Psychological Science: A Journal of the Association for Psychological Science</source>, <volume>12</volume>(<issue>1</issue>), <fpage>46</fpage>–<lpage>61</lpage>. <pub-id pub-id-type="doi">10.1177/1745691616654458</pub-id><pub-id pub-id-type="pmid">28073328</pub-id></mixed-citation></ref>
<ref id="r10"><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name name-style="western"><surname>Fiedler</surname>, <given-names>K.</given-names></string-name></person-group> (<year>2018</year>). <article-title>The creative cycle and the growth of psychological science.</article-title> <source>Perspectives on Psychological Science: A Journal of the Association for Psychological Science</source>, <volume>13</volume>, <fpage>433</fpage>–<lpage>438</lpage>. <pub-id pub-id-type="doi">10.1177/1745691617745651</pub-id><pub-id pub-id-type="pmid">29961416</pub-id></mixed-citation></ref>
<ref id="r11"><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name name-style="western"><surname>Fiedler</surname>, <given-names>K.</given-names></string-name>, <string-name name-style="western"><surname>Harris</surname>, <given-names>C.</given-names></string-name>, &amp; <string-name name-style="western"><surname>Schott</surname>, <given-names>M.</given-names></string-name></person-group> (<year>2018</year>). <article-title>Unwarranted inferences from statistical mediation tests — An analysis of articles published in 2015.</article-title> <source>Journal of Experimental Social Psychology</source>, <volume>75</volume>, <fpage>95</fpage>–<lpage>102</lpage>. <pub-id pub-id-type="doi">10.1016/j.jesp.2017.11.008</pub-id></mixed-citation></ref>
<ref id="r12"><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name name-style="western"><surname>Fiedler</surname>, <given-names>K.</given-names></string-name>, <string-name name-style="western"><surname>Kutzner</surname>, <given-names>F.</given-names></string-name>, &amp; <string-name name-style="western"><surname>Krüger</surname>, <given-names>J. I.</given-names></string-name></person-group> (<year>2012</year>). <article-title>The long way from α-error control to validity proper: Problems with a short-sighted false-positive debate.</article-title> <source>Perspectives on Psychological Science: A Journal of the Association for Psychological Science</source>, <volume>7</volume>(<issue>6</issue>), <fpage>661</fpage>–<lpage>669</lpage>. <pub-id pub-id-type="doi">10.1177/1745691612462587</pub-id><pub-id pub-id-type="pmid">26168128</pub-id></mixed-citation></ref>
	<ref id="r13"><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name name-style="western"><surname>Fraley</surname>, <given-names>R. C.</given-names></string-name>, &amp; <string-name name-style="western"><surname>Vazire</surname>, <given-names>S.</given-names></string-name></person-group> (<year>2014</year>). <article-title>The N-Pact Factor: Evaluating the quality of empirical journals with respect to sample size and statistical power.</article-title> <source>PLoS One</source>, <volume>9</volume>(<issue>10</issue>), <elocation-id>e109019</elocation-id>. <pub-id pub-id-type="doi">10.1371/journal.pone.0109019</pub-id><pub-id pub-id-type="pmid">25296159</pub-id></mixed-citation></ref>
<ref id="r14"><mixed-citation publication-type="book">Furr, R. M., &amp; Funder, D. C. (2021). Persons, situations, and person–situation interactions. In O. P. John &amp; R. W. Robins (Eds.), <italic>Handbook of personality: Theory and research</italic> (pp. 667–685). Guilford Press.</mixed-citation></ref>
<ref id="r15"><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name name-style="western"><surname>Gelman</surname>, <given-names>A.</given-names></string-name></person-group> (<year>2013</year>). <article-title>Interrogating <italic>p</italic>-values.</article-title> <source>Journal of Mathematical Psychology</source>, <volume>57</volume>(<issue>5</issue>), <fpage>188</fpage>–<lpage>189</lpage>. <pub-id pub-id-type="doi">10.1016/j.jmp.2013.03.005</pub-id></mixed-citation></ref>
<ref id="r16"><mixed-citation publication-type="book">Gelman, A., Hill, J., &amp; Vehtari, A. (2021). <italic>Regression and other stories</italic>. Cambridge University Press.</mixed-citation></ref>
<ref id="r17"><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name name-style="western"><surname>Gelman</surname>, <given-names>A.</given-names></string-name>, <string-name name-style="western"><surname>Hill</surname>, <given-names>J.</given-names></string-name>, &amp; <string-name name-style="western"><surname>Yajima</surname>, <given-names>M.</given-names></string-name></person-group> (<year>2012</year>). <article-title>Why we (usually) don’t have to worry about multiple comparisons.</article-title> <source>Journal of Research on Educational Effectiveness</source>, <volume>5</volume>(<issue>2</issue>), <fpage>189</fpage>–<lpage>211</lpage>. <pub-id pub-id-type="doi">10.1080/19345747.2011.618213</pub-id></mixed-citation></ref>
<ref id="r18"><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name name-style="western"><surname>Guyatt</surname>, <given-names>G. H.</given-names></string-name>, <string-name name-style="western"><surname>Oxman</surname>, <given-names>A. D.</given-names></string-name>, <string-name name-style="western"><surname>Kunz</surname>, <given-names>R.</given-names></string-name>, <string-name name-style="western"><surname>Woodcock</surname>, <given-names>J.</given-names></string-name>, <string-name name-style="western"><surname>Brozek</surname>, <given-names>J.</given-names></string-name>, <string-name name-style="western"><surname>Helfand</surname>, <given-names>M.</given-names></string-name>, <string-name name-style="western"><surname>Alonso-Coello</surname>, <given-names>P.</given-names></string-name>, <string-name name-style="western"><surname>Falck-Ytter</surname>, <given-names>Y.</given-names></string-name>, <string-name name-style="western"><surname>Jaeschke</surname>, <given-names>R.</given-names></string-name>, <string-name name-style="western"><surname>Vist</surname>, <given-names>G.</given-names></string-name>, <string-name name-style="western"><surname>Akl</surname>, <given-names>E. A.</given-names></string-name>, <string-name name-style="western"><surname>Post</surname>, <given-names>P. N.</given-names></string-name>, <string-name name-style="western"><surname>Norris</surname>, <given-names>S.</given-names></string-name>, <string-name name-style="western"><surname>Meerpohl</surname>, <given-names>J.</given-names></string-name>, <string-name name-style="western"><surname>Shukla</surname>, <given-names>V. K.</given-names></string-name>, <string-name name-style="western"><surname>Nasser</surname>, <given-names>M.</given-names></string-name>, <string-name name-style="western"><surname>Schünemann</surname>, <given-names>H. J.</given-names></string-name>, &amp; the <collab>GRADE Working Group</collab></person-group>. 
(<year>2011</year>). <article-title>GRADE guidelines: 8. Rating the quality of evidence — indirectness.</article-title> <source>Journal of Clinical Epidemiology</source>, <volume>64</volume>(<issue>12</issue>), <fpage>1303</fpage>–<lpage>1310</lpage>. <pub-id pub-id-type="doi">10.1016/j.jclinepi.2011.04.014</pub-id><pub-id pub-id-type="pmid">21802903</pub-id></mixed-citation></ref>
	<ref id="r19"><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name name-style="western"><surname>Ioannidis</surname>, <given-names>J. P. A.</given-names></string-name></person-group> (<year>2005</year>). <article-title>Why most published research findings are false.</article-title> <source>PLoS Medicine</source>, <volume>2</volume>(<issue>8</issue>), <elocation-id>e124</elocation-id>. <pub-id pub-id-type="doi">10.1371/journal.pmed.0020124</pub-id><pub-id pub-id-type="pmid">16060722</pub-id></mixed-citation></ref>
<ref id="r20"><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name name-style="western"><surname>Klein</surname>, <given-names>R. A.</given-names></string-name>, <string-name name-style="western"><surname>Ratliff</surname>, <given-names>K. A.</given-names></string-name>, <string-name name-style="western"><surname>Vianello</surname>, <given-names>M.</given-names></string-name>, <string-name name-style="western"><surname>Adams</surname>, <given-names>R. B.</given-names>, <suffix>Jr</suffix></string-name>., <string-name name-style="western"><surname>Bahník</surname>, <given-names>S.</given-names></string-name>, <string-name name-style="western"><surname>Bernstein</surname>, <given-names>M. J.</given-names></string-name>, <string-name name-style="western"><surname>Bocian</surname>, <given-names>K.</given-names></string-name>, <string-name name-style="western"><surname>Brandt</surname>, <given-names>M. J.</given-names></string-name>, <string-name name-style="western"><surname>Brooks</surname>, <given-names>B.</given-names></string-name>, <string-name name-style="western"><surname>Brumbaugh</surname>, <given-names>C. C.</given-names></string-name>, <string-name name-style="western"><surname>Cemalcilar</surname>, <given-names>Z.</given-names></string-name>, <string-name name-style="western"><surname>Chandler</surname>, <given-names>J.</given-names></string-name>, <string-name name-style="western"><surname>Cheong</surname>, <given-names>W.</given-names></string-name>, <string-name name-style="western"><surname>Davis</surname>, <given-names>W. 
E.</given-names></string-name>, <string-name name-style="western"><surname>Devos</surname>, <given-names>T.</given-names></string-name>, <string-name name-style="western"><surname>Eisner</surname>, <given-names>M.</given-names></string-name>, <string-name name-style="western"><surname>Frankowska</surname>, <given-names>N.</given-names></string-name>, <string-name name-style="western"><surname>Furrow</surname>, <given-names>D.</given-names></string-name>, <string-name name-style="western"><surname>Galliani</surname>, <given-names>E. N.</given-names></string-name>, <etal>. . .</etal> <string-name name-style="western"><surname>Nosek</surname>, <given-names>B. A.</given-names></string-name></person-group> (<year>2014</year>). <article-title>Investigating variation in replicability. A “Many Labs” replication project.</article-title> <source>Social Psychology (Göttingen)</source>, <volume>45</volume>(<issue>3</issue>), <fpage>142</fpage>–<lpage>152</lpage>. <pub-id pub-id-type="doi">10.1027/1864-9335/a000178</pub-id></mixed-citation></ref>
<ref id="r21"><mixed-citation publication-type="web">Lakens, D. (2020, March 29). Effect sizes and power for interactions in ANOVA designs [Web log post]. <ext-link ext-link-type="uri" xlink:href="https://daniellakens.blogspot.com/2020/03/effect-sizes-and-power-for-interactions.html">https://daniellakens.blogspot.com/2020/03/effect-sizes-and-power-for-interactions.html</ext-link></mixed-citation></ref>
<ref id="r22"><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name name-style="western"><surname>Lakens</surname>, <given-names>D.</given-names></string-name></person-group> (<year>2021</year>). <article-title>The practical alternative to the <italic>p</italic> value is the correctly used <italic>p</italic> value.</article-title> <source>Perspectives on Psychological Science: A Journal of the Association for Psychological Science</source>, <volume>16</volume>(<issue>3</issue>), <fpage>639</fpage>–<lpage>648</lpage>. <pub-id pub-id-type="doi">10.1177/1745691620958012</pub-id><pub-id pub-id-type="pmid">33560174</pub-id></mixed-citation></ref>
<ref id="r23"><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name name-style="western"><surname>Lakens</surname>, <given-names>D.</given-names></string-name>, &amp; <string-name name-style="western"><surname>Caldwell</surname>, <given-names>A.</given-names></string-name></person-group> (<year>2021</year>). <article-title>Simulation-based power analysis for factorial analysis of variance designs.</article-title> <source>Advances in Methods and Practices in Psychological Science</source>, <volume>4</volume>(<issue>1</issue>). <pub-id pub-id-type="doi">10.1177/2515245920951503</pub-id></mixed-citation></ref>
<ref id="r24"><mixed-citation publication-type="book">Lewin, K. (1951). <italic>Field theory in social science</italic>. Harper &amp; Brothers.</mixed-citation></ref>
<ref id="r25"><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name name-style="western"><surname>MacKinnon</surname>, <given-names>D. P.</given-names></string-name>, <string-name name-style="western"><surname>Fairchild</surname>, <given-names>A. J.</given-names></string-name>, &amp; <string-name name-style="western"><surname>Fritz</surname>, <given-names>M. S.</given-names></string-name></person-group> (<year>2007</year>). <article-title>Mediation analysis.</article-title> <source>Annual Review of Psychology</source>, <volume>58</volume>, <fpage>593</fpage>–<lpage>614</lpage>. <pub-id pub-id-type="doi">10.1146/annurev.psych.58.110405.085542</pub-id><pub-id pub-id-type="pmid">16968208</pub-id></mixed-citation></ref>
<ref id="r26"><mixed-citation publication-type="book">Maxwell, S. E., Delaney, H. D., &amp; Kelley, K. (2018). <italic>Designing experiments and analyzing data: A model comparison perspective</italic> (3<sup>rd</sup> ed.). Routledge.</mixed-citation></ref>
<ref id="r27"><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name name-style="western"><surname>Miller</surname>, <given-names>J.</given-names></string-name>, &amp; <string-name name-style="western"><surname>Ulrich</surname>, <given-names>R.</given-names></string-name></person-group> (<year>2016</year>). <article-title>Optimizing research payoff.</article-title> <source>Perspectives on Psychological Science: A Journal of the Association for Psychological Science</source>, <volume>11</volume>(<issue>5</issue>), <fpage>664</fpage>–<lpage>691</lpage>. <pub-id pub-id-type="doi">10.1177/1745691616649170</pub-id><pub-id pub-id-type="pmid">27694463</pub-id></mixed-citation></ref>
	<ref id="r28"><mixed-citation publication-type="journal"><person-group person-group-type="author"><collab>Open Science Collaboration</collab></person-group>. (<year>2015</year>). <article-title>Estimating the reproducibility of psychological science.</article-title> <source>Science</source>, <volume>349</volume>(<issue>6251</issue>), <elocation-id>aac4716</elocation-id>. <pub-id pub-id-type="doi">10.1126/science.aac4716</pub-id><pub-id pub-id-type="pmid">26315443</pub-id></mixed-citation></ref>
<ref id="r29"><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name name-style="western"><surname>Rohrer</surname>, <given-names>J. M.</given-names></string-name>, &amp; <string-name name-style="western"><surname>Arslan</surname>, <given-names>R. C.</given-names></string-name></person-group> (<year>2021</year>). <article-title>Precise answers to vague questions: Issues with interactions.</article-title> <source>Advances in Methods and Practices in Psychological Science</source>, <volume>4</volume>(<issue>2</issue>). <pub-id pub-id-type="doi">10.1177/25152459211007368</pub-id></mixed-citation></ref>
	<ref id="r30"><mixed-citation publication-type="data">Schweizer, G., &amp; Köppel, M. (2025a). <italic>Appendices A &amp; B: Replication projects’ inclusion of interactional hypotheses and simulation.</italic> PsychOpen GOLD. <pub-id pub-id-type="doi">10.23668/psycharchives.21159</pub-id></mixed-citation></ref>
<ref id="r31"><mixed-citation publication-type="data">Schweizer, G., &amp; Köppel, M. (2025b). <italic>Code for: In search of the lost interaction: A theoretical and methodological framework for researching interactions.</italic> PsychOpen GOLD. <pub-id pub-id-type="doi">10.23668/psycharchives.16486</pub-id></mixed-citation></ref>
<ref id="r32"><mixed-citation publication-type="web">Simonsohn, U. (2014, March 12). <italic>No-way interactions</italic> [Web log post]. DataColada. <ext-link ext-link-type="uri" xlink:href="http://datacolada.org/17">http://datacolada.org/17</ext-link></mixed-citation></ref>
<ref id="r33"><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name name-style="western"><surname>Simonsohn</surname>, <given-names>U.</given-names></string-name>, <string-name name-style="western"><surname>Nelson</surname>, <given-names>L. D.</given-names></string-name>, &amp; <string-name name-style="western"><surname>Simmons</surname>, <given-names>J. P.</given-names></string-name></person-group> (<year>2014</year>). <article-title><italic>P</italic>-curve: A key to the file-drawer.</article-title> <source>Journal of Experimental Psychology: General</source>, <volume>143</volume>(<issue>2</issue>), <fpage>534</fpage>–<lpage>547</lpage>. <pub-id pub-id-type="doi">10.1037/a0033242</pub-id><pub-id pub-id-type="pmid">23855496</pub-id></mixed-citation></ref>
<ref id="r34"><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name name-style="western"><surname>Sommet</surname>, <given-names>N.</given-names></string-name>, <string-name name-style="western"><surname>Weissman</surname>, <given-names>D. L.</given-names></string-name>, <string-name name-style="western"><surname>Cheutin</surname>, <given-names>N.</given-names></string-name>, &amp; <string-name name-style="western"><surname>Elliot</surname>, <given-names>A. J.</given-names></string-name></person-group> (<year>2023</year>). <article-title>How many participants do I need to test an interaction? Conducting an appropriate power analysis and achieving sufficient power to detect an interaction.</article-title> <source>Advances in Methods and Practices in Psychological Science</source>, <volume>6</volume>(<issue>3</issue>). <pub-id pub-id-type="doi">10.1177/25152459231178728</pub-id></mixed-citation></ref>
	<ref id="r35"><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name name-style="western"><surname>van de Schoot</surname>, <given-names>R.</given-names></string-name>, <string-name name-style="western"><surname>Depaoli</surname>, <given-names>S.</given-names></string-name>, <string-name name-style="western"><surname>King</surname>, <given-names>R.</given-names></string-name>, <string-name name-style="western"><surname>Kramer</surname>, <given-names>B.</given-names></string-name>, <string-name name-style="western"><surname>Märtens</surname>, <given-names>K.</given-names></string-name>, <string-name name-style="western"><surname>Tadesse</surname>, <given-names>M. G.</given-names></string-name>, <string-name name-style="western"><surname>Vannucci</surname>, <given-names>M.</given-names></string-name>, <string-name name-style="western"><surname>Gelman</surname>, <given-names>A.</given-names></string-name>, <string-name name-style="western"><surname>Veen</surname>, <given-names>D.</given-names></string-name>, <string-name name-style="western"><surname>Willemsen</surname>, <given-names>J.</given-names></string-name>, &amp; <string-name name-style="western"><surname>Yau</surname>, <given-names>C.</given-names></string-name></person-group> (<year>2021</year>). <article-title>Bayesian statistics and modelling.</article-title> <source>Nature Reviews. Methods Primers</source>, <volume>1</volume>(<issue>1</issue>), <elocation-id>1</elocation-id>. <pub-id pub-id-type="doi">10.1038/s43586-020-00001-2</pub-id></mixed-citation></ref>
<ref id="r36"><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name name-style="western"><surname>Wagenmakers</surname>, <given-names>E.-J.</given-names></string-name>, <string-name name-style="western"><surname>Marsman</surname>, <given-names>M.</given-names></string-name>, <string-name name-style="western"><surname>Jamil</surname>, <given-names>T.</given-names></string-name>, <string-name name-style="western"><surname>Ly</surname>, <given-names>A.</given-names></string-name>, <string-name name-style="western"><surname>Verhagen</surname>, <given-names>A. J.</given-names></string-name>, <string-name name-style="western"><surname>Love</surname>, <given-names>J.</given-names></string-name>, <string-name name-style="western"><surname>Selker</surname>, <given-names>R.</given-names></string-name>, <string-name name-style="western"><surname>Gronau</surname>, <given-names>Q. F.</given-names></string-name>, <string-name name-style="western"><surname>Šmíra</surname>, <given-names>M.</given-names></string-name>, <string-name name-style="western"><surname>Epskamp</surname>, <given-names>S.</given-names></string-name>, <string-name name-style="western"><surname>Matzke</surname>, <given-names>D.</given-names></string-name>, <string-name name-style="western"><surname>Rouder</surname>, <given-names>J. N.</given-names></string-name>, &amp; <string-name name-style="western"><surname>Morey</surname>, <given-names>R. D.</given-names></string-name></person-group> (<year>2018</year>). <article-title>Bayesian inference for psychology. Part I: Theoretical advantages and practical ramifications.</article-title> <source>Psychonomic Bulletin &amp; Review</source>, <volume>25</volume>(<issue>1</issue>), <fpage>35</fpage>–<lpage>57</lpage>. <pub-id pub-id-type="doi">10.3758/s13423-017-1343-3</pub-id><pub-id pub-id-type="pmid">28779455</pub-id></mixed-citation></ref>
<ref id="r37"><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name name-style="western"><surname>Wilson</surname>, <given-names>B. M.</given-names></string-name>, &amp; <string-name name-style="western"><surname>Wixted</surname>, <given-names>J. T.</given-names></string-name></person-group> (<year>2018</year>). <article-title>The prior odds of testing a true effect in cognitive and social psychology.</article-title> <source>Advances in Methods and Practices in Psychological Science</source>, <volume>1</volume>(<issue>2</issue>), <fpage>186</fpage>–<lpage>197</lpage>. <pub-id pub-id-type="doi">10.1177/2515245918767122</pub-id></mixed-citation></ref>
</ref-list><fn-group><fn fn-type="conflict">
<p content-type="fn-title">The authors have declared that no competing interests exist.</p></fn><fn fn-type="financial-disclosure">
<p content-type="fn-title">The authors have no funding to report.</p></fn></fn-group>
<bio id="bio1">
<p><bold>Geoffrey Schweizer</bold> holds a PhD in psychology and is currently an associate professor at Heidelberg University’s Department of Sport Psychology in Heidelberg, Germany. His research focuses on judgment and decision making in sports and on nonverbal behavior. Furthermore, he is interested in methodological and meta-scientific questions.</p>
</bio>
<bio id="bio2">
<p><bold>Maximilian Köppel</bold> is a PhD candidate in exercise physiology/sport science, investigating the implementation of exercise interventions for cancer patients at the National Center for Tumor Diseases Heidelberg. Besides examining the beneficial effects of exercise on health, he is interested in statistical methods and methodological questions that can further improve the scientific discourse in his research area.</p>
</bio>
	<sec sec-type="supplementary-material" id="sp1"><title>Supplementary Materials</title>
		<table-wrap position="anchor">
			<table frame="void" style="background-color:#f3f3f3">
				<col width="60%" align="left"/>
				<col width="40%" align="left"/>
				<thead>
					<tr>
						<th>Type of supplementary materials</th>
						<th>Availability/Access</th>
					</tr>
				</thead>
				<tbody>
					<tr>
						<th colspan="2">Code</th>						
					</tr>
					<tr>
						<td>Annotated R code for all simulations.</td>
						<td><xref ref-type="bibr" rid="r31">Schweizer &amp; Köppel (2025b)</xref></td>
					</tr>
					<tr>
						<th colspan="2">Material</th>						
					</tr>
					<tr>
						<td>a. Appendix A: Replication projects’ inclusion of interactional hypotheses.</td>
						<td><xref ref-type="bibr" rid="r30">Schweizer &amp; Köppel (2025a)</xref></td>
					</tr>
					<tr>
						<td>b. Appendix B: Simulations.</td>
						<td><xref ref-type="bibr" rid="r30">Schweizer &amp; Köppel (2025a)</xref></td>
					</tr>					
				</tbody>
			</table>
		</table-wrap>		
	</sec>
<ack>
<p>The authors have no additional (i.e., non-financial) support to report.</p>
</ack>
</back>
</article>
