I have long been interested in the use of general mental ability (GMA; intelligence) tests for personnel selection decisions. There is ample evidence demonstrating the superiority of GMA vis-à-vis other predictor constructs (e.g., personality, interests) for the prediction of important workplace outcomes such as job and training performance (Scherbaum, Goldstein, Yusko, Ryan, & Hanges, 2012; Schmidt & Hunter, 1998). However, I have never thought that we have reached the point where additional research on the validity of GMA for job performance is unnecessary. In fact, I have always thought that we need more such research, given some important research gaps discussed later. Moreover, better predicting job performance has been, and will continue to be, one of the most important research questions among industrial and organizational (I/O) psychologists, and GMA will always play a crucial role in predicting job performance. Below, after briefly reviewing the current state of knowledge, I discuss three major areas where I call for more research on the relationship between GMA and job performance.
Cumulative research evidence indicates that GMA (manifested through the abilities to learn, reason, and solve problems) is the single best predictor of job (and training) performance and that its validity increases as the complexity level (in terms of information processing) of the job in question increases (Schmidt & Hunter, 1998). By synthesizing eight independent, minimally overlapping meta-analyses conducted in North America and Europe, Schmidt, Shaffer, and Oh (2008) showed that the mean meta-analytic operational validity estimate (ρ) of GMA in predicting overall job performance is as high as .65 (.78, .61, and .55 for high-, medium-, and low-complexity jobs, respectively). For training performance, the mean meta-analytic operational validity estimate (ρ) of GMA is as high as .67 (.80, .69, and .56 for high-, medium-, and low-complexity jobs, respectively). Furthermore, in a survey of 85 editorial board members from four top journals in human resource management (e.g., the Journal of Applied Psychology and Personnel Psychology), Rynes, Giluk, and Brown (2007, p. 989) found that the importance of using GMA as a personnel selection tool was rated as the most fundamental finding in human resource management research that all practicing HR managers should know about. Three areas where more research on the relationship between GMA and job performance is urgently needed are discussed below.
1. We Have Done Relatively Little Research on the Relationship between GMA and Non-Task Performance Criteria.
In examining the relationship between GMA and job performance, we have focused almost exclusively on “task” performance (or overall job performance). Regrettably, despite the importance of non-task performance (organizational citizenship behavior [OCB], counterproductive work behavior [CWB]) in today’s fast-changing and team-based workplace, we have not yet conducted sufficient research on the relationships between GMA and “non-task” performance criteria. In particular, given the expanded, multi-dimensional criterion domain of job performance (Rotundo & Sackett, 2002), one may ask: “Is the validity of GMA for non-task performance as strong as that of GMA for task performance?” Earlier, Borman and Motowidlo (1993) argued that “the major source of variation in contextual (non-task) performance, however, is not proficiency, but volition and predisposition… predispositional variables represented by personality characteristics” (p. 74). Many I/O psychologists often (mis)interpret this as indicating that GMA is not a valid predictor of non-task performance criteria. However, Salgado (1999, p. 10), in his review of personnel selection research conducted between 1991 and 1997, called for a systematic, empirical test of this argument: “A future line of research will be to check Borman and Motowidlo’s (1993) suggestion that cognitive abilities predict task performance and not contextual performance”. It is fair to state that we have not yet responded to his call, and the available research findings are rather mixed. For example, Motowidlo and Van Scotter (1994) found, based on 174 U.S. Air Force mechanics, that GMA was more highly (though still modestly) related to contextual performance (r = .15) than to task performance (r = -.01). Van Scotter and Motowidlo (1996), using two independent samples of U.S. Air Force mechanics (Ns = 857–873), found that GMA was related neither to task performance (rs = -.06 and -.04 for the two samples, respectively) nor to contextual performance (interpersonal facilitation for the first sample and job dedication for the second; rs = -.05 and -.01, respectively). Indeed, it is odd that GMA was not related to job performance (in particular, task performance) in Van Scotter and Motowidlo (1996), because both Project A (McHenry et al., 1990) and Schmidt, Hunter, and Outerbridge (1986) found GMA to have strong validity for job performance based on large samples in the same military setting. In particular, in one Project A result, McHenry et al. (1990) reported moderate yet meaningful operational validity (ρ) estimates of GMA for three non-task performance criteria (.31, .16, and .20 for Effort and Leadership, Personal Discipline, and Physical Fitness and Military Bearing, respectively; .22 on average); as expected, the operational validities for the task performance categories were much stronger (McHenry, Hough, Toquam, Hanson, & Ashworth, 1990, Table 4).
Recently, Dilchert, Ones, Davis, and Rostow (2007) found, based on a large police officer applicant sample (N = 816), that the operational validity (ρ) of GMA for objectively measured CWB was -.33. It may seem that this study clearly shows that GMA is important for predicting CWB. However, it should be noted that the relationship between GMA and CWB may be more complicated, given the possibility that highly intelligent employees are less likely to get caught even when they engage in wrongdoing (e.g., “catch me if you can”). That is, highly intelligent employees may engage in CWB more often than less intelligent employees (given that they are smart enough not to get caught) or less often (given that they better anticipate the negative consequences of their wrongdoing; Dilchert et al., 2007).
Beyond the studies mentioned above, there is surprisingly little published research that relates GMA to both task and non-task performance simultaneously in the same sample. Thus, it seems fair to say that past research findings are at least inconclusive about the validity of GMA for non-task performance criteria. Accordingly, more theoretical as well as empirical research on the relationships of GMA to both task and non-task performance is necessary in order to establish a more complete understanding of the roles that GMA plays in predicting the expanded criterion domain of job performance. Research along this line should also examine potential moderators (e.g., job complexity in terms of “emotional demands” or “interpersonal interaction”) and mediators (e.g., contextual/teamwork knowledge) of these relationships. Another research question worth empirical examination is the interactive effect of GMA and personality on OCB or CWB; note that this interaction was not supported in predicting task and overall job performance in prior research (e.g., Mount, Barrick, & Strauss, 1999). It is plausible that intelligent yet dishonest (disagreeable) individuals are more likely than others to engage in CWB.
2. We Do Not Yet Know Whether GMA Is the Best Predictor of Job Performance Outside North America and Europe.
Herriot and Anderson (1997, p. 28) rightfully noted: “The findings from [North American] meta-analyses have been unreservedly cited by personnel psychologists in other countries and appear to have been unquestioningly accepted as being generalizable to different national contexts. Social, cultural, legislative and recruitment and appraisal differences have been overlooked… These findings may indeed be transferable to other countries, but then again they may not be, given the pervasive cultural differences.” Because GMA is an all-purpose tool, it seems there is little possibility that it is invalid for any job performance criterion in any culture. Still, I cannot agree more with Herriot and Anderson (1997) that it is better to have such cross-cultural validity generalization evidence. Given the lack of such evidence, a legitimate research question is: “Do we know whether GMA is the single best predictor of job performance in non-Euro-American cultures?” In particular, we (I/O psychologists in Euro-American countries) do not currently have systematic validity generalization evidence for GMA from Asian countries (e.g., China, Taiwan, Singapore), the recently emerging BRIC economies (Brazil, Russia, India, and China), or Africa. One exception is an unpublished small-scale meta-analytic study by Oh (2009); based on three South Korean employee samples, he found that the operational validity estimate (ρ) of GMA in predicting job performance was .53. Although I do not think GMA is invalid in other cultures, I urge that more validation studies be conducted in these cultures, given that the era of globalization demands evidence-based cross-cultural understanding in every corner of human resource management (Arvey, Bhagat, & Salas, 1991). If GMA is found to be less (or more) valid outside Euro-American countries, we will also need to seek an explanation, probably by considering cross-cultural differences in selection and performance management practices, labor market conditions, and social values.
3. Many Practitioners Do Not Know That GMA Is the Best Predictor of Job Performance.
Rynes, Colbert, and Brown (2002) sent a survey to 5,000 Society for Human Resource Management (SHRM) members whose titles were at the manager level or above. These respondents not only occupied important roles in designing and implementing HR practices but also had, on average, 14 years of work experience in HR. Rynes et al. (2002) asked them one question relevant to this paper: “Do companies that screen job applicants for values have higher performance than those that screen for intelligence?” The answer to this question is (definitely) no! Shockingly, however, 57% of respondents said “yes” to the question. That is, more than half of the respondents did not know the most fundamental piece of evidence from over 100 years of research in personnel selection; namely, that intelligence (or GMA) is the single best predictor of job performance. Given the respondents’ high-level HR positions and considerable experience in HR, I speculate that the percentage of wrong answers would be even higher among less experienced HR staff. Relatedly, Rynes, Giluk, and Brown (2007) also found that only four articles about the use of GMA in applied settings had been published in three major practitioner and bridge journals between 2000 and 2005 (zero of 785 articles in HR Magazine, two of 168 articles in Human Resource Management, and two of 537 articles in Harvard Business Review). Accordingly, I believe that there is a considerable gap between scientific findings and relevant practices in the area of personnel selection or staffing (Le, Oh, Shaffer, & Schmidt, 2007; Rynes et al., 2002; Rynes et al., 2007): many HR managers are unaware of (or do not believe) the most fundamental research evidence and, as a result, are likely to fail to use valid employment selection procedures. This clearly shows that we, I/O psychologists, should do a better job of disseminating our research findings to practitioners, even though we may have done a good job within our own academic community (Rynes et al., 2007; Scherbaum et al., 2012). To that end, we should (re)direct our attention to realizing the scientist-practitioner model and evidence-based human resource management (Le et al., 2007; Rynes et al., 2002; Rynes et al., 2007).
There are two other areas in personnel selection to which I would like I/O psychologists to pay attention in the near future. First, what we currently know about how to select for high performance is based exclusively on employees, and we do not yet know whether these findings generalize to executives. Given the critical roles that executives play in the success or failure of an organization, we, I/O psychologists, should pay more attention to executive selection. This may be a more cost-effective and preemptive way to reduce agency costs than compensation-based, prescriptive, and costly interventions. Second, our current knowledge about personnel selection is based on the relationship between “individual” characteristics (including GMA) and “individual” performance, so we do not know whether this relationship holds (i.e., is homologous) at the unit or organizational level. Thus, personnel selection research should incorporate multi-level modeling principles, and more studies should be conducted at supra-individual levels.
In summary, I argue that we, I/O psychologists, still need more empirical and bridge research on the relationship between GMA and job performance, particularly research focusing on non-task performance, non-Euro-American contexts, and how to close the science-practice gap in employee selection, the widest such gap among the various areas of human resource management (i.e., how to realize evidence-based employee selection). We also need more research on executive selection, as well as more research that applies multi-level modeling principles to traditional single-level validation research. I believe that the prospects for additional research on the relationship between GMA and job performance are still rosy, not bleak.