IQ has played a prominent part in developmental and adult psychology for decades. In the absence of a clear theoretical model of internal cognitive functions, however, construct validity for IQ tests has always been difficult to establish. Test validity, therefore, has always been indirect, by correlating individual differences in test scores with what are assumed to be other criteria of intelligence. Job performance has, for several reasons, been one such criterion. Correlations of around 0.5 have been regularly cited as evidence of test validity, and as justification for the use of the tests in developmental studies, in educational and occupational selection and in research programs on sources of individual differences. Here, those correlations are examined together with the quality of the original data and the many corrections needed to arrive at them. It is concluded that considerable caution needs to be exercised in citing such correlations for test validation purposes.
IQ has now been used as a measure of cognitive functioning for over a century. It has played a prominent part in developmental studies in many ways: as an index of normal development; for clinical diagnostics; as a descriptor of individual differences in cognitive ability; as an explanation for differences in achievement or success in the world; as a predictor of future success in school, training, and occupational selection; and as an index for exploring causes of individual differences in cognitive ability. For example, it is argued that the current search for associations between molecular genetic variations and IQ “will transform both developmental psychology and developmental psychopathology” (Plomin & Rutter,
1998, p. 1223; see also Plomin, 2013). Likewise, Kovas, Haworth, Dale, and Plomin (2007) say that their conclusions on the heritability of IQ “have far-reaching implications for education and child development as well as molecular genetics and neuroscience” (p. vii). Clearly, a lot hinges on the validity of the test, especially as such studies are very expensive.
The validity of an IQ test—or what it actually measures—on the other hand, has always been a difficult subject. Since Galton in the 1880s (Galton, 1883), and Spearman (1927) a little later, it has been widely assumed that the test measures “intelligence,” commonly referred to as “general cognitive ability,” or g. The identity of that ability, however, has never been agreed upon; its function has only been characterized metaphorically, as a kind of pervasive cognitive energy, power, or capacity, by analogy with physical strength. In consequence, measuring it has always been indirect, creating perpetual debate and controversy about the validity of the tests. This article is about such validity.
Validity of IQ Tests
In scientific method, generally, we accept external, observable, differences as a valid measure of an unseen function when we can mechanistically relate differences in one to differences in the other (e.g., height of a column of mercury and blood pressure; white cell count and internal infection; erythrocyte sedimentation rate (ESR) and internal levels of inflammation; breath alcohol and level of consumption). Such measures are valid because they rely on detailed, and widely accepted, theoretical models of the functions in question. There is no such theory for cognitive ability nor, therefore, of the true nature of individual differences in cognitive functions. A number of analyses of the inter-correlations of aspects of test scores have produced theories of the
statistical structure of score patterns, as in the Cattell-Horn-Carroll theory (see McGrew, 2005); but this is not the same thing as a detailed characterization of the function itself. Accordingly, as Deary (2001) put it, “There is no such thing as a theory of human intelligence differences—not in the way that grown-up sciences like physics or chemistry have theories” (p. ix).
The alternative strategy has been to attempt to establish test validity indirectly, by comparison of a proposed measure with what is considered to be some other expression of intelligence. Galton (
1883) chose differences in social esteem; subsequently, scholastic performance and age-related differences were chosen. Typically, in constructing a test, cognitive problems or items thought to engage aspects of intelligence are devised for presentation to testees in trials. Those items on which differences in performance agree with differences in the criterion are put together to make up an intelligence test. There are many other technical aspects of test construction, but this remains the essential rationale. Thus, nearly all contemporary tests, such as the Stanford-Binet or the Woodcock-Johnson tests, rely on correlations of scores with those from other IQ or achievement tests as evidence of validity.
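The item-selection rationale just described can be sketched in code. The following is a toy illustration only: the criterion, item counts, effect size, and the 0.3 retention threshold are all invented for the example, not taken from any published test-construction procedure.

```python
import random
import statistics

def pearson(x, y):
    """Pearson correlation between two equal-length score lists."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    sx, sy = statistics.pstdev(x), statistics.pstdev(y)
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / (len(x) * sx * sy)

random.seed(0)
n_testees, n_items = 500, 40
criterion = [random.gauss(0, 1) for _ in range(n_testees)]  # e.g., school marks

# Trial items: the first 20 partly track the criterion; the rest are pure noise.
item_scores = [
    [(0.6 if i < 20 else 0.0) * c + random.gauss(0, 1) for c in criterion]
    for i in range(n_items)
]

# Retain only the items whose score differences agree with the criterion.
test = [i for i, scores in enumerate(item_scores)
        if pearson(scores, criterion) > 0.3]
```

By construction, the retained items are those that echo the criterion, so a test assembled this way will correlate with the criterion because it was selected to do so.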
However, the question of whether such a procedure measures the fundamental cognitive ability (or g) assumed has continued to haunt the field. Measuring what we think is being measured is known as the construct validity of the test—something that cannot, by definition, be measured indirectly. Generally, a test is valid for measuring a function if (a) the function exists and is well characterized; and (b) variations in the function demonstrably cause variation in the measurement outcomes. Validation research should be directed at the latter, not merely at the relation between what are, in effect, assumed to be independent tests of that function (Borsboom, Mellenbergh, & van Heerden, 2005).
It is true to say that various attempts have been made to correlate test scores with some cortical/physiological measures in order to identify cerebral “efficiency” as the core of intelligence. However, as Nisbett et al. (
2012), in their review for the American Psychological Association, point out, such studies have been inconsistent:
Patterns of activation in response to various fluid reasoning tasks are diverse, and brain regions activated in response to ostensibly similar types of reasoning (inductive, deductive) appear to be closely associated with task content and context. The evidence is not consistent with the view that there is a unitary reasoning neural substrate. (p. 145)
Haier et al. (
2009) likewise conclude, after similarly inconsistent results, that “identifying a ‘neuro-g’ will be difficult” (p. 136). Associations have also been sought between various elementary tasks, such as reaction time, and IQ test scores. These have been difficult to interpret because the correlations are (a) small (leaving considerable variance, as well as the true causes, unexplained) and (b) subject to a variety of other factors, such as anxiety, motivation, experience with the equipment, and training or experience of various kinds, such as video game playing (e.g., Green & Bavelier, 2012).
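To make the point about small correlations concrete: the proportion of criterion variance a correlation accounts for is its square, so even moderate correlations leave most of the variance unexplained. The sketch below is plain arithmetic, not data from any study.

```python
# r**2 gives the share of criterion variance a correlation accounts for.
for r in (0.2, 0.3, 0.5):
    shared = r ** 2
    print(f"r = {r}: explains {shared:.0%}, leaves {1 - shared:.0%} unexplained")
# → r = 0.2: explains 4%, leaves 96% unexplained
# → r = 0.3: explains 9%, leaves 91% unexplained
# → r = 0.5: explains 25%, leaves 75% unexplained
```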
Accordingly, validation of IQ tests has continued to rely on correlation with other tests. That is, test validity has been forced to rely, not on calibration with known internal processes, but on correlation with other assumed expressions, or criteria, of intelligence. This is usually referred to as “predictive” or “criterion” validity. In almost all validity claims for IQ those criteria have been educational achievement, occupational level and job performance.
Predictive Validity of IQ
It is undoubtedly true that moderate correlations between IQ and those criteria have been reported. For example, in their recent review Nisbett et al. (
2012) say “the measurement of intelligence—which has been done primarily by IQ tests—has utilitarian value because it is a reasonably good predictor of grades at school, performance at work, and many other aspects of success in life” (p. 2). But how accurate and meaningful are such correlations?
It is widely accepted that test scores predict school achievement moderately well, with correlations of around 0.5 (Mackintosh,
2011). The problem lies in the possible self-fulfilment of this prediction, because the measures are not independent; rather, they are merely different versions of the same test. Since the first test designers, such as Binet and Terman, test items have been devised either with an eye on the kinds of knowledge and reasoning taught to, and required from, children in schools, or from an attempt to match an impression of the cognitive processes required in schools. This matching is an intuitively guided, rather than a theoretically guided, process, even with nonverbal items such as those in the Raven's Matrices. As Carpenter, Just, and Shell (
1990) explained after examining John Raven's personal notes, “ … the description of the abilities that Raven intended to measure are primarily characteristics of the problems, not specifications of the requisite cognitive processes” (p. 408).
In other words, a correlation between IQ and school achievement may emerge because the test items demand the very kinds of (learned) linguistic and cognitive structures that are also the currency of schooling (Olson,
2005). As Thorndike and Hagen (1969) explained, “From the very way in which the tests were assembled [such correlation] could hardly be otherwise” (p. 325). Evidence for this is that correlations between IQ and school achievement tests tend to increase with age (Sternberg, Grigorenko, & Bundy, 2001). And this is why parental drive and encouragement with their children's school learning improves the children's IQ, as numerous results confirm (Nisbett, 2009; Nisbett et al., 2012).
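The non-independence argument can be illustrated with a small simulation: give two nominally different measures a common learned component (schooling) and they will correlate at around the commonly reported level even though no further underlying ability is involved. The weights and noise levels below are arbitrary choices made to land near r = 0.5, not estimates from real data.

```python
import random
import statistics

def pearson(x, y):
    """Pearson correlation between two equal-length score lists."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    sx, sy = statistics.pstdev(x), statistics.pstdev(y)
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / (len(x) * sx * sy)

random.seed(42)
n = 20_000
schooling = [random.gauss(0, 1) for _ in range(n)]  # shared learned content

# Both measures mix the shared component with independent noise.
iq_score    = [0.7 * s + random.gauss(0, 0.71) for s in schooling]
achievement = [0.7 * s + random.gauss(0, 0.71) for s in schooling]

r = pearson(iq_score, achievement)
# Theoretical value: 0.7**2 / (0.7**2 + 0.71**2) ≈ 0.49; the sample estimate
# lands close to it for n this large.
print(round(r, 2))
```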
Similar doubts arise around the use of occupational level, salary, and so on, as validatory criteria. Because school achievement is a strong determinant of level of entry to the job market, the frequently reported correlation (
r ∼ 0.5) between IQ and occupational level, and, therefore, income, may also be, at least partly, self-fulfilling (Neisser et al.,
1996). Again, the measures may not be independent.
The really critical issue, therefore, surrounds the question of whether IQ scores predict individual differences in the seemingly more independent measure of job performance. Indeed, the correlation of IQ scores with job performance is regularly cited as underpinning the validity of IQ tests. Furnham (2008) probably reflects most views when he says that “there is a large and compelling literature showing that intelligence is a good predictor of both job performance and training proficiency at work” (p. 204). In another strong commentary, Kuncel and Hezlett (2010) refer to “this robust literature” as “facts” (p. 342). Ones, Viswesvaran, and Dilchert (2005) say that “Data are resoundingly clear: [measured cognitive ability] is the most powerful individual differences trait that predicts job performance … Not relying on it for personnel selection would have serious implications for productivity. There is no getting away from or wishing away this fact” (p. 450; see also Ones, Dilchert, & Viswesvaran, 2012). Drasgow (2012) describes the correlation as “incontrovertible.” Hunter and Schmidt (1983) even attached a dollar value to it when they claimed that the U.S. economy (even then) would save $80 billion per year if job selection were universally based on IQ testing.
Unfortunately, nearly all authors merely offer uncritical citations of the primary sources in support of their statements (for exceptions see, for example, Wagner,
1994, and in the following sections). Instead of scrutiny of the true nature of the evidence, a conviction regarding a “large and compelling literature” (Furnham, 2008, p. 204) seems to have developed from a relatively small number of meta-analyses over a cumulative trail of secondary citations. It seems important, therefore, to take a closer look at the quality of the data and methods behind the much-cited associations between IQ and job performance, and at how they have been interpreted. The aim here is not to offer an exhaustive review of such studies, nor a sweeping critique of meta-analyses, which have many legitimate uses. Indeed, the approach devised by Schmidt and Hunter (1998), which we go on to discuss, brought a great deal of focus and discipline to the area, and we agree with Guion (2011) that it must be recognized as a major methodological advance. Rather, our aim is to emphasize the care needed in interpreting correlations when they are based on corrections to original data of uncertain quality and are then invoked as evidence of IQ test validity.
Predicting Job Performance From IQ Scores
In contrast with the confidence found in secondary reports, even a cursory inspection of the primary sources shows that they are highly varied in terms of data quality and integrity, involving often-small samples and disparate measures usually obtained under difficult practical constraints in single companies or institutions. Their collective effect has mainly arisen from their combination in a few well-known meta-analyses. Hundreds of studies prior to the 1970s reported that correlations between IQ tests and job performance were low (approximately 0.2–0.3) and variable (reviewed by Ghiselli,
1973). These results were widely accepted as representative of the disparate contexts in which people actually work. Then Schmidt and Hunter (see Schmidt & Hunter, 2003, for a historical account) quite reasonably considered the possibility that this large body of results had been attenuated by various statistical artifacts, including sampling error, unreliability of the measuring instruments, and restriction of range. They devised methods for correcting these artifacts and for incorporating the studies into meta-analyses. The corrections doubled the correlations, to approximately 0.5. Nearly all studies cited in favor of IQ validity are either drawn from the Schmidt and Hunter meta-analyses or from others using the correction methods developed for them.
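The arithmetic of such corrections is easy to reproduce. The sketch below applies the standard disattenuation formula for criterion unreliability and the Thorndike Case II range-restriction formula. The input values (observed r = 0.25, supervisor-rating reliability 0.52, range-restriction ratio 1.5) are illustrative assumptions chosen to show how a correlation of about 0.25 can become about 0.5; they are not figures from any particular meta-analysis.

```python
import math

def correct_for_unreliability(r_obs: float, ryy: float) -> float:
    """Disattenuate an observed validity for unreliability in the criterion."""
    return r_obs / math.sqrt(ryy)

def correct_for_range_restriction(r: float, u: float) -> float:
    """Thorndike Case II; u = SD in applicant pool / SD among those hired."""
    return (r * u) / math.sqrt(1 + r ** 2 * (u ** 2 - 1))

r_obs = 0.25  # assumed uncorrected IQ-job performance correlation
ryy = 0.52    # assumed reliability of supervisor ratings
u = 1.5       # assumed degree of range restriction

r1 = correct_for_unreliability(r_obs, ryy)
r2 = correct_for_range_restriction(r1, u)
print(round(r1, 2), round(r2, 2))  # → 0.35 0.48
```

Each correction is only as good as the reliability and restriction estimates fed into it, which is precisely the point at issue: the doubled correlation inherits the uncertainty of those assumed inputs.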