Noah Carl Controversy: FAQ

Following the termination of my fellowship at St Edmund’s College, Cambridge, this FAQ responds to criticisms of my research and collaborations.

Noah Carl
17 min read · May 7, 2019

1. What is your response to the College’s statement explaining their decision to fire you?

I will be responding to this statement in due course. But for the moment, I am still receiving legal advice. Needless to say, my interpretation of events differs from that of the College.

2. Last December, 586 academics signed an open letter accusing you of “racist pseudoscience”. That many academics can’t all be wrong, can they?

Given that the open letter demonstrated a basic lack of understanding of the relevant science, it would seem that 586 academics can indeed all be wrong. For example, as Jeff McMahan pointed out in his comments for the first Quillette Editorial:

One passage in the open letter demands that the various institutions cited “issue a public statement dissociating themselves from research that seeks to establish correlations between race, genes, intelligence and criminality in order to explain one by the other.” This seems to imply that it is illegitimate to seek to explain any one of the four characteristics by reference to any one of the others, and thus that no aspect of intelligence can be explained by an individual’s genes. I would not trust the competence of anyone who endorses a claim that has that implication to judge the work of a candidate for a research fellowship.

And Professor McMahan is absolutely correct: the signatories of the open letter were calling for St Edmund’s College to “issue a public statement dissociating themselves” from research backed by overwhelming scientific evidence. In fact, the contribution of genes to variation in human intelligence has been widely accepted by psychologists since at least 1996, when the report ‘Intelligence: Knowns and Unknowns’ was published by the American Psychological Association (APA). This report, co-authored by Ulric Neisser and his colleagues in the aftermath of The Bell Curve debate, concluded that “a sizable part of the variation in intelligence test scores is associated with genetic differences among individuals”. Evidence for a genetic contribution to variation in human intelligence has only strengthened since the publication of the APA report.

3. Okay, so individual differences in intelligence might have a genetic component. But what about differences between groups — they couldn’t possibly have a genetic component, could they?

Contrary to the implications of the open letter, I have never actually done any original research on racial or population differences in intelligence. The only contribution I have made to this area of study is a research ethics paper arguing that “it cannot simply be taken for granted that, when in doubt, stifling debate around taboo topics is the ethical thing to do”. While this paper does not claim that genes do contribute to group differences in intelligence, it does entertain the possibility that they could contribute to such differences.

I consider this to be a perfectly defensible scientific position. We know that there are group differences in intelligence, both across countries and between groups within countries. The question is why. And there is no good reason to rule out the possibility that genes do make some contribution to these differences. It may turn out that genes make zero contribution, or it may turn out that they make a contribution greater than zero. Deciding in advance that they make zero contribution is not science. It is proof by assertion. As James Flynn has noted, the hypothesis that genes contribute to group differences “is intelligible and subject to scientific investigation”. I trust 1 James Flynn a lot more than 586 petitioners.

4. But if genes contribute to group differences in intelligence, wouldn’t that mean the Nazis were right… or something?

No, it would absolutely not. First and most importantly, “political equality is a moral stance, not an empirical hypothesis”, as Steven Pinker has noted. Please read my research ethics paper for a longer explanation (I really would recommend it).

Second, from what we can tell, the Nazis actually opposed intelligence research. To quote from Heiner Rindermann’s recent book:

Contradicting common beliefs, National Socialists were opposed to intelligence research (Becker, 1938; Jaensch, 1938): in their view, intelligence research would represent a ‘supremacy of Bourgeoisie spirit’ (Jaensch, 1938, p. 2); intelligence measurement would be an instrument ‘of Jewry’ to ‘fortify its hegemony’ (p. 3); selection in schools according to intelligence would stand for a ‘system of examination of Jewish origin’ (p. 4), especially the concept of intelligence as a ‘one-dimensional dimension’ (p. 3) and ‘one common central factor’ (Becker, 1938, p. 24). Because people differ and therefore intelligence differs (p. 4) they called for an ‘intelligence measurement according to a national and typological point of view’ (p. 15); for Germans they asked for a measurement of ‘realism’, ‘conscientiousness’ and ‘actually of the character value of intelligence’. They were opposed to a measurement solely of ‘theoretical intelligence’, of ‘intellectualism’ (Becker, 1938, p. 22); instead they favoured ‘practical intelligence’ (p. 18)

So it seems that the Nazis’ opposition to intelligence research stemmed in part from their anti-Semitism. By the logic of my critics, this would imply that opposing intelligence research is racist.

5. Didn’t you go to a secret “eugenics conference” with “Neo-Nazi links”?

No. But I did attend and speak at a meeting of individual differences researchers called the London Conference on Intelligence (LCI). This conference was widely mischaracterised in the media, and some of us who attended responded via a peer-reviewed correspondence in the journal Intelligence. To quote from our correspondence:

Contrary to allegations, the annual LCI conference was not secret but invitation only (like many small conferences). The attendees had a range of theoretical orientations and research interests, and their attendance does not imply agreement with the views of all of the other attendees, be they political, moral or scientific. The conference program covered many topics related to the fields of intelligence and personality research and there was no exclusive focus on ‘eugenics’ or IQ differences among populations (although both issues were discussed)

Furthermore:

The overwhelming preponderance of talks dealt exclusively with data or substantive theory. Moreover 48% of talks were associated with (either based on or in most cases yielding) ‘mainstream’ publications over four years. Thus, LCI’s productivity is comparable to that of conferences in biomedical science — a field in which, according to one meta-analysis, 44.5% of conference presentations yield publications (Scherer, Langenberg, & von Elm, 2008). Finally, the speakers originated from 13 different countries in total, including Japan, China, Brazil and Slovakia, thus the conference can reasonably be described as cosmopolitan as opposed to “white supremacist” in character.

So the LCI’s productivity, measured by the percentage of presentations associated with publications in ‘mainstream’ journals, was about average for conferences in the field of biomedical science:

The ‘mainstream’ journals in which articles have appeared include (in no particular order) Intelligence, Personality and Individual Differences, Learning and Individual Differences, Frontiers in Psychology, Frontiers in Human Neuroscience, Journal of Experimental Psychology: General, Evolutionary Psychological Science, Twin Research and Human Genetics, Cortex, and Evolutionary Behavioral Sciences. Academic monographs that either formed the basis of presentations or incorporated results presented at LCI have been published with Cambridge University Press, Palgrave Macmillan, and as part of the Journal of Social, Political, and Economic Studies occasional monograph series.

Hence if the LCI constitutes “racist pseudoscience”, then all the above-mentioned journals and publishers would presumably fall into the same category. Either our critics are mistaken or there’s an awful lot of “racist pseudoscience” out there…

I am not aware of any attendee at the conference who holds “Neo-Nazi” views.

6. But don’t some of the researchers who attended the LCI hold “far-right” views?

Some of them may indeed hold such views. However, as a rational adult in control of his faculties, I am capable of interacting with people who hold different views from me. (I realise this may come as a shock to some readers.) Nobody at the conference tried to coax me into adopting any kind of “far-right” agenda.

7. Aren’t you a member of the “alt-right”?

No, I am definitely not a member of the “alt-right”. My political views vary from one issue to another: on some issues I take more left-wing stances, on others more libertarian ones. But overall they reflect those of a moderate conservative.

8. Haven’t you published several papers in a non-peer-reviewed “pseudojournal”?

I have published several papers in the OpenPsych journals, which use a form of open peer review. This review system is clearly laid out on the journals’ website. Hence it is false to claim that the journals are not peer-reviewed. Nor can it be claimed that any attempt has been made to conceal the journals’ review system: upon reaching the homepage, you only have to click ‘About’ to find out how it works. In addition, the names of the reviewers and a link to the review thread are provided next to every published paper.

Open peer review is one of the key principles of the ‘open science’ movement; the others being ‘open methodology’, ‘open source’, ‘open data’, ‘open access’, and ‘open education’. The essence of this system is that reviewers’ names are disclosed to authors. Open peer review systems vary in several different ways, such as whether peer review happens before or after publication; whether reviewers are blind to authors’ names; whether editors have discretion to reject papers; and whether reviewers’ comments are published alongside the article itself. Perhaps the most radical form is ‘post-publication peer review’, where open peer review takes place following instant publication. This is the system used by platforms such as F1000Research and The Winnower.

Of course, there are advantages and disadvantages to every system of peer review. For example, the main advantage of double-blind peer review is that, assuming anonymity is preserved, reviewers cannot be influenced by irrelevant characteristics of the authors themselves (e.g., race, gender, personal connections). However, an obvious disadvantage of double-blind peer review is that authors cannot hold reviewers to account for biased or incompetent reviews. The problem of unaccountability is particularly serious in ‘controversial’ areas of research because — due to the massive left-wing skew of the social sciences — papers often get rejected for ideological reasons. In addition, as the use of pre-print archives (e.g., OSF, arXiv, SSRN etc.) becomes more and more common, author anonymity will be increasingly difficult to preserve.

Since starting my DPhil in 2013, I have submitted manuscripts to more than 20 different journals, and served as a reviewer for about the same number. Based on this experience, I would say that OpenPsych ranks about average in terms of rigour. I have certainly dealt with journals where the review system was more rigorous than at OpenPsych, but I have also dealt with journals where it was less rigorous. For example, one of my papers in a ‘mainstream’ journal was accepted for publication by a single reviewer following a single round of review. Hence even if you remain unimpressed by the quality of the reviews at OpenPsych, dismissing them as “pseudojournals” would imply doing the same to a substantial number of ‘mainstream’ journals as well.

Finally, we wrote an Editorial responding to many of the specific criticisms of OpenPsych, but almost nobody has attempted to engage with it.

9. But aren’t all the reviewers at OpenPsych “hereditarians”?

The term ‘hereditarian’ presumably refers to someone who believes that genes make a non-zero contribution to group differences in intelligence. This does not seem to me to be an unreasonable scientific position. The alternative is to believe that genes do not make any contribution to group differences in intelligence. Note that ‘hereditarian’ does not mean someone who believes that genes explain 100% of all group differences (a highly untenable position). Neither Charles Murray and Richard Herrnstein, nor Philippe Rushton and Arthur Jensen, believed that.

I am not aware of how many of the reviewers at OpenPsych are “hereditarians”, but even if all of them were, I would not consider this to be a particularly egregious failing on the part of the journals. It would simply mean that every reviewer believes that genes explain some proportion of group differences in intelligence that is greater than zero. And it would not prevent there from being considerable diversity of opinion as to the extent of that genetic contribution. For example, some reviewers might believe it was closer to 10%, while others might believe it was closer to 50%.

Expert surveys reveal that a substantial percentage of individual differences researchers are “hereditarians” in the sense given above (i.e., believing that genes explain >0% of group differences in intelligence). Hence it would not be particularly remarkable if most of the reviewers at OpenPsych were in fact “hereditarians”.

10. But aren’t all the reviewers at OpenPsych “far-right”?

I cannot speak for every reviewer at OpenPsych, but I know that my own political views are not “far-right”. I also happen to know that the views of several other reviewers are not “far-right” either.

11. But don’t the OpenPsych journals have an incredibly controversial editor named Emil Kirkegaard?

I will let Emil Kirkegaard speak for himself. But suffice it to say that he has demonstrated competence in the area of individual differences research by publishing in a variety of ‘mainstream’ journals, including Intelligence, Journal of Individual Differences, Journal of Intelligence, Psych and Evolutionary Behavioral Sciences. I would also reiterate that it should be possible for scholars to collaborate with people who hold different views from their own.

12. Didn’t you publish a paper claiming that “racist stereotypes” are “rational”?

No. I published a paper showing that “in the UK, net opposition to immigrants of different nationalities correlates strongly with the log of immigrant arrest rates”. And I concluded by noting that the study’s findings were “consistent with a model of immigration preferences in which individuals’ expressed support or opposition to immigrants from different nationalities is informed by rational beliefs about the respective characteristics of those immigrant groups”.

It has been observed in a number of social surveys that public attitudes to immigration vary substantially across national-origin groups. This is an empirical regularity in need of explanation. One possibility is that public support or opposition to different national-origin groups is partly informed by rational beliefs about the respective characteristics of those groups. In other words, it is possible that the public are more opposed to certain national-origin groups, at least in part, because those national-origin groups have average characteristics which the public deems less desirable (e.g., higher crime rates). My study provided tentative evidence that this is the case, although it is far from definitive.

Hence I was not claiming that it is rational for people to hold “racist stereotypes”. Rather, I was using the word ‘stereotype’ (specifically, ‘consensual stereotype’) in the technical sense in which it is used in psychology, namely to refer to people’s average beliefs about the respective characteristics of different groups. Moreover, my conjecture that “public beliefs about the relative positions of different immigrants may be reasonably accurate” is hardly revolutionary, given that there is already a large literature on stereotype accuracy. According to Lee Jussim and his colleagues, “stereotype accuracy is one of the largest and most replicable findings in social psychology”.
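To make the shape of this analysis concrete, here is a minimal sketch in Python. All the numbers are invented for illustration; none of them come from my paper.

```python
import numpy as np

# Hypothetical group-level data: one entry per national-origin group.
# 'net_opposition' = % opposing immigration from that group minus % supporting it.
# 'arrest_rate' = arrests per 1,000 members of that group.
# Every figure below is invented purely for illustration.
net_opposition = np.array([12.0, -5.0, 30.0, 8.0, -10.0, 22.0, 15.0])
arrest_rate = np.array([4.0, 1.5, 9.0, 3.0, 1.0, 6.5, 5.0])

# The paper correlates net opposition with the *log* of the arrest rate,
# which compresses the differences among high-rate groups.
r = np.corrcoef(net_opposition, np.log(arrest_rate))[0, 1]
print(f"group-level correlation: r = {r:.2f}")
```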

13. Referring to your paper, didn’t an “external reviewer” claim that “research this bad should never be published in any form”?

The person who made that claim was not an “external reviewer” (I’m not even sure what that would be), but rather a Professor of Geography at McMaster University, who wrote about my paper on his personal blog (which he is perfectly entitled to do). Professor Yiannakoulias largely misunderstood the analysis, and I responded to his criticisms more than a year ago. Moreover, as Jonatan Pallesen pointed out, Professor Yiannakoulias’ criticisms are undermined by the fact that, if true, they would apply to other analyses that he himself has carried out.

While I have no doubt that Professor Yiannakoulias is highly eminent in his field, he does not appear to possess any expertise in the relevant subject matter. By contrast, the three scholars who did review the paper have all published extensively in psychology.

14. Okay, but in your paper n = 23. I mean, come on!

This point is to a large extent already answered in my response to the McMaster Professor’s criticisms. But I will add a bit more here in the interest of public engagement. Using small samples is very common when doing aggregate-level analyses, including within the psychology of estimation and belief-formation (e.g., see Figure 2C in this paper, where n = 20). An important thing to remember is that measurement error is typically much lower when doing aggregate-level analysis, meaning that n = 23 at the aggregate level is not equivalent to n = 23 at the individual level. For example, the YouGov poll from which the group-level means in my study were computed had a sample size of n = 1,668.
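To see why, consider a small simulation (all parameters invented for illustration): each of 23 group-level data points is an average over many individual responses, so its measurement error is far smaller than that of any single response.

```python
import numpy as np

rng = np.random.default_rng(0)

n_groups = 23       # group-level observations, as in the paper
n_per_group = 500   # respondents behind each group mean (an illustrative figure)
noise_sd = 3.0      # spread of individual responses around their group mean

true_means = rng.normal(0, 1, n_groups)  # invented 'true' group-level values

# A single individual response carries the full individual-level noise...
single_responses = true_means + rng.normal(0, noise_sd, n_groups)

# ...whereas averaging n_per_group responses shrinks each group mean's
# error by a factor of sqrt(n_per_group).
group_means = true_means + rng.normal(0, noise_sd / np.sqrt(n_per_group), n_groups)

print("error SD, single responses:", np.std(single_responses - true_means).round(2))
print("error SD, group means:     ", np.std(group_means - true_means).round(3))
```

The 23 points in an aggregate-level analysis are therefore far more precise than 23 individual observations would be, which is why the effective information content is much higher than the raw n suggests.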

Moreover, if one objects to using small samples when doing aggregate level analysis, then one should dismiss a lot of cross-country and cross-regional studies too. Such a stance is not totally indefensible, but it would entail throwing out a large number of studies published in ‘mainstream’ journals.

15. Didn’t you publish a paper using data from an “Islamophobic website”?

To date, I have written three papers about Islamist terrorism: two published in OpenPsych and one uploaded to the OSF. All three of these papers utilised data from a website called TheReligionOfPeace.com (read the FAQ). I’m not exactly sure what ‘Islamophobic’ means, but it would certainly be accurate to describe the website as hostile to Islam. Yet this alone does not constitute a scientific basis for discarding data. Concerns about a given measure are grounds for comparing it with other measures (a practice known as construct validation), not for ignoring it outright. After all, the vast majority of social science is based on imperfect data.

I showed that two measures of Islamist terrorism computed using TheReligionOfPeace.com’s data were strongly correlated with two other measures computed using data from completely different sources (Europol, and the UK government’s Foreign and Commonwealth Office). Moreover, in my latest paper, I acknowledged the shortcomings of data from TheReligionOfPeace.com, and then went through the entire list, eliminating all those incidents that did not meet a strict definition of Islamist terrorism. Overall, the results I obtained were similar to those I had obtained in my original analysis: the two new measures of Islamist terrorism computed using TheReligionOfPeace.com’s data were again strongly correlated with the two other measures.
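For readers unfamiliar with construct validation, the check is simple to express in code. The sketch below correlates three hypothetical country-level measures of the same construct drawn from independent sources; all figures are invented and do not come from my papers.

```python
import numpy as np

# Three hypothetical country-level measures of the same construct,
# each computed from an independent source. All figures are invented.
website_counts = np.array([42, 3, 17, 88, 9, 25, 5, 60], dtype=float)
agency_counts  = np.array([38, 2, 20, 80, 11, 22, 4, 55], dtype=float)
office_ratings = np.array([4, 1, 2, 5, 1, 3, 1, 4], dtype=float)

# Strong pairwise correlations suggest the measures track the same
# underlying quantity, despite each source's individual imperfections.
print(np.round(np.corrcoef([website_counts, agency_counts, office_ratings]), 2))
```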

As I have noted previously, I do not consider these papers to be “Islamophobic” (please read them for yourself). Indeed, examining the relationship between the presence of Muslims and the incidence of Islamist terrorism is now a lively area of scholarly research (see the brief literature review on p. 3 of my most recent paper). Of course, it goes without saying that only a small minority of Muslims are terrorists, and not all terrorists are Muslims.

16. But surely data from an anti-Islam website would be “biased”?

I have seen this criticism made countless times, but the form of bias is never specified. For the analyses I reported, some forms of bias matter and other forms don’t. The most plausible form of bias in data from TheReligionOfPeace.com is a high rate of false positives: the compilers of the list might have been inclined to overstate the number of attacks in order to make the problem of Islamist terrorism seem worse than it really is. Yet the main purpose of my papers was to examine variation in the relative amount of Islamist terrorism across countries, not to accurately estimate the overall risk of Islamist terrorism in one particular country. (Indeed, most of the statistics I reported were standardized.) Hence, a high rate of false positives does not necessarily matter for my analyses.

By way of example, suppose you were interested in examining variation in the relative number of diagnoses of some medical condition at different hospitals. And suppose every hospital uses the same piece of medical equipment to diagnose that condition, which has a certain false positive rate greater than zero (i.e., some percentage of the time it counts healthy people as having the condition). In that case, one could still attempt to model the relative number of diagnoses across different hospitals. Such an analysis might identify predictors such as the average age of the population, the accessibility of the hospital, and the level of socio-economic deprivation.

Going back to TheReligionOfPeace.com’s data, bias due to false positives is only a problem if the magnitude of that bias is correlated with the independent variables (i.e., percentage of Muslims in the population, and measures of military intervention). Now, it is not implausible that such bias could be correlated with the independent variables. For example, countries with a higher percentage of Muslims might have a higher number of incidents of non-Islamist violence that were incorrectly classified as incidents of Islamist violence. The issue then becomes whether the magnitude of that bias is large relative to the true number of incidents. If it is reasonably small, then the measures might still provide an accurate gauge of Islamist terrorism.
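A toy simulation (with invented numbers throughout) makes the distinction concrete: false positives that are unrelated to the predictor leave the estimated relationship essentially unchanged, whereas false positives that grow with the predictor inflate it.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100  # hypothetical countries

predictor = rng.uniform(0, 30, n)                     # e.g. an invented % Muslim
true_counts = 2.0 * predictor + rng.normal(0, 10, n)  # invented 'true' incident counts

# Case A: false positives unrelated to the predictor (random over-counting).
measured_a = true_counts + rng.poisson(5, n)

# Case B: false positives that grow with the predictor (the worrying case,
# e.g. non-Islamist violence misclassified more often where the predictor is high).
measured_b = true_counts + 0.5 * predictor + rng.poisson(5, n)

for label, y in [("true counts      ", true_counts),
                 ("uncorrelated bias", measured_a),
                 ("correlated bias  ", measured_b)]:
    slope = np.polyfit(predictor, y, 1)[0]
    print(f"{label}: estimated slope = {slope:.2f}")
```

In Case A the slope stays near its true value of 2; in Case B it is inflated towards 2.5. This is exactly why the cross-source comparisons described above matter: they help establish that any bias in the website’s data is small relative to the true counts.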

As already mentioned above, I found that the measures computed using TheReligionOfPeace.com’s data were strongly correlated with two other measures computed using data from completely different sources, and that results were similar when incidents that did not meet a strict definition of Islamist terrorism were eliminated. This provides some evidence that any bias due to false positives was reasonably small.

17. But wasn’t TheReligionOfPeace.com mentioned in the manifesto of far-right terrorist Anders Breivik?

Yes, it was. But this is pure guilt-by-association, and cannot be taken seriously as a criticism. I checked Anders Breivik’s manifesto, and it also mentions The Journal of African History, Journal of Asian and African Studies, The Times, The Telegraph, The Guardian, and Wikipedia.

18. Didn’t you publish a paper in the “white supremacist” journal Mankind Quarterly?

I did publish a short comment in Mankind Quarterly, at the request of the Editor. However, I do not believe it is accurate to describe Mankind Quarterly as a “white supremacist” journal. The basis of this claim seems to be that most of the journal’s founders supported racial segregation and eugenics. Note that one of Mankind Quarterly’s founders was the fascist Corrado Gini, who invented the ‘Gini coefficient’, an extremely important concept in the analysis of income inequality. Another of the journal’s founders was the segregationist Henry Garrett, who was also President of the American Psychological Association (APA). By the logic of my critics, this would imply that the Gini coefficient is a “white supremacist” concept and that the APA is a “white supremacist” organisation.

Interestingly, a number of prominent scientific journals and scholarly societies were originally devoted to eugenics. For example, the journal Social Biology was originally called Eugenics Quarterly. The Galton Institute was originally known as the Eugenics Society. And The Society for Biodemography and Social Biology was originally known as the American Eugenics Society. Even the American Sociological Review ran an article titled ‘Development of a Eugenic Philosophy’ back in 1937. While I certainly do not consider myself a “eugenicist”, one should also be aware that ‘eugenics’ encompasses a range of interventions, including some that most people would not consider coercive at all.

The ‘About’ section on Mankind Quarterly’s website makes no mention of “white supremacy”. While I cannot rule out that some of the journal’s recent contributors hold such views, it is noteworthy that the advisory board includes scholars from Egypt, Libya, Saudi Arabia, China, Taiwan, Malaysia, and Japan. Hence it would be rather surprising if members of the advisory board were seeking to promote “white supremacy”. Moreover, even if the journal could be described as having a ‘political slant’ (something I imagine the editors would strongly contest), there are many ‘mainstream’ journals with overtly political mission statements: Dialectical Anthropology, Capital & Class, Review of Radical Political Economics, etc.

Furthermore, a number of highly distinguished scholars have published in Mankind Quarterly, including Ian Deary and James Flynn. Note that Professor Flynn is one of the foremost proponents of the view that environmental factors are sufficient to explain the black/white IQ gap. As a matter of fact, the journal’s openness to non-hereditarian perspectives is illustrated rather well by the content of the issue to which I contributed.

19. Haven’t certain fringe media outlets cited your papers?

Yes, I believe that is correct. However, there is generally no way of stopping such outlets from citing one’s work, and criticising me on this basis is tantamount to guilt-by-association. As a matter of fact, fringe media outlets frequently cite all kinds of scholarship, including papers published in leading journals like Science and Nature. Note that my work (which covers many different topics) has also been covered favourably by left-wing media outlets, such as The Guardian.

20. Okay, perhaps you’re not a “racist pseudoscientist” then. So why did St Edmund’s College fire you?

As noted above, I will address that question in due course. But for the moment, I am still receiving legal advice. If you’re an academic, you can support me by signing this statement in defence of academic freedom.
