Implicit Bias or Bias in Social Psychology?

1. Introduction

This essay will explore a case study that I will argue exemplifies an area of scientific practice being influenced by contextual values. By contextual values I mean the various biases, interests, and ideological and political commitments that individual scientists harbor. Specifically, I will be looking at the research program in social psychology that has aimed to develop and empirically test the psychometric instrument known as the ‘Implicit Association Test’ (henceforth, the IAT), which purports to uncover the unconscious racial biases of individuals (among other unconscious biases). My primary argument will be that, for a time, contextual values allowed the IAT to gain acceptance within social psychology despite the instrument being, as I contend, evidentially impoverished and ultimately fatally flawed. Beyond the credibility process of scientific practice, I will also argue that contextual values have affected the discovery process via at least one of the IAT’s primary advocates.[1] From an ethical point of view, the intrusion of contextual values into this area of research is all the more troubling given the widespread use of the IAT not only by a vast number of individuals but also by a large number of organizations in the wider society—and all, seemingly, without any protestation from its chief advocates. To put the matter bluntly, it can be seen as a form of ethical malpractice to state or imply that a psychological instrument does X—in this case, diagnose whether and to what extent an individual harbors implicit racial bias—when in fact it cannot. Before making that case, however, I will very briefly survey the IAT and its critical flaws.


2. Assessing the IAT

The IAT’s chief architects are the psychologists Mahzarin Banaji and Anthony Greenwald, who unveiled it in 1998 as a scientific instrument that allegedly could assess the degree to which an individual harbored any of a number of implicit biases (Greenwald, Nosek & Banaji, 2003). Most researchers who have utilized the IAT to study implicit bias have tended to focus on racial implicit bias in particular (usually pertaining to blacks and whites). Very generally speaking, the IAT attempts to generate an indicator of one’s supposed implicit bias by assessing the degree to which a test-taker is quicker to associate a positive term, such as ‘good’ or ‘nice’, with pictures of white faces than with pictures of black faces, and the degree to which a test-taker is quicker to associate a negative term, such as ‘bad’ or ‘scary’, with pictures of black faces than with pictures of white faces (in the experimental setup of the IAT, faces and words are displayed simultaneously). The IAT also makes for an interesting case study given its wide recognition among the general public, with over 17 million unique test sessions having been completed at the test’s official website (Project Implicit, 2017). As alluded to, the IAT is designed to measure implicit bias with regard to a number of different targets (e.g., elderly people, racial minorities, overweight individuals), but most of the research focus and public attention has been directed at the racial variant of the test, and, unless stated otherwise, my discussion will be exclusively about the racial version.

To understand why the IAT is manifestly not a good measure of what it purports to measure—namely implicit bias—we need to examine how it fares in relation to two key concepts: reliability and validity (Kalat, 2014, ch. 9).[2] Psychometrically, the concept of reliability, broadly and roughly speaking, refers to the degree to which a given measurement instrument yields similar measurements when administered on separate occasions—for instance, when the test is administered to the same individual, say, two hours apart, four weeks apart, one year apart, and so on. The concept of validity, on the other hand—and again, broadly and roughly speaking—refers to how well the test in question can account for or predict what it alleges to account for or predict. So, if a large number of people are administered a given test instrument on many different occasions, the test is said to be reliable to the extent that individuals’ results are similar on all of the occasions tested, and unreliable to the extent that individuals’ results vary between the occasions tested. And a given test instrument is held to be valid to the extent that it can account for or predict the phenomenon it purports to account for or predict.
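The notion of test–retest reliability can be made concrete with a toy computation. The sketch below estimates reliability as the Pearson correlation between two administrations of the same test; the scores are invented for illustration, not real IAT data.

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length score lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Invented scores for five test-takers on two occasions two weeks apart.
session_1 = [0.62, 0.35, 0.48, 0.10, 0.55]
session_2 = [0.50, 0.45, 0.20, 0.30, 0.40]

r = pearson_r(session_1, session_2)
# A value near 1.0 would indicate high test-retest reliability;
# values well below ~.80 are conventionally considered inadequate.
print(round(r, 2))  # prints 0.37
```

A test-taker whose scores jump around this much between sessions is, by definition, being measured unreliably, which is the situation the reliability figures discussed below describe.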

We can now ask how the IAT fares in relation to these two critical features. As it turns out, the IAT performs abysmally. Although there is an involved and technical back-and-forth between proponents and detractors in the peer-reviewed literature, for our purposes we can zoom in on some of the key recent overviews and meta-analytic studies of both the reliability and validity of the IAT. For instance, in a review of the literature co-authored by Banaji and Greenwald, the IAT was reported as having a reliability ranging from .32 to .65 (Lane, Banaji, Nosek & Greenwald, 2007). In particular, the reliability was .32 when comparing tests taken two weeks apart (four tests in total), .65 when comparing tests taken twenty-four hours apart (two tests), and .39 when comparing tests taken within a single testing session (two tests). Bar-Anan and Nosek (2013) likewise reported an IAT reliability of .40. With these results in view, it is important to ask what the reported values say about the reliability of the IAT itself. Broadly speaking, a test instrument in psychological science is generally regarded as acceptable if it reaches a reliability of approximately .80 (although this value can vary). Clearly, none of the best and most comprehensive estimates of the IAT reach that threshold. Indeed, there is wide divergence between the highest and lowest values reported, and, rather anomalously, reliability is significantly higher between tests taken twenty-four hours apart than between tests taken within a single session.

With regard to validity, Greenwald, Poehlman, Uhlmann, and Banaji (2009) reported that IAT scores accounted for approximately 5.5 percent of the discriminatory behavior measured in the lab. A separate meta-analysis disputed this figure, arguing it was an overestimate predicated on a methodological error and should be adjusted downward (Oswald, Mitchell, Blanton, Jaccard & Tetlock, 2013). In any case, such a very low figure is noteworthy for at least three reasons. Firstly, it pales in comparison to how powerful a predictor of discriminatory behavior the IAT’s proponents have alleged it to be. Secondly, even finding correlations between IAT scores and discriminatory behavior within a lab setting leaves open whether such correlations account for any discriminatory behavior in real-world contexts. And thirdly, as a matter of rudimentary statistical reasoning, it remains the case, even within those lab settings, that correlations do not necessarily entail causal processes (the correlations could potentially be accounted for by other variables once identified and controlled for). Just as noteworthy, even the main advocates of the IAT have recently conceded that it is not reliable as a measure of an individual’s implicit bias, and hence shouldn’t be used as such (Greenwald, Banaji & Nosek, 2015).
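To see why 5.5 percent is so small: predictive validity is often expressed as the squared correlation (r²) between test scores and the behavioral criterion, i.e., the proportion of variance in behavior the test accounts for. The sketch below simply performs that conversion; reading the 5.5 percent figure as the r² of a correlation near .235 is my assumption for illustration, not a detail taken from the meta-analysis itself.

```python
def variance_explained(r):
    """Proportion of criterion variance accounted for by a predictor that
    correlates r with the criterion (the coefficient of determination)."""
    return r ** 2

# A criterion correlation of roughly .235 corresponds to about 5.5 percent
# of the variance in measured behavior being accounted for.
print(round(variance_explained(0.235) * 100, 1))  # prints 5.5
```

The conversion also runs the other way: even a correlation that sounds respectable in isolation shrinks considerably once squared into variance-explained terms.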


3. A Case for Political Bias

This last concession turns out to be important. For, despite the terrible psychometric and empirical standing of the IAT, its two chief proponents, Greenwald and Banaji, still appear to persist in advocating for its utility. As an elementary matter of scientific ethics, it is troubling that the IAT, despite its inability to indicate anything about an individual’s proclivity to behave in a racially discriminatory manner, is nonetheless still available for public use at the Project Implicit website. It stands to reason, then, that test-takers might be led to believe that the test results do indicate something about their unconscious capacity to act in biased, discriminatory ways toward black people, for example (including, even, black people interacting with other black people). One way of assessing whether the researchers’ choice to leave the IAT accessible to the public is a choice contaminated by contextual values is to evaluate it in light of a thought-experiment proposed by Tetlock (1994) (who, incidentally, is one of the IAT’s most active critics): the ‘turnabout test’. For our purposes, we can use the turnabout test to ask the following question: Would the community of psychologists find it acceptable if a psychometric instrument widely viewed as concordant with and advancing a politically conservative agenda, after having been shown to be as badly supported as the IAT, were nonetheless still accessible to the public, widely used by various organizations, and still advocated for by its originators? I will leave it to the reader to answer this query honestly. To help put the thought-experiment in context, however, one should be aware of the ideological breakdown of current psychologists.
As it turns out, psychologists overwhelmingly report being left-of-center politically—i.e., liberal, progressive, socialist—while very few report being, say, libertarian, and vanishingly few report being politically conservative (although there is some reason to think the number of libertarians and conservatives is underestimated, given pressures not to ‘out’ oneself as such) (e.g., Inbar & Lammers, 2012; Duarte et al., 2014). One might plausibly argue that if the field were (counterfactually) dominated by libertarians and conservatives, there might be a symmetric tendency to worry less about empirically unsupported test instruments being administered and widely applied in the way the IAT apparently is, so long as the instruments in question were perceived as friendly to a libertarian or conservative agenda. Assuming this kind of symmetry in the way political values might govern the tendency to go easy on test instruments (and theories, etc.) that broadly support, or are consistent with, the political views of a field’s regnant majority, the point may also serve as an argument for aspiring to incorporate more political and ideological diversity into the field. Of direct relevance here are arguments advanced by feminist philosophers such as Longino (1990), Harding (1991), and others to the effect that the processes of both scientific discovery and credibility are substantively enhanced, in general, to the extent that a plurality of perspectives is reflected among scientific practitioners. Indeed, Duarte et al. (2014) provide cogent illustrations of this perspective in a recent paper arguing precisely for it within social psychological science.
Specifically, given the current domination of psychology by those with left-of-center political stances, they argue that the discovery and credibility dimensions of scientific practice would benefit greatly if more viewpoint (i.e., political) diversity were incorporated into the discipline.

As applied to the current case study of the IAT, one could plausibly argue that increased political diversity in psychology might well have caught the troubles with the IAT at a much earlier stage of the credibility process. That is to say, had there been more psychologists with libertarian and conservative political views, the IAT might have been scrutinized more rigorously and critically by journal editors, peer-reviewers, Banaji and Greenwald’s colleagues and students, and so forth; because the IAT is widely construed as friendly to and advancing a progressive political agenda, it appears instead to have been evaluated less stringently during the credibility process. In any case, we might surmise that the internal error-detecting and error-correcting mechanisms of the scientific credibility process were delayed in being fully brought to bear against the IAT, by dint of the relatively few non-liberals and non-progressives in psychology—although it should be pointed out that seemingly none of the IAT’s critics are self-identified conservatives.

Apart from the credibility process, however, it is evident that the major advocates of the IAT, particularly Banaji and Greenwald, were quite vocal champions of it long before adequate evidence of its reliability or validity had been demonstrated (e.g., Singal, 2017). Now, it is important to be fair and charitable to Banaji and Greenwald, and thus not to impugn their motives without good evidence that they championed the IAT at least partly for political reasons. Although I will not draw a firm conclusion on whether either or both of them were so motivated, I will at least offer some reasons to think it may be the case. To examine this possibility, I will deploy the argument structure used by the philosopher of science Sesardic (2005), who, in a nutshell, attempts to ground claims of politically motivated scientific error by looking to statements made by the scientists who committed the errors.[3] A clear virtue of this approach is that it sidesteps the tricky and morally fraught practice of baselessly accusing people of politically motivated bias, that is, of making accusations without evidence. In trying to ascertain whether a scientist has committed an error due to political bias, Sesardic (2005) suggests the strategy of trying to verify the following:

(1) that scientist X who is accused of a mistake did actually commit the mistake, (2) that the mistake is a serious blunder, rather than one of those bona fide errors that are expected to happen sporadically in the course of normal scientific work, (3) that X also had a particular political attitude, and (4) that the mistake was really due to the influence of that political attitude. (p.186)

So far as trying to verify steps (3) and (4), one suggestion offered by Sesardic is to look for clear statements made by a given scientist that betray one (or more) of their political beliefs, and then additionally show that their error was influenced by that belief. Most plainly, this can be revealed by statements indicating that the scientist’s promulgation of an erroneous view is connected in some way to that political belief.

Since we have seen that the IAT is unfounded as an indicator of an individual’s unconscious bias, we can look to statements made by Banaji after it had become clear that the IAT was unreliable and not a valid predictor of bias, and after she had conceded as much, in order to test whether her scientific position is in any way influenced by her political beliefs. To begin with, we can note that in an interview published in October 2014, she declared that “I was raised to be a progressive” (Philip, 2014). This is a plausible, though fallible, indicator of her current political orientation as well. In another statement made to a journalist, published in January 2017, in the context of replying to some of her academic critics, she states:

There’s too much interesting stuff to do and too many amazing people doing it for me to justify worrying about a small group of aggrieved individuals who think that Black people have it easy in American society and that the IAT work might make their lives easier. (Singal, 2017)

This clearly indicates that Banaji believes the IAT might make the lives of black Americans easier—a sentiment that connects with her progressive political orientation. So, given everything up to this point, we can plausibly but fallibly (!) infer the following: Banaji is politically progressive; she is of the mind that the IAT might make the lives of black Americans easier; she publicly championed the IAT (along with her colleague Greenwald) long before adequate evidence of its reliability and validity was in hand; and, even after the IAT has been shown to rest on unsupportable foundations, and even after having personally conceded as much, she still advocates for its social utility. From this, we can infer, again fallibly, that she may have been led to commit a scientific error in a way that was connected to her progressive political beliefs. I should add that, in fielding such a case against Banaji, I am in no way impugning her political beliefs, or the idea that discrimination on the basis of arbitrary characteristics is immoral.

To sum up, this essay has tried to make the case that contextual values have intruded into both the discovery and credibility processes of science in relation to the IAT. With regard to the credibility process, it appears as if the overwhelmingly left-of-center political views of social psychologists allowed the IAT to be scrutinized less intensely, quite plausibly because the IAT is generally viewed as supporting and advancing a politically progressive agenda. With regard to the discovery process, a reasonable but fallible case can be made that the IAT was developed, championed, and is still supported by one of its originators, Mahzarin Banaji, at least partly because of her political beliefs, and despite adequate evidence of its reliability and validity never having been produced.


[1] See Grinnell (2009) for an exposition of the discovery and credibility processes of scientific practice.

[2] My analysis of the IAT draws from Singal (2017).

[3] In his case, Sesardic (2005) asks how we might detect political motives among scientists, philosophers of science, and others when it comes to debates in behavior genetics, including the one between ‘hereditarians’ and ‘environmentalists’ with regard to racial differences in general cognitive ability.



Bar-Anan, Y., & Nosek, B. (2013). A comparative investigation of seven indirect attitude measures. Behavior Research Methods, 46(3), 668-688.

Duarte, J., Crawford, J., Stern, C., Haidt, J., Jussim, L., & Tetlock, P. (2014). Political diversity will improve social psychological science. Behavioral and Brain Sciences, 1-54.

Greenwald, A., Banaji, M., & Nosek, B. (2015). Statistically small effects of the Implicit Association Test can have societally large effects. Journal of Personality and Social Psychology, 108(4), 553-561.

Greenwald, A., Nosek, B., & Banaji, M. (2003). Understanding and using the Implicit Association Test: I. An improved scoring algorithm. Journal of Personality and Social Psychology, 85(2), 197-216.

Greenwald, A., Poehlman, T., Uhlmann, E., & Banaji, M. (2009). Understanding and using the Implicit Association Test: III. Meta-analysis of predictive validity. Journal of Personality and Social Psychology, 97(1), 17-41.

Grinnell, F. (2009). Everyday practice of science. Oxford: Oxford University Press.

Harding, S. (1991). Whose science? Whose knowledge? Ithaca, NY: Cornell University Press.

Inbar, Y., & Lammers, J. (2012). Political diversity in social and personality psychology. Perspectives on Psychological Science, 7(5), 496-503.

Kalat, J. (2014). Introduction to psychology (9th ed.). Belmont, CA: Wadsworth Cengage Learning.

Lane, K. A., Banaji, M. R., Nosek, B. A., & Greenwald, A. G. (2007). Understanding and using the Implicit Association Test: IV. In B. Wittenbrink & N. Schwarz (Eds.), Implicit measures of attitudes (pp. 59-102). New York: Guilford Press.

Longino, H. (1990). Science as social knowledge. Princeton, N.J.: Princeton University Press.

Oswald, F., Mitchell, G., Blanton, H., Jaccard, J., & Tetlock, P. (2013). Predicting ethnic and racial discrimination: A meta-analysis of IAT criterion studies. Journal of Personality and Social Psychology, 105(2), 171-192.

Philip, J. (2014). Mahzarin Banaji – Zooming in on blind spots. Live Mint. Retrieved 12 August 2017, from–Zooming-in-on-blind-spots.html

Project Implicit. (2017). Retrieved 12 August 2017, from

Sesardic, N. (2005). Making sense of heritability. Cambridge: Cambridge University Press.

Singal, J. (2017). Psychology’s Favorite Tool for Measuring Racism Isn’t Up to the Job. Science of Us. Retrieved 12 August 2017, from

Tetlock, P. (1994). Political psychology or politicized psychology: Is the road to scientific hell paved with good moral intentions? Political Psychology, 15(3), 509.


Immortality: Duplicating Consciousness Edition

There’s a literature in analytic philosophy on the metaphysics of identity relevant to the following question: What happens when you manage to completely duplicate the information comprising a human’s brain and body? 

The central question interrogated in that literature is whether some particular entity—a given man or woman, say—is or is not the same thing after a change of some sort: the passage of time, a physical alteration, and so forth. For my purposes here, however, that isn’t quite what I’m asking. Rather, my question is specifically about one’s stream of consciousness and what might happen if we were able to duplicate it. I’ll refine the question after a few more preliminary remarks.

At root, I want to briefly pursue what is to my mind the most plausible position on this question and use it to assess a particular idea raised in chatter about the so-called ‘singularity’ in futurologist circles (and elsewhere). The idea I want to examine alleges that immortality is in principle possible, because it is in principle possible to duplicate the information that underlies one’s consciousness. Indeed, according to many futurologists, our civilization—or one of its offshoots, in whatever form it takes—will likely, if not inevitably, develop the scientific and technological know-how to perform such duplication and hence allow for immortality.

The philosopher Nick Bostrom gives a brief outline of the basic components of ‘uploading’ one’s identity. With that outline in the background, we can pose the question directly:

So, if all the information comprising an individual were suddenly duplicated perfectly in some physical medium that could completely capture their dynamical informational pattern, what would happen to their stream of consciousness?

Would it also duplicate into a second, (instantly) diverging stream? And if so, does that mean that all streams of consciousness have a unique subjectivity that is intrinsically connected to the unique spatiotemporal patterns and unique matter and energy that actually instantiate those streams of conscious subjectivity? If so, then seemingly all instantiations of subjectivity are unique existences—unique ‘points of view’ (to borrow Thomas Nagel’s concept) that cannot be duplicated. True, a perfect duplication of all of the information that physically instantiates a given person’s consciousness would be a perfect duplication of virtually everything about them, such as their memories, personality dispositions, and even their phenomenology. Yet, it seems as if the duplicated consciousness’s ‘point of view’—which is a feature seemingly inherent to conscious experience in general—would not feel as if it was continuous with the stream of consciousness from which it was duplicated. On this view, in other words, if your consciousness was perfectly duplicated in some other physical entity, such as in a cyborg of some sort, then although the cyborg might in virtually every way be virtually identical to ‘you’, its conscious experience would not feel as if it was continuous with ‘yours’.

The foregoing take seems to suggest the following upshot: the complete preservation of psychological continuity is not sufficient to sustain the same unitary conscious subjectivity—the very same ‘point of view’ of one’s particular conscious experience. Rather, the view suggests that the particular physical pattern spatiotemporally instantiating the information subserving one’s consciously experienced subjectivity is necessary and sufficient to sustain the feeling of being the same unitary consciousness over time.

It also seems complemented by the view that the integrity of a specific kind of functional organization is the necessary and sufficient condition for instantiating the living substrate upon which conscious subjectivity crucially depends. The corollary is that a certain amount of entropy defines the threshold at which that functional organization can no longer sustain the life processes of an agent and, ergo, its consciousness (when life-sustaining processes cease, consciousness is likewise extinguished). This point about entropy’s relation to life-sustaining processes underscores the emphasis on the particular physical pattern that ultimately subserves the particular point of view of conscious experience—the point of view that feels like it persists as one and the same over time. Accordingly, it emphasizes that, insofar as one wants to sustain the feeling of being the same conscious subjectivity over time, one must focus directly on the exact dynamics of the spatiotemporal trajectory of the matter and energy constituting that particular conscious subjectivity.

If our analysis is correct, it seems to put the kibosh on one day successfully uploading or copying one’s consciousness and living forever, in the sense that one’s consciousness would feel like it was persisting as the exact same conscious subjectivity, only now as the uploaded information pattern. And, I suspect, this is the critical feature that most people would care most deeply about so far as the prospect of duplicating consciousness and identity is concerned.

The view I’ve been putting forth is at loggerheads with the scenario depicted in the film Transcendence, where Johnny Depp’s character, Dr. Will Caster, is seemingly the same unitary subjective consciousness after being informationally uploaded as he was prior to it, while still ‘in’ his physical body. Stanisław Lem’s science fiction novel Solaris, on the other hand, seems to come close to, if not converge upon, the view that duplication is not tantamount to feeling as if one is the same subjectivity as the entity from which one was duplicated. In one scene from Steven Soderbergh’s 2002 film version of the novel (roughly 30:40–37:20), Rheya (played by Natascha McElhone), created as a physical clone of the deceased Rheya by the mysterious planet Solaris, asserts that although she has various memories (of her deceased incarnation), she has no feeling of ever “being there” and “experiencing those things”. This, I think, is very similar to the idea of not being the same subjective, conscious point of view after duplication as before: the instances before and after duplication are entirely distinct streams of consciousness, two entirely distinct points of view.

Do Poverty, Inequality, and Gun Prevalence Cause Crime?


Two bromides in discussions of crime are that poverty and inequality are important contributors. Thankfully, we can put aside speculative chatter and explore the issue in quantitative and scientific terms. Most of what I say on this issue will draw heavily from the excellent dissection of it by The Alternative Hypothesis—Sean Last, in particular—so I extend one big hat tip in that direction (and I’d highly recommend bookmarking that site in general, though it is certainly not politically correct!).

First off, we can establish that poorer folks, relative to richer ones, are indeed more likely to be criminals. However, let’s look at the association specifically between poverty and inequality on the one hand, and crime on the other. The results of a large meta-analysis of 45 studies from the relevant peer-reviewed literature by Vieraitis can be seen in the charts below:



Results of an even larger meta-analysis on the associations between crime on the one hand, and median income, poverty, income inequality, and unemployment rates on the other, by Ellis, Beaver, and Wright can be seen below:


Chiricos’ meta-analysis of 288 studies looking at the correlations between crime and unemployment is depicted in this chart:


Now, let’s take a look at Pratt and Cullen’s meta-analysis, which addresses the important issue of effect size. In 153 studies, they found a mean effect size of poverty on crime of .253, with 59% of all results being statistically significant. In 167 studies, the mean effect size of inequality on crime was .207, with 55% of the results being statistically significant. And in 204 studies, the mean effect size of unemployment on crime was .135, with 44% of those results being statistically significant. Nivette also performed a meta-analysis of 37 studies, finding the mean effect size of national wealth on crime to be -.055 (note the minus sign) and not statistically significant; the mean effect size of income inequality on crime to be between .224 and .416 (contingent on how income inequality was measured), with both values reaching statistical significance; and the mean effect size of unemployment on crime to be .043 and not statistically significant (though this particular mean effect size was gathered from only four studies).

At this point, we would be remiss not to at least mention Hsieh and Pugh’s meta-analysis of 34 studies. Those researchers found positive correlations between poverty and violent crime (save for 2 of the 78 relationships examined), with an average correlation of .44. Oddly, they report the average correlation between inequality and crime as the very same value, .44. It is difficult to know what to make of their meta-analysis, but the larger, more recent ones surveyed above present a very different picture. And Hsieh and Pugh themselves note the inconsistent findings of literature reviews on the relation between economic variables and crime performed in the late 70s and early 80s. In any case, in view of the other meta-analyses, it stands to reason that Hsieh and Pugh’s meta-analysis is, for some unknown reason, an aberration.

Before we proceed further, note that, conventionally speaking, .2 is considered the threshold for weak effects, .5 for moderate ones, and .8 for strong effects. Given this, the striking thing to observe among the results surveyed above is just how many of the mean effect sizes are weak or insignificant.
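To make that pattern explicit, the sketch below labels the mean effect sizes surveyed above against those conventional cutoffs. The cutoffs and effect sizes are taken from the text; the helper function itself is just an illustrative sketch.

```python
def label_effect(es):
    """Classify an effect size against the conventional .2 / .5 / .8 cutoffs."""
    magnitude = abs(es)
    if magnitude < 0.2:
        return "below even the weak threshold"
    if magnitude < 0.5:
        return "weak"
    if magnitude < 0.8:
        return "moderate"
    return "strong"

# Mean effect sizes reported in the meta-analyses discussed above.
effects = {
    "poverty (Pratt & Cullen)": 0.253,
    "inequality (Pratt & Cullen)": 0.207,
    "unemployment (Pratt & Cullen)": 0.135,
    "national wealth (Nivette)": -0.055,
    "unemployment (Nivette)": 0.043,
}
for name, es in effects.items():
    print(f"{name}: {es:+.3f} -> {label_effect(es)}")
```

Run this way, none of the surveyed means rises above the weak band, and several fall below even the .2 threshold.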

Moving on, let’s look at these relevant variables (poverty, income inequality, and unemployment) plotted across time, and in relation to crime.

Violent Crime and Poverty

Property Crime and Poverty

(Poverty rates drawn from the United States Census Bureau; crime rates drawn from the Department of Justice’s Uniform Crime Reporting Program.)


Surprisingly, poverty and crime are actually negatively correlated longitudinally. In other words—and as counterintuitive as it will sound to many—as poverty rose, crime declined, and as poverty fell, crime rose. Crime went down during the Great Depression, for example—not what one would have predicted from the thesis that poverty causes crime.
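A minimal sketch of the kind of check behind that claim: given two annual series, the sign of their covariance tells you whether they move together or in opposite directions. The numbers below are invented stand-ins, not the actual Census Bureau or Uniform Crime Reporting figures.

```python
def sign_of_association(xs, ys):
    """Return +1 if the two series tend to move together, -1 if they move
    in opposite directions, 0 if the covariance is exactly zero."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / n
    return (cov > 0) - (cov < 0)

# Invented illustrative series: poverty falling while violent crime
# rises over the same six years.
poverty_rate = [15.1, 14.8, 14.5, 13.5, 12.7, 11.8]   # percent
violent_crime = [370, 375, 383, 390, 394, 399]        # per 100k population

print(sign_of_association(poverty_rate, violent_crime))  # prints -1
```

A negative sign here is exactly the longitudinal pattern described above: the two series run in opposite directions.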

An analysis of 8 studies by Ellis, Beaver, and Wright also found inconsistent longitudinal relationships between the economy and unemployment on the one hand, and crime on the other:



Of 35 studies examining the longitudinal relationship between income inequality and crime, Rufrancos and colleagues found only 60% to report correlations that were both positive and statistically significant.

The longitudinal data plotted in the following graphs also appears to be at loggerheads with the hypothesis that income inequality is causally implicated in crime.



(Data in the murder graph is taken from the FBI, the Office for National Statistics (UK), and the UN; data in the wages graph is taken from the US Bureau of Labor Statistics.)

As we can see, the crime rate in the US has been moving downwards even as income inequality has been rising over the same period. The trend line for crime thus runs in the direction opposite to that predicted by the inequality-causes-crime thesis.

Overall, the foregoing isn’t the picture one might expect to see if there were actually a strong and robust causal connection between these economic variables and crime. Indeed, even insofar as there are any correlations between crime and the various economic variables we have been considering, all of the usual caveats are in play, including the direction of causality. That is to say, even if, for argument’s sake, we were to grant a strong, robust correlation between crime and any or all of the economic variables we’ve considered, it would still be an open question whether the economic variable in question was causally related to crime, or whether crime was causally related to the economic variable. In other words, in asking about the direction of causality here, we are effectively asking, ‘Does poverty/inequality/unemployment (at least partly) cause crime, or does crime (at least partly) cause poverty/inequality/unemployment?’ In sum, ascertaining the direction of causality, even on the assumption that there is some sort of causal connection, is quite tricky.

Now, perhaps we should remind ourselves that, given the weight of the evidence, the effect sizes of the various economic variables in cross-sectional studies of criminality do not appear to be very large in the first place. And, as per usual, there is no guarantee that those effect sizes would not shrink or even disappear completely after controlling for various other variables—particularly the sorts that criminologists do not typically control for, such as IQ, or even molecular-genetic differences (i.e., allelic variants). (IQ, for instance, is indeed associated with criminality—see here and here.)

However, we can attempt to zoom in on the question in a more fine-grained manner by considering one ingenious study that was conducted recently by Amir Sariaslan and colleagues, and which demonstrates how careful study design and controlling for the right variables can move us in the direction of ascertaining causality:

“In 2014 came the final nail in the coffin to the “poverty causes crime” thesis. A Swedish study conducted by Amir Sariaslan was published which—for the first time—tested directly whether growing up in poverty directly contributes to crime, or whether there are other factors about the kinds of families which tend to end up poor which also cause them to breed crime. What made Sariaslan’s study uniquely insightful was the decision to take families which rose out of poverty, and compare the lives of children born and raised within those families before their rise from poverty with the lives of children born and raised within those same families after their rise from poverty. The conclusion his research came to? “There were no associations between childhood family income and subsequent violent criminality and substance misuse once we had adjusted for unobserved familial risk factors.” Sariaslan’s study, in other words, had proven that growing up in poverty is not what creates one’s adult likelihood of committing violent crimes. Children who grow up in previously–poor families have exactly the same likelihood of committing crimes as children who actually grow up poor. The only conclusion we can soundly come to is that something else about poor families other than poverty itself must explain why their children go on to commit crimes.” [Source]

Such a finding counts tellingly against the hypothesis that poverty is causally implicated in criminality; assuming it is robust, it is, needless to say, very revealing.

Relatedly, it might be alleged that various features of socioeconomically deprived neighborhoods are causally implicated in crime. Another study by Sariaslan and colleagues addressed this question, and their results interestingly did not bear out this hypothesis. Specifically, the researchers found that any association between neighborhood deprivation-related variables on the one hand, and crime (but also substance misuse) on the other, disappeared once both observed and unobserved familial confounders were controlled for. It is worth quoting their main findings at length:

“General neighbourhood effects are presented in Table 3. The crude models suggested that 12.2% and 4.2% of the variance in violent criminality and substance misuse, respectively, were attributable to the neighbourhood context. The adjusted models markedly reduced these effects to 1.8% and 1.9%, respectively, indicating that substantial proportions of the attributed variances came from characteristics of the individuals living in the neighbourhood contexts rather than from context-specific factors. In stark contrast, the family context proved to be highly influential, accounting for 30.1% and 22.8%, respectively, of the variances in violent criminality and substance abuse in the adjusted models.

The measure of neighbourhood deprivation was associated with the outcomes of both violent criminality and substance misuse in the total population sample (Table 4). An increase of 1 SD in the neighbourhood-deprivation score was associated with a 57% increase in the odds of being convicted of a violent offence. When we adjusted for observed confounders, the association was considerably attenuated (OR: 1.09; 95% CI: 1.06-1.12). In the final step, we adjusted for unobserved confounders within nuclear families and the association disappeared (OR 0.96; 95% CI 0.83–1.11). To obtain converging evidence about the validity of our results, we additionally studied the association within extended families among biological full cousins (N=169 254), and found that the results remained intact (OR 1.03; 95% CI 0.93–1.13).

An increase of 1 SD in the neighbourhood-deprivation score was associated with a 31% increase in the odds of engaging in substance misuse. The association disappeared, however, in the adjusted model (OR 0.98; 95% CI 0.96–1.01), indicating that the effect of the contextual exposure was confounded by family-level SES.”
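The odds-ratio arithmetic in the quoted passage can be made explicit with a short sketch. The figures are the ones Sariaslan and colleagues report above; the helper functions and their names are my own illustration:

```python
# A "57% increase in the odds" per 1 SD of neighbourhood deprivation
# corresponds to an odds ratio of 1.57. For an odds ratio, the null
# value is 1.0, so a 95% CI that straddles 1.0 means the association
# is not statistically significant at that level.
def odds_ratio_from_pct_increase(pct):
    return 1.0 + pct / 100.0

def ci_excludes_null(lower, upper, null_value=1.0):
    return not (lower <= null_value <= upper)

# Crude association for violent criminality: 57% increase in the odds.
print(round(odds_ratio_from_pct_increase(57), 2))  # 1.57

# Adjusting for observed confounders: OR 1.09, 95% CI 1.06-1.12.
print(ci_excludes_null(1.06, 1.12))  # True: still (barely) significant

# Adjusting for unobserved familial confounders: OR 0.96, 95% CI 0.83-1.11.
print(ci_excludes_null(0.83, 1.11))  # False: the association disappears
```

The progression from a significant crude association to a null adjusted one is exactly what the quoted passage describes as the effect "disappearing".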


In America, the discourse on gun control tends to be highly polarized and politicized. One idea that gets tossed around is the notion that gun homicides are more likely to occur to the extent that there are more guns around. And relatively few who opine on the matter, journalists and pundits included, seem to discuss it in quantitative and scientific terms. (Of course, this is not too surprising, given how seemingly rare quantitatively- and scientifically-informed discussions generally are in our society.)

First, let’s consider the oft-repeated assertion that local gun prevalence is somehow causally related to gun homicides. This is a charge often made by gun control proponents—roughly, the idea is that the more guns there are in a locale (i.e., at the municipal or state level), the higher the gun homicide rate will be. Some attempt to make a case for this hypothesis by marshalling data on overall gun deaths without separating out gun suicides, which is obviously wrongheaded. However, looking at the data specifically on gun ownership rates and gun homicide rates, there does not even appear to be a correlation between the two variables.

Gun Ownership and Homicide

(The data in this graph are taken from the Centers for Disease Control, both for state gun ownership rate and homicide rate—averaged across 3 years (2001, 2002, 2004); source.)

So, since there’s no correlation between the (state-level) gun ownership rate and the gun homicide rate, there does not appear to be a case for a causal connection between the two variables: realistically, a causal connection would minimally require a correlation.

Now, let’s zoom in a bit more and ask the following question. Is there any association between the state-level gun homicide rate and the state-level non-gun homicide rate? Well, the data can answer this question, too—and it appears that, yes, there is a correlation, as the graph below shows.

Gun and Non-Gun Homicide

(Data in this graph is taken from the Centers for Disease Control; source)

Since the gun homicide rate and the non-gun homicide rate are correlated, this suggests that there is an underlying variable (or set of variables) causing both. And this is where things start to get interesting.
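This inference pattern (two correlated variables pointing to a common cause) can be illustrated with a toy simulation. This is purely a statistical illustration under made-up parameters, not an analysis of the actual crime data:

```python
import random
import statistics

random.seed(42)  # fixed seed so the toy example is reproducible

# A latent common cause U drives both X and Y; X and Y do not
# influence each other directly (think: some underlying factor
# driving both gun and non-gun homicide rates).
n = 10_000
u = [random.gauss(0, 1) for _ in range(n)]
x = [ui + random.gauss(0, 1) for ui in u]
y = [ui + random.gauss(0, 1) for ui in u]

def pearson(a, b):
    mean_a, mean_b = statistics.fmean(a), statistics.fmean(b)
    cov = sum((ai - mean_a) * (bi - mean_b) for ai, bi in zip(a, b))
    var_a = sum((ai - mean_a) ** 2 for ai in a)
    var_b = sum((bi - mean_b) ** 2 for bi in b)
    return cov / (var_a * var_b) ** 0.5

# X and Y correlate substantially (theoretically r = 0.5 here)
# purely through the shared cause U.
print(round(pearson(x, y), 2))
```

Since X and Y do not affect one another by construction, the simulation makes vivid why a correlation between two variables, taken alone, invites the common-cause hypothesis rather than settling the direction of causality.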

Overall, the usual suspects—poverty, income inequality, unemployment—do not appear to be what we should be looking at if we wish to uncover the causes of crime (or, to the extent that any of them are causally implicated, they are relatively marginal forces). And gun prevalence does not even appear to be correlated with homicide. What does cause crime is, in my view, a more complicated and interesting question. Incidentally, I think it’s a question that can only be adequately answered within an explanatory framework whose core is decidedly evolutionary-psychological and genetic. But that’s an extended excursion for another time.

Black Lives Matter Toronto Debunked (with Stats, Empiricism, and Logic)


Black Lives Matter has a Canadian offshoot in Toronto—a.k.a. BLMTO. And recently the group initiated a multi-day protest outside of Toronto Police headquarters, in light of the decision by the Special Investigations Unit (SIU) not to press charges against the officer who shot and killed 45-year-old Andrew Loku, a black man. (The SIU is the civilian oversight agency responsible for investigating circumstances involving police that have resulted in a death, serious injury, or allegations of sexual assault of a civilian in Ontario, Canada.) The protest eventually culminated in a march to Queen’s Park (where Ontario’s provincial government is housed), where, upon the marchers’ arrival, Ontario Premier Kathleen Wynne, along with other members of her government, came outside to meet BLMTO leaders. During the outdoor rendezvous, the Premier made clear that she wanted to schedule a meeting with the group’s leaders.

As can be seen from the official statement by the SIU, the decision not to press charges against the officer was the right one. The circumstances of Loku’s death are often framed in terms of careless, unlawful use of lethal police force. I believe that those who frame the unfortunate event in these terms are succumbing to hindsight bias and omitting or distorting crucial details. Many observers and commenters fail to describe the event in its proper context, which makes it easy to overlook the fact that police were called to assist two individuals who were being threatened by the hammer-wielding Loku, and, upon arriving, found themselves in tight quarters—specifically, in a hallway with their backs to a door leading to a stairwell. The officer who shot Loku was in these tight quarters when Loku suddenly started approaching with the hammer, and, despite the officer’s calls for him to stop, Loku defiantly continued closing in while even taunting the officer. BLMTO and its supporters should honestly ask themselves how they would have reacted in such circumstances, time-pressured as the officers were: would they really have risked getting bashed in the head with a hammer, or would they have shot to kill? It’s easy to play Monday morning quarterback when one has a distorted picture of the event and the luxury of distance. To be clear, Loku’s death was unfortunate. But an unfortunate outcome, in and of itself, is not necessarily tantamount to foul play. And yet many have made the inference from unfortunate outcome to unlawful use of lethal force on the part of the officer in question—an inference that can only be made by ignoring or distorting the actual circumstances of the event, or by imposing a naïve, unrealistic standard of culpability on the officer (and officers in similar circumstances).

As a side note, one can’t help but wonder how many police shootings of civilians could have been prevented had individuals suffering from mental health problems been housed in institutions where they could be cared for—but that’s a whole other complicated issue that I won’t deal with here. After all, as the CBC reports, 40% of those shot and killed by police are people experiencing mental health crises. And to be fair to police forces across North America, there has been a growing movement to incorporate training and institute policies that would reduce the number of unfortunate shootings of those with mental health problems—and indeed Toronto Police are following suit on this pressing issue. Furthermore, and as will become clearer when we discuss the relevant cognitive biases, the media is wont to report on and sensationalize only cases like the Loku shooting. What goes unreported is the vast majority of interactions between police and individuals with mental health issues, which are resolved peacefully and with care.

In view of the foregoing, however, what, precisely, are BLMTO’s demands, and, just as importantly, how reasonable are those very demands?

BLMTO have recently provided a list of specific demands. I will address them seriatim.

Demand number 1: Release of the name(s) of the officer(s) who killed Andrew Loku.

The first demand is flat out negligent. There is a good reason why the SIU does not release the names of officers they investigate. To think that releasing the name of the officer in the Andrew Loku case would not have an adverse effect on their life and safety is to strain credulity. The anonymity provided by the SIU is designed to foreclose the possibility of ‘vigilante justice’ by lunatics who think that the only just outcome of investigations is reputational damage, physical harm, or, worse, death to the officer(s).

Demand number 2: Charges laid against the officers who killed Loku.

This demand is comical and screams of politicization, not justice. Mirroring the larger BLM movement in the United States, this BLMTO demand is tantamount to elevating the group to judge, jury, and executioner. As the high-profile case of Michael Brown showed, BLM and its militant supporters felt they already had the correct verdict in advance of due process. And since the legal system reached the correct decision—a grand jury declined to indict Officer Darren Wilson—the results were predictable (as was the Freddie Gray ‘protest’, also stoked by BLM rhetoric). Anyone with an inkling of how vitally important due process is to a civilized society ought to see through the mob authoritarianism of a segment of the BLM movement, if not also its core. (Protest in the wake of the Ghomeshi trial verdict serves as another recent Canadian example of protesters who are openly hostile to due process and the rule of law. For people like this, the only acceptable verdict is the one they have decided on beforehand—the one that aligns with their self-serving ideology.)

(For current purposes, I will be skipping another, similar demand of BLMTO: Immediate release of the name(s) of the officer who killed Alex Wettlaufer, and charges to be laid accordingly.)

Demand number 3: Public release of any video footage from the apartment complex where Loku died.

This demand is unacceptable for the same reason why demand number 1 is unrealistic and negligent: Video identification of the officer who shot Loku would allow would-be vigilantes to take matters into their own hands and place said officer’s safety and well-being in jeopardy.

Demand number 4: Adoption of the African Canadian Legal Clinic’s demand for a coroner’s inquest into Loku’s death.

This demand sounds prima facie reasonable, except that tying the inquest to the demands of the African Canadian Legal Clinic should, to any rational and realistic person, smack of a conflict of interest—or, at the very least, raise doubts about the impartiality of such an inquest (the Clinic probably being in some manner simpatico with the sentiments, if not the overt demands, of BLMTO). All such an inquest would need to do to create discord and sow serious doubt about the SIU’s findings would be to offer a conclusion at loggerheads with the official one.

Demand number 5: Overhaul of the Special Investigations Unit in consultation with families of victims of police violence, black community and community at large.

This demand sounds similarly reasonable, prima facie, but is troubling nonetheless. It is quite reasonable to wonder whether such an “overhaul” would result in measures that would tend to view officers with greater suspicion—by default, as it were—than would otherwise be warranted by the details of specific cases. We should balk at BLMTO’s demand to influence the SIU the same way we would balk at any other interest group making a similar demand. A much more sensible request would be for a comprehensive review of the SIU—something that should probably be performed periodically anyway—with a view to ensuring that its investigative procedures are just and as free from bias as humanly possible; that is the best we can really ask for. Allowing interest groups to influence reviews of the SIU and other bodies runs the risk of installing artificially imposed ‘quotas’ and biased procedures. For instance, would it really be just to ensure that only a certain number of lethal police shooting cases involving non-whites resulted in the officer not being charged? In other words, should it be legislated that a certain proportion of officers who have shot and killed non-whites be charged, no matter the details of individual cases? This would, of course, be a ludicrous affront to justice. But with BLMTO, you never know.

For BLMTO and their supporters, the very idea that an overrepresentation of blacks among those killed by police does not necessarily entail that SIU investigations are biased is unthinkable, even ‘racist’. In reality, this is an elementary point—though for many it is considered controversial, if not scandalous.

Interestingly, one of the co-founders of BLMTO, Yusra Khogali, wrongly referred to the SIU as “Toronto police’s Special Investigations Unit” (my emphasis). As I already noted at the outset—and as the SIU’s own website makes eminently clear—“The SIU is a civilian law enforcement agency, independent of the police, that conducts criminal investigations into circumstances involving police and civilians that have resulted in serious injury, death or allegations of sexual assault” (emphasis mine). Indeed, the SIU was created in the wake of past protests alleging (correctly) that internal police investigations might be biased in favor of cops. And since we’re talking about Yusra Khogali, it is indicative of the times we live in that a leader of BLMTO is able to say some rather rotten and bizarre things and have it discredit neither her nor the organization. Had, say, a white male said such things, mutatis mutandis, it is fairly safe to say that it would have come with high personal and professional costs, and the movement of which he was a part might never have recovered—it’s hard to see how Toronto city council, the mayor, the premier, and others in government and the media would have continued to take him or his movement seriously, much less agree to meet with them. But, hey, double standards are perfectly fine if you’re a member of the right ‘victim group’, as per the narrative.

Demand number 6: Commitment to eliminating carding, including deleting all previously recorded data, reframed regulations, consistent implementation of policy among various police boards, and concrete disciplinary measures for officers who continue to card.

This, in my view, is the one reasonable demand that BLMTO has made. I will consider this further later in this piece. And it should be pointed out that there are some other legitimate grievances that can be pressed against the Toronto Police. But even here, careful attention to the data counsels against embellishing the scale of the problems. For instance, a recent analysis of Toronto Police data by the Toronto Star in 2010 revealed some racial disparities when it comes to simple drug possession and bail rates, in that “blacks were released at scene (Form 9) 58.3 % of the time; whites 64.5 %. As for bail, blacks [were] held 14.3% of the time; whites 10.2. In other words, whites were 1.1 times more likely to be released at the scene, and blacks 1.5 times more likely to be held for bail”. Importantly, however, one should note that this gap has narrowed since the Toronto Star’s last analysis, performed back in 2002, and the gap, both now and then, is very modest, as the numbers above show.

But let’s now look closely at the data on fatal shootings involving police, so as to put the BLMTO narrative under scrutiny—which the mainstream media has by and large failed to do.

Since 1990, there have been 51 fatal shootings involving police (excluding suicides where police were present). At least 18 of these shootings have involved black men, which amounts to 35% of all fatal shootings involving police since 1990. And since Toronto’s black population is approximately 9%, this means that blacks are overrepresented in fatal shootings involving police as a proportion of their population. Additionally, in 17 of the 51 fatal shootings, The Toronto Star was unable to identify with certainty the racial background of the individual killed. So, it is possible that the number of fatal shootings of blacks involving police is higher than 18.
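The percentages in the paragraph above are easy to verify with quick arithmetic; the counts are those reported in the text, and the variable names are mine:

```python
total_fatal_shootings = 51    # fatal police-involved shootings since 1990
black_victims_min = 18        # confirmed minimum (race unknown in 17 cases)
black_pct_of_population = 9   # approximate black share of Toronto's population

# Share of fatal shootings involving black men: about 35%.
share = black_victims_min / total_fatal_shootings * 100
print(round(share, 1))  # 35.3

# Overrepresentation factor relative to population share: roughly 4x.
print(round(share / black_pct_of_population, 1))  # 3.9
```

Note that the overrepresentation factor is a lower bound on this data, since the 17 racially unidentified cases could only push the share of black victims upward.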

As reported by The Toronto Star, an independent review of SIU data showed that, of fatal shootings involving police in the Toronto Census Metropolitan Area between 2000 and 2006, “eight of 12 shootings during that time involved black people — representing 66 per cent of Toronto police shootings during that time period, though black residents represented only 6.7 per cent of the population”.

Now, it is crucially important to underscore that a raw statistical measure of a given outcome cannot, in and of itself, tell us anything about whether that outcome is unjust, caused by ‘racism’, etc. There are multiple possible causal explanations for any given measured outcome, and no possible explanation should be treated as the default. Hypotheses need to be tested properly and evaluated comparatively (and not all possible hypotheses of a focal phenomenon will be mutually exclusive—e.g., sometimes one hypothesis will explain part of the variance, while others will explain the rest). Indeed, the analyst who conducted the independent SIU review just mentioned noted that, according to the data (between 2000 and 2006), police departments in Ontario use force infrequently, and much less than the average American police department. As the president of the Toronto Police Association, Mike McCormack, correctly noted, racial data on fatal police shootings should ultimately be viewed in the context of the circumstances in which such shootings occur, including the percentage of fatal shootings that occur in response to emergency calls, and particularly whether the individual shot was wielding a weapon and, if so, the manner in which it was wielded.

More fundamentally, the human mind did not evolve to deal particularly well with the kind of number crunching required to make rational and objective sense of such social phenomena. Well-documented cognitive biases in psychology attest to this. Of most relevance for our purposes is the availability heuristic, as well as the closely related phenomenon of availability cascades. (Work on heuristics and biases, including the availability heuristic, contributed to Daniel Kahneman’s Nobel Prize, awarded in 2002, after the death of his longtime collaborator, Amos Tversky.) Stated simply, the availability heuristic is the cognitive bias whereby things that are more memorable and more easily retrieved from memory appear much more frequent and important than they really are. For example, as Wikipedia puts it:

“when asked to rate the probability of a variety of causes of death, people tend to rate ‘newsworthy’ events as more likely because they can more readily recall an example from memory. Moreover, unusual and vivid events like homicides, shark attacks, or lightning are more often reported in mass media than common and un-sensational causes of death like common diseases”.

The related phenomenon of availability cascades occurs when, for instance, mainstream media are led to increasingly report on sensationalized cases. Availability cascades are fueled via positive feedback: an initial news report of some event—or set of events—sparks a catalytic reaction of further reporting on that event and events like it, which in turn shapes how common the wider populace perceives the event type to be. In their lengthy discussion of availability cascades, Cass Sunstein and Timur Kuran explain how such cascades can have a distorting and chilling effect on public discourse. Because mainstream media-fueled availability cascades can potentially change the beliefs of many individuals, they can thereby change the costs associated with expressing contrary beliefs. For instance, someone with a belief at variance with the dominant narrative created by an availability cascade might opt to keep silent, on pain of suffering personal and professional costs for airing their disagreement (or even doubt). In addition, ‘entrepreneurs’ of availability cascades (i.e., activists) can self-servingly harness the power of cascades if they can manage to convince journalists and politicians to speak and write about events connected with the agendas that said availability entrepreneurs are trying to further.

BLMTO’s agitation can be viewed as having successfully initiated an availability cascade, given the sustained media coverage of the Andrew Loku case. Coupled with the media frenzy surrounding the shooting of Sammy Yatim, it is quite possible that such availability cascades have led to an increase in anti-cop sentiment among citizens of Toronto and the wider region (see, for instance, the relevant poll below). And, incidentally, such availability cascades might have further caused those with differing perspectives to feel less inclined to air them, lest they run the risk of being publicly accused of being ‘insensitive’, ‘racist’, and so forth. So these cascades, in changing the broad cost-benefit dynamics of broadcasting one’s opinion, could very well have had a negative impact on public discourse—namely, by tamping down critical and informed discussion of the relevant issues, allowing one dominant perspective to go largely unchallenged and unexamined. Just as importantly, if not more so, in changing public discourse, availability cascades can lead politicians to further (or at least pay lip service to) some agenda—even, crucially, when they privately doubt or disagree with that agenda.

Placed in a larger context, and in view of the availability bias, it is striking how distorted perceptions can be. BLMTO, via the mainstream media, has managed to convince many—including themselves—that an epidemic of illegal police shootings of black men is self-evident, when, using the aforementioned conservative estimate, only 18 black men in Toronto have been killed by police in the last quarter century. Moreover, keep in mind that, without looking at the details of each case, it is unknown how many of those killings were justified or unjustified. Granted—and this should go without saying—any loss of life is tragic; but looking directly at the numbers gives perspective, keeps our emotions in check, and allows for a detached, objective appraisal of magnitude. Furthermore, why do BLM and its Canadian offshoot, BLMTO, avoid highlighting and discussing black-on-black homicide, which kills many more blacks? To put this relatively minuscule number of fatalities in perspective, consider that among men in Toronto, 198 perished from falls in 2010 alone. In other words, almost four times as many men in Toronto died from falls in a single year (2010) as died in gun-related incidents involving police in Toronto over the last quarter century.

It should also be noted that although a number of individuals on the political left in Toronto and the province of Ontario have been calling for the collection of racial data in the realm of policing and crime, calls from the left were, ironically, the impetus for eliminating such data collection in the first place. (Perhaps many on the left these days are unaware of how such data shakes out by race. Anyone familiar with racial crime data knows that it is neither politically correct nor particularly flattering for blacks.)

Because of the power of the mainstream media to distort our thinking regarding the prevalence and interpretation of various phenomena, it has the capacity to give rise to attitudes and policies that ultimately cause more harm than good. As Cass Sunstein and Timur Kuran point out, and in the context of human cognitive biases:

“In cases where the costs are not as clear, the content of media coverage may have major consequences for people’s understanding, by determining the relative availability of both the relevant data and their interpretation. Insofar as the availability heuristic shapes people’s interpretations and desired policy responses, the media may lead people to exaggerate the dangers of the situation at hand, convince them that its elimination should receive priority, and make them discount the inconveniences that would accompany an elimination attempt.”

(On the other hand, it’s also worth mentioning that Sunstein and Kuran also state that “the opposite effects are possible too; through neglect the media may breed ignorance about a genuine danger, thus dampening the demand for action”.)

Has the mainstream media in Toronto shaped the public’s perception of both police and Black Lives Matter Toronto, and, if so, to what extent? This is tough to say with any certainty, as per usual, but a recent poll is at least consistent with the hypothesis that the mainstream media in the city has influenced people’s views on the relevant issues. 55% of residents polled support the BLMTO movement, and 50% believe that there is ‘systemic racism’ in the city (and it appears that the term was not defined when residents were polled, so they were free to associate it with anything at all). Given what we know about cognitive biases, it is certainly not farfetched to think that many Torontonians have very distorted views of the circumstances surrounding Andrew Loku’s death and the details of the SIU investigation (as well as of other high-profile cases involving police shootings in the U.S.). Rather than carefully analyzing the official report issued by the SIU, much of the media coverage has been truncated or misleading, and many commentators have filtered the case through the BLMTO narrative. Add to this the other cognitive heuristics that generally make humans cognitive misers—mental shortcuts and knee-jerk, emotionally laden reactions—along with the recent outrage over the shooting death of Sammy Yatim at the hands of another officer, and you have a toxic brew that gives rise to anti-cop sentiment across the city and larger metro area. In the case of the Sammy Yatim shooting, it is interesting to note how a single, tragic case involving but one unhinged officer can, for many people, taint the entire police force.
What is also interesting, in this regard, is how many will take isolated cases such as this and use them to generalize about all cops. This is an instance of essentializing, which, although a natural human inclination stitched into our minds by natural selection, is seen as immoral when the target is another human demographic group (especially non-whites), yet is hypocritically treated as far more acceptable when the target is a group such as the police. (Parenthetically, note how the media’s amplification of the Sammy Yatim shooting emboldened the more militant activist cells in Toronto.) And as Toronto Police spokesperson Mark Pugash rightly notes, the media does not report on the countless incidents in which police have patiently and peacefully dealt with potentially violent individuals, prevented crime, protected the public, or otherwise had cordial interactions with citizens. That is to say, we, the public, do not see all of the good that the police do.

It may seem counterintuitive, but the desire of many to eliminate police carding may be an instance of good intentions leading to bad outcomes. Again, it is not farfetched to think that the media, in amplifying and in some cases even sensationalizing the BLMTO narrative, has distorted the thinking of a large segment of the city’s populace on this issue, or, at the very least, has narrowed the range of public discourse by making certain opinions anathema and beyond the pale. For although the city of Toronto, the Toronto Police, and its citizens may decide that carding infringes on the rights of its denizens, it is possible that eliminating carding entirely, or reducing it substantially, would come at the cost of elevated crime levels. I suggest this not to argue that such a proposition is necessarily true, or that carding is ethical or lawful, but rather to highlight that this is an empirical question. Suppose, for argument’s sake, that sufficiently rigorous investigation of this matter produced strong evidence that the elimination of carding, or even just its significant reduction, did in fact lead to an increase in crime. Suppose, further, that this increase in crime was felt especially in areas of Toronto with large black populations. The city, and particularly its black citizens, might then be put in the highly unfortunate position of having to decide which of two bad arrangements they would prefer:

1) a high amount of carding, which ruefully falls disproportionately on demographic groups such as blacks (and those categorized by police as ‘brown’), but which reduces crime levels across the city, and especially in areas with large black and non-white populations; or

2) no carding (or a substantial decrease in carding) and a higher crime rate across the city, and particularly in neighborhoods with large black and non-white populations.

Again, I float this scenario as an empirical possibility. It is also possible that carding, done to some unspecified extent, does reduce crime, but with the effect generated not by carding per se but rather by, say, instilling in would-be criminals a deep, intuitive gut feeling of being surveilled by police (a feeling that could also operate subconsciously): a kind of ‘omnipresent watchful eye’, a la Big Brother or the Panopticon. This alternative hypothesis is far from outlandish, because we now know, thanks to work in the psychology of religion, that priming individuals with cues of being ‘watched’ by others or by ‘God’, or even just the presence of a smiley-face placard, can affect their thoughts, emotions, and behavior in a variety of contexts. One finding of this body of research is that such priming increases pro-social behavior (and accordingly decreases cheating, free-riding, and so on).

So, if it should turn out that carding (done to some unspecified extent) reduces crime, and if, further, the effect could be achieved merely by instilling a sense of being surveilled, police might place manned cruisers in areas known for elevated crime rates for extended periods of time (perhaps more or less permanently, with cruisers and officers rotating on a shift basis). Alternatively, or in addition to cruisers, police might instill the sense of being watched by installing permanent video cameras in crucial areas throughout the city. Of course, such strategies have ethical dimensions of their own, even if they could help combat crime. But in the messy, complex real world, tough decisions and trade-offs are often necessary. And anyone who has qualms about this idea should ask whether police ought to pay equal attention to men and women, even though we know that, when it comes to violent crime, men are much more likely than women to be perpetrators.

In fact, there is evidence that proactive policing tactics, such as carding and ‘stop-and-frisk’, are effective in reducing crime. For instance, Weisburd, Wooditch, Weisburd, and Yang’s analysis of stop, question, and frisk practices in New York City found that they exert a modest deterrent effect on crime. And as Heather Mac Donald notes, such evidence is in line with longer-term trends:

“[The] reactive style of policing dominated law enforcement until the early 1990s, when the New York Police Department embraced data-driven, proactive policing. The NYPD’s revolutionary new philosophy held that the police could prevent felony crime by reducing low-level lawlessness and intervening in suspicious conduct; that philosophy spread nationwide and ushered in a record-breaking 20-year national crime drop, now at risk in urban areas.”

The BLMTO protests and media sensationalism may very well lead to more crime. Indeed, this already appears to be the case in the United States, where the backlash against police has led to under-policing, which in turn has led to increased crime. The phenomenon has even been dubbed the ‘Ferguson Effect’, and the latest analysis of the data suggests it is likely real. As the ever-sharp and brave Heather Mac Donald has pointed out, the Ferguson Effect is strongest in precisely those cities the hypothesis would predict to be most at risk, namely those “with high black populations, low white populations, and high preexisting rates of violent crime”.

In Chicago, for instance, gun violence has shot up and the arrest rate has dropped since the video release of the Laquan McDonald shooting.
Officers in Chicago and elsewhere, it now appears, are scaling back their police activity for fear of being the next cop to get national media attention. It has even been alleged that the NYPD intentionally pulled back on its policing activity after being vilified by (leftist) Mayor Bill de Blasio. In Toronto, categories of crime such as shootings and homicides are up compared with this time last year. So it is quite possible that similar dynamics have begun to affect the Toronto Police, if not via top-down influences within the force, then at a more psychological level, with officers, like their American counterparts, not wanting to be the next cop to get (quite unwanted) mainstream media attention.

It is probably too much to hope for widespread, clear-eyed, fact-based analysis of the ‘concerns’ of Black Lives Matter and its Toronto offshoot, although brave journalists like Heather Mac Donald have valiantly provided just that. I close with a couple of videos that underscore the bankruptcy of the Black Lives Matter movement: trenchant criticisms from Ben Shapiro and Larry Elder that, by my lights, are devastating.

Questions about Militaries, Foreign Policy, and War


In asking critical questions about militaries and their capacities, we might start by framing the discussion in terms of the central insights of neorealism (which of course is the dominant school of thought in academic international relations, sometimes called structural realism—not to be confused with structural realism in the philosophy of science). I’d imagine a standard neorealist reply to anti-militarist stances might be something like the following:

Surely, the neorealist might say, the most general structure of international relations, characterized at the highest and most abstract level by power distributions and ‘anarchy’, calls by necessity for some kind of military capacity. (To be clear, the ‘anarchy’ of the international system, which gives rise to the security dilemma, is the absence of a ‘Leviathan’ that can police the actions of states and punish aggressors.) The neorealist might further claim that, even if humans are not in some fundamental sense fated to be especially war-prone, the strategic logic embedded in the international system sees to it that states that pay insufficient attention to that logic lose out strategically to more ambitious and militaristic states that embrace a ‘realpolitik’ attitude (and that are, for instance, assiduous and unapologetic in informing their strategic deliberations with war-games simulations, game-theoretic reasoning, and the like).

A neorealist might also point out that her perspective on international relations does not claim that interstate war is inevitable or even necessarily likely, only that the structural realities of the international system force states to play some kind of militaristic hand, particularly for the purposes of deterrence (e.g., in order to contain rivals; forming alliances to counterbalance a regional hegemon, etc.). The neorealist might (?) grant that the parameters of a given state’s required military capacity—and the extent to which it should correspondingly foster or actively promote pro-militaristic attitudes and ideas domestically (to meet personnel demands and galvanize political support, etc.)—are tricky to ascertain. Perhaps, on average, states—or, more pointedly, hegemons, for example—have a tendency to overreach and hence aspire to or maintain military capacities that are more than strategically required. Here, the neorealist might claim that assessing whether extant or desired military capacity is commensurate with bona fide strategic demands (i.e., being neither insufficient nor excessive) is entirely a (rather complicated) empirical matter.

For example, it might turn out that the U.S. is indeed massively overreaching in its current military capacity, in that it far outstrips what is required to maintain its relative strategic advantages over competitors; or, alternatively, there might be a real or merely perceived strategic reason for maintaining excessive military capacity, or for increasing it further. If, for example, the strategic rationale for excessive military capacity is merely perceived, and thus not an accurate assessment of objective strategic concerns, the anti-militarist could use this to undercut the stated need for extant military capacities (or to undercut calls for their expansion). (I recall reading an op-ed in an international relations magazine recently that essentially argued that the desire to oust Syria’s Assad regime was unnecessary: that the regime didn’t constitute a legitimate threat to U.S. interests or security, and that thinking otherwise was akin to a kind of geopolitical paranoia.)

On this latter issue (regarding the question of military capacity and its proportionality, or lack thereof, with objectively ascertained strategic interests), one could explore American foreign policy in some detail. Pseudoerasmus provides one such exploration in a historical and economic context. Notice that the verdict of that author is that U.S. foreign policy is far from optimal or rational. Perhaps this putative mismatch is shaped, among other things, by America’s unique cultural attitudes, plus a raft of foibles such as errors in judgment, mistakes, and short-sightedness. If American foreign policy is as misbegotten as Pseudoerasmus’ analysis suggests, then pointing these things out in some detail would serve as a solid criticism of it. Citing actual data showing the world to be safer is also a good strategy for undercutting the positions of the more hawkish. (Somewhat apropos of these issues, although I haven’t read it, Posen’s book appears to be a scholarly attempt at examining the connections between foreign policy and military doctrine.)

One could also potentially compare and contrast American foreign policy with other countries—e.g., Russia, China, and Great Britain. Here, various questions could be asked, such as:

Are there any foreign policy stances that are common to powerful countries, owing to the putative neorealist logic inherent in the international system? This would be analogous to convergent evolution, whereby two traits in different lineages evolve because of similar selection pressures in their respective niches (rather than because of common descent). One of the classic examples of this, of course, is the evolution of flight in bats and birds. Applied to neorealism in international relations, this would mean that states simply respond in similar ways to the same pressures that emerge from the basic architectural structure of the international system.

Insofar as such similarities in foreign policy exist, might they alternatively be explained by mimicry among states (perhaps, for instance, thinking, rightly or wrongly, that it’s an effective strategy, China copies much of the foreign policy stance of America, who copied many of the hegemonic strategies of the British Empire, and so on)?

Or are the fundamentals of a country’s foreign policy conditioned to a substantial extent by idiosyncrasies? These would include foreign policies influenced by factors such as historical and cultural contingencies, biographical and psychological facts about leaders, etc.

Are there blends of these above possibilities, with the exact mix differing from state to state?

On the matter of both violence and war in relation to human nature, Steve Pinker’s book, The Better Angels of Our Nature, is quite germane. In my view, Pinker’s book demolishes the idea that humans are violent and warlike by necessity. He develops at length an account of why violence and war have been declining throughout human history, attempting to identify the various cultural, social, and psychological forces that exert these effects. It should be obvious why understanding these forces is of central importance to the ongoing project of reducing violence and war: the better we understand them, the more effective our efforts will be. (Pinker has also updated some of the book’s graphs with data from the last few years.)

(By the way, a paper Pinker wrote in defense of his book is a revealing look at the kind of ideologues one frequently finds in academe; the philosopher in me wants to say that they are ‘lacking in epistemic virtue’.)

I’m currently inclined to think that militaries are analogous to tools (or, if you like, weapons). How tools (or weapons) are used, and even whether they are used at all, is determined by factors largely external to the tools themselves (viz., whatever variables impinge on their utilization). Accordingly, the use, or non-use, of militaries mainly if not entirely flows from, that is, is caused by, phenomena occurring within (at least) two primary domains. The first domain includes the nature of the international system (in all its complexity, including the extent to which it mirrors some kind of neorealist construal); the foreign policies of states, especially hegemons; state leaders, especially those with substantial influence on foreign policy (such as presidents and military generals); and the mainstream media. The second domain includes public attitudes and opinions surrounding foreign policy and war; whom citizens elect to positions of power (in the case of democracies); and grassroots organizations. No doubt there are important reciprocal influences and constraints between these two domains that can be mapped out. And the situation likely gets even more complex once we consider influences that cross national boundaries (as when, for instance, American entertainment agitprop and mainstream media influence political or cultural attitudes in other countries).

Domestic politics and the larger domestic zeitgeist can also potentially ‘get in the way’ of a state’s ability to act strategically in the international realm. For example, a state’s citizens might not have much of an appetite for aggressive foreign policies (such as the aggressive, blood-soaked approach the U.S. has basically been executing since at least the end of WWII). This probably explains the need to hide the real-world manifestations of foreign policy from citizens, or to otherwise distort their presentation, all of which is done rather effectively with mainstream media complicity in places like America. In the case of the U.S. deep state, the smoke and mirrors required to carry out foreign policy in secret seems, by my lights, to be an intentional strategy, one that effectively cordons off much of its foreign policy directives. Specifically, this strategy (or so I contend) keeps the primary foreign policy directives of states such as the U.S. from being overly influenced by the domestic zeitgeist. All else being equal, we might imagine that states that allow their primary foreign policy directives to be too easily swayed by the ‘whims’ of public opinion are at a disadvantage vis-à-vis states that shield those directives from such vagaries.

Perhaps the forces that Pinker identifies are (the?) key targets to focus on if we would like to further reduce the incidence of war. Incidentally, Pinker’s book also seems to capture many of the points made by neorealism’s critics within the academic field of international relations, but without throwing the proverbial baby out with the bathwater; his approach aims to integrate such ideas into a coherent explanatory framework.

For instance, aside from changes in norms and other cultural attitudes within societies, Pinker discusses sophisticated statistical analyses of factors that conduce to peace between nations. Such between-nation factors include being stable, mature democracies with free markets and openness to international trade.

Relatedly, at a discussion panel held at Harvard’s Kennedy School of Government, the international relations scholar Stephen Walt (co-author of a certain ‘controversial’ book with John Mearsheimer) raised an interesting challenge: Do the forces that Pinker identifies as having decreased the various forms of interpersonal violence across history also contribute to the decline of interstate wars in particular? And if so, how, and to what extent, do these forces percolate up to the level of international relations, making states less willing to engage in wars? Or have other forces, operating mainly or entirely at the international level across history (including others that Pinker discusses), been at work instead?

Again, maybe militaries per se (and all of their various appurtenances) are symptoms of deeper-rooted causes. If the aim is reducing violence, destruction, death, and war, then a focus on militaries per se is quite possibly (at least to an extent) to mistake correlations for causes, to phrase things in statistical parlance. As for the role that militaries per se do play in their own use, perhaps one target should be to home in on (pardon the military-speak) the very top of the military hierarchy, where top generals and others with influence over foreign policy can be found.

Apropos to this point, leaders—and especially those who are involved in making war decisions—have an incentive to become aware of psychological influences on their decision making that are maladaptive. To this end, the findings of researchers such as Aaron Sell, John Tooby, and Leda Cosmides provide important insight:

“[O]ne might hope that the decision to go to war is arrived at rationally, in response to objective conditions. Moreover, it would be delusional in the modern world to think that your personal strength determines—or even influences—how effective your nation’s military will be in a war. Yet our subjects’ strength predicted their attitudes toward military action. This is exactly what one would expect if assessments about the use of coalitional force by the state—an evolutionary anomaly—are generated, at least in part, by mechanisms that evolved for assessing the success of coalitional force by small groups of which one is a member. If governmental decision-makers are like other humans, then their musculature may be playing a role, unconnected from rational evaluation, in their decisions to go to war.”

Of course, there are doubtless other influences on military-related decision making that are maladaptive. So it would be important for such decision makers to be cognizant of them at some level. For example, the psychological literature on decision making in groups—particularly how the compositional features of such groups can shape the internal dynamics of deliberation—also raises concerns in this area. (It is probably wise to take steps to avoid penalizing dissenters in the context of important decisions. Having all of your military generals be very hawkish might also not be very wise. These are just some suggestions to bear in mind.)

A final observation to close with: In the U.S., there’s a certain individual currently running for president on a populist platform that has very much irked the neocons. So far as I can tell, said individual is pro-military, but not an imperial hawk—hence the neocon exasperation. His potential election might be a means for the American public to actually have an opportunity to change U.S. foreign policy in a more dovish direction.