Lotts' "numbers" aren't the problem the NAS and others have with Lott. It's his spurious analysis of and inferences drawn from them they critiqued.
You really should be ashamed of yourself for citing Lott. He wrote More Guns, Less Crime, a book that attempts, poorly, to posit the merit of gun possession/use over gun-control measures as a way to reduce gun-related deaths and crime in general.
Lott did publish that book; however, his remarks and conclusions in it are based upon a study he conducted with David Mustard two years prior to publishing the book.
In the study, Lott and Mustard analyzed crime statistics in 3,054 counties from 1977 to 1992. The study made the claim that, if all states had adopted “shall issue” laws by 1992, 1,500 murders, 4,000 rapes, 11,000 robberies, and 60,000 aggravated assaults would be avoided annually.
Lott’s research relied heavily on advanced econometrics, a highly specialized sub-field of economics. Using an econometric model with a sprawling array of controls, including 36 independent demographic variables, John Lott analyzed a total of 54,000 observations for 3,054 counties over an 18-year period. Such a massive dataset would ostensibly permit Lott, for the first time, to identify the specific effect of concealed carry laws on crime with great precision.
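To make concrete what that kind of model looks like, here is a minimal sketch of a county-year panel regression of the sort described above. It is not Lott's actual code: the data file and column names (county_panel_1977_1992.csv, murder_rate, shall_issue, arrest_rate, unemployment, pct_black_f_40_49) are hypothetical placeholders, and the real study includes dozens more controls.

[code]
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical county-year panel: one row per county per year, 1977-1992.
df = pd.read_csv("county_panel_1977_1992.csv")
df["log_murder_rate"] = np.log(df["murder_rate"].clip(lower=0.01))

# The real study uses dozens of demographic controls; these names are stand-ins.
model = smf.ols(
    "log_murder_rate ~ shall_issue + arrest_rate + unemployment"
    " + pct_black_f_40_49 + C(county_id) + C(year)",
    data=df,
).fit()

# The single coefficient on which the entire "more guns, less crime" claim rests.
print(model.params["shall_issue"])
[/code]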
The book was extremely well received by the pro-gun community and has since sold 100,000 copies. It inaugurated a new era in the pro-gun movement: for the first time, organizations like the National Rifle Association could articulate their advocacy in the language of public health rather than in constitutional terms.
Lambert and others analyzed Lott's methodology and conclusions and identified material failings in his work, including (but not limited to) the following:
Lambert:
In an audacious display of cherry-picking, Lott argues that there were “more guns” between 1977 and 1992 by choosing to examine two seemingly arbitrary surveys on gun ownership, and then sloppily applying a formula he devised to correct for survey limitations.
Since 1959, however, there have been at least 86 surveys examining gun ownership, and none of them show any clear trend establishing a rise in gun ownership. [1] Differences between surveys appear to be dependent almost entirely on sampling errors, question wordings, and people’s willingness to answer questions honestly.
Lott replied to this accusation [2] by arguing that, even if there weren’t more households owning guns, there were still more people carrying guns in public after the passage of shall-issue laws. However, we know this assertion is factually untenable, based on surveys showing that 5-11% of US adults already carried guns for self-protection before the implementation of concealed carry laws. It is extremely unlikely, therefore, that the 1% of the population identified by Lott who obtained concealed carry permits after the passage of “shall-issue” laws could be responsible for all of the crime decrease. Lambert also notes that many of the people who obtained concealed carry permits after the passage of shall-issue laws were already illegally carrying firearms in the first place. This means, of course, that “shall-issue” laws would produce almost no material change in the reality of gun carrying.
Lott replied with an ever-weakening series of explanations, suggesting that the 1% of people who obtained permits likely had a higher risk of being involved in crime, and thus disproportionately accounted for the crime decrease. Except, yet again, this statement does not comport with reality. One study by Hood and Neeley analyzed permit data in Dallas and showed the opposite of Lott’s predictions: zip codes with the highest violent crime before Texas passed its concealed carry law had the smallest number of new permits issued per capita.
Empirical data from Dade County police records, which catalogued arrest and non-arrest incidents involving permit holders over a five-year period, also disproves Lott’s point. This data showed unequivocally that defensive gun use by permit holders is extremely rare. In Dade County, for example, there were only 12 incidents of a concealed carry permit holder encountering a criminal, compared with 100,000 violent crimes occurring in that period. That means, at most, getting a permit increases the chance of an armed civilian meeting a criminal by 0.012 percentage points (12 ÷ 100,000 = 0.012%). This is essentially a round-off error. What’s particularly revealing about this episode is that Lott had to have known about the Dade County police records, because he cited the exact same study in his book when the records supported a separate position of his. In other words, Lott simply cherry-picked the evidence that supported his conclusion and disregarded the rest.
Even academics on Lott’s side of the argument strongly doubt that concealed carry laws could have such profound effects on crime.
Gary Kleck, a criminologist at Florida State University, for example, writes:
Lott and Mustard argued that their results indicated that the laws caused substantial reductions in violence rates by deterring prospective criminals afraid of encountering an armed victim. This conclusion could be challenged, in light of how modest the intervention was. The 1.3% of the population in places like Florida who obtained permits would represent at best only a slight increase in the share of potential crime victims who carry guns in public places. And if those who got permits were merely legitimating what they were already doing before the new laws, it would mean there was no increase at all in carrying or in actual risks to criminals. One can always speculate that criminals’ perceptions of risk outran reality, but that is all this is -- a speculation. More likely, the declines in crime coinciding with relaxation of carry laws were largely attributable to other factors not controlled in the Lott and Mustard analysis.
David Hemenway, from the Harvard School of Public Health, writes a similarly devastating review of “More Guns, Less Crime” in his book Private Guns, Public Health. He notes that there are five main ways to determine the appropriateness of a statistical model:
[*]Does it pass the statistical tests designed to determine its accuracy?
[*]Are the results robust (or do small changes in the modeling lead to very different results)?
[*]Do the disaggregate results make sense?
[*]Do results for the control variables make sense? and
[*]Does the model make accurate predictions about the future?
John Lott’s model appears to fail every one of these tests.
Methodological Flaws and Analytical Failings
As Albert Alschuler explains in “Two Guns, Four Guns, Six Guns, More Guns: Does Arming the Public Reduce Crime,” Lott’s work is filled with bizarre results that are inconsistent with established facts in criminology.
According to Lott’s data, for example, rural areas are more dangerous than cities. FBI data clearly shows this is not the case. Lott’s model finds that both increasing unemployment and decreasing the number of middle-aged and elderly black women would produce substantial decreases in the homicide rate, conclusions so bizarre that they cast doubt on the entire study. Indeed, as Hemenway explains, while middle-aged black women are rarely either victims or perpetrators of homicide, “according to the results, a decrease of 1 percentage point in the percentage of the population that is black, female, and aged forty to forty-nine is associated with a 59% decrease in homicide (and a 74% increase in rape).”
Lott also claims that there is only a weak deterrent effect on robberies, the most common street crime. As gun violence researcher Dennis Hennigan writes, “the absence of an effect on robbery does much to destroy the theory that more law-abiding citizens carrying concealed guns in public deter crime.”
Strangely, Lott’s data also shows that, while concealed-carry laws decrease the incidence of murder and rape, they increase the rate of property crimes. Lott explained this result by claiming that criminals were encouraged to switch from predatory crimes to property crimes in order to avoid contact with potentially armed civilians. But as one researcher skeptically commented, “Does anyone really believe that auto theft is a substitute for rape and murder?”
Another problem with Lott’s methodology is that it does not account for the fact that crime moves in waves. To even begin to account for the cyclical nature of crime, one would need to include variables on gangs, drug consumption, community policing, illegal gun carrying, and so on, which are needed to track the peaks and troughs of crime waves. Instead, when Lott tries to correct for the absence of these variables through time trends, he uses a linear time trend, which is inappropriate because it incorrectly predicts that an increase in crime at one point in time will continue forever.
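A toy illustration of the problem, using made-up numbers rather than Lott’s data: fit a straight line to the rising phase of a cyclical crime series and the extrapolation keeps climbing long after the real wave has turned back down.

[code]
import numpy as np

# Synthetic 16-year "crime wave": rises, peaks, and falls again.
years = np.arange(1977, 1993)
crime = 50 + 10 * np.sin(2 * np.pi * (years - 1977) / 16)

# Fit a linear trend to the rising phase only, standing in for a model that
# has nothing but a straight-line time trend to work with.
slope, intercept = np.polyfit(years[:5], crime[:5], 1)

# The straight line keeps climbing indefinitely; the actual wave peaks and falls.
print("linear-trend forecast for 2000:", round(slope * 2000 + intercept, 1))
print("peak of the actual wave:", round(crime.max(), 1))
[/code]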
Despite these glaring methodological concerns, bizarre results, and fundamental empirical problems, Lott’s hypothesis apparently remains popular (Shrimpbox invoked it, after all), even as more and more studies shoot holes in the “more guns, less crime” hypothesis.
Ted Goertzel, a retired professor of Sociology at Rutgers University, published a paper in The Skeptical Inquirer in 2002 cataloging the most egregious abuses of econometrics in criminology. Unsurprisingly, John R. Lott’s most significant work, More Guns, Less Crime, was at the top of the list.
Goertzel shows that Lott’s studies consistently rely on extremely complicated econometric models, often requiring computational data-crunching power exceeding that of an ordinary desktop computer. That in and of itself isn't problematic. Lott, however, then assumes the rather convenient position of insisting that his critics use the same data and the same methods he used in order to rebuke his claims, even after both his data and his methods have been repudiated. That sophistic rebuttal is a failure of reason; it leaves the criticisms of his findings unanswered (as opposed to merely unresponded to). [3]
John Lott’s strategy, then, is the academic equivalent of “security through obscurity.” [4] In response to criticism, Lott simply papers over his mistakes with ever more complex models and tendentious data analysis. This technique allows Lott to preordain a certain conclusion as truth (e.g. “More Guns, Less Crime”) and simply pick the model that produces that result.
Two respected criminal justice researchers, Frank Zimring and Gordon Hawkins, wrote an article in 1997 in response to this strategy, explaining: “just as Messrs. Lott and Mustard can, with one model of the determinants of homicide, produce statistical residuals suggesting that ‘shall issue’ laws reduce homicide, we expect that a determined econometrician can produce a treatment of the same historical periods with different models and opposite effects. Econometric modeling is a double-edged sword in its capacity to facilitate statistical findings to warm the hearts of true believers of any stripe.”
Within a year, two econometricians, Dan Black and Daniel Nagin, validated this concern. By altering Lott’s statistical models with a couple of superficial modeling changes, or by re-running Lott’s own methods on a different grouping of the data, they were able to produce entirely different results.
Black and Nagin noticed that there were large variations in state-specific estimates for the effect of “shall-issue” laws on crime. For example, Lott’s findings indicated that right-to-carry laws caused “murders to decline in Florida, but increase in West Virginia. Assaults fall in Maine but increase in Pennsylvania.” In addition, “the magnitudes of the estimates are often implausibly large. The parameter estimates that right-to-carry (RTC) laws increased murders by 105 percent in West Virginia but reduced aggravated assaults by 67 percent in Maine. While one could ascribe the effects to the RTC laws themselves, we doubt that any model of criminal behavior could account for the variation we observe in the signs and magnitudes of these parameters.”
From this, Black and Nagin concluded that a single state, for which the data was poorly fitted, must be driving the bizarre variations and magnitudes in the state-specific estimates. They determined that Florida was the culprit, owing to its volatile crime rates during the period under analysis and its regularly changing gun laws. Indeed, Black and Nagin discovered that when Florida was removed from the results there was “no detectable impact of the right-to-carry laws on the rate of murder and rape.” They concluded that “inference based on the Lott and Mustard model is inappropriate, and their results cannot be used responsibly to formulate public policy.”
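For readers who want to see what this kind of robustness check looks like in practice, here is a rough sketch in the spirit of Black and Nagin's exercise. It is not their actual code, and it reuses the hypothetical data file and placeholder column names from the earlier sketch.

[code]
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical panel and specification from the earlier sketch.
df = pd.read_csv("county_panel_1977_1992.csv")
df["log_murder_rate"] = np.log(df["murder_rate"].clip(lower=0.01))
formula = ("log_murder_rate ~ shall_issue + arrest_rate + unemployment"
           " + C(county_id) + C(year)")

# Re-estimate with each state dropped in turn; a robust result should not
# hinge on the presence of any single state.
for state in sorted(df["state"].unique()):
    subset = df[df["state"] != state]
    beta = smf.ols(formula, data=subset).fit().params["shall_issue"]
    print(f"without {state}: shall_issue coefficient = {beta:+.3f}")
[/code]

If dropping one state (Florida, in this case) flips or erases the estimated effect, the headline result says more about that state's idiosyncrasies than about concealed carry laws.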
John Lott’s response to this, in his most recent edition of More Guns, Less Crime, has been to argue that Black and Nagin’s technique is deceptive, a form of “data mining” in which the researchers intentionally selected parameters that would break Lott’s model. With no apparent irony, Lott moralized that “traditional statistical tests of significance are based on the assumption that the researcher is not deliberately choosing which results to present.” It should be abundantly clear by now that Lott does not deserve the benefit of this assumption. To be clear: excluding Florida is a legitimate and obvious test of a model’s robustness. The fact that Lott’s results completely depend on the inclusion of a single state means that the model reflects nothing about the actual effect of concealed carry laws on crime.
After the release of the Black and Nagin paper, Goertzel had a conversation with Lott concerning a fundamental problem in his study, namely: “America’s counties vary tremendously in size and social characteristics. A few large ones, containing major cities, account for a very large percentage of the murders in the United States,” and, as it turns out, “none of these very large counties have ‘shall-issue’ gun control laws.”
This means that Lott’s dataset is powerless to answer the very questions for which it was designed. Statistical tests require that there be substantial variation in the causal variable of interest; in this case, there would need to be “shall-issue” laws in places where the most murders occurred. That type of variation is simply absent.
Lott’s response to Goertzel was to shrug him off, insisting that he had enough controls to account for the problem. Fortunately, Zimring and Hawkins identified the same problem, noting that “shall-issue” laws materialized predominantly in the South, the West, and rural regions, areas in which the National Rifle Association was dominant. These states already had lax restrictions on guns; thus their adoption of “shall-issue” laws cannot be shown to have radically changed the social landscape there. This pre-existing legislative history means that comparing across legislative categories merely confuses regional and demographic differences with the social impact of a legislative intervention. Zimring and Hawkins conclude their damning criticism by explaining that “[Lott’s] models are the ultimate in statistical home cooking in that they are created for this data set by these authors and only tested on the data that will be used in the evaluation of the right-to-carry impacts.”
What criminologists believe actually happened is that the crack epidemic spiked the homicide rates of major eastern cities in the 1980s and early 1990s. Lott’s argument, then, necessarily implies that “shall-issue” laws somehow spared rural and western states from the crack epidemic, while eastern states were not as fortunate. As Goertzel writes, “this would never have been taken seriously if it had not been obscured by a maze of equations.”
University of Arkansas Law Professor Andrew J. McClurg, in a critical review of Lott entitled “‘Lotts’ More Guns and Other Fallacies: Infecting the Gun Control Debate,” concludes that Lott’s entire project is one large example of fallacious post-hoc reasoning. Although Lott concedes that there is danger in inferring causality from specious correlations, he defends himself by insisting that “[his] study uses the most comprehensive set of control variables yet used in a study of crime,” and that the correct method of argumentation is to “state what variables were not included in the analysis.” As McClurg notes, however, Lott manages to control for a dizzying array of irrelevant or redundant demographic variables while ignoring a nearly endless list of important factors that could influence crime.
Perhaps the most glaring weakness in Lott’s paper is its lack of predictive power. It does not matter how complex a model is; if it lacks predictive power, it is useless. Any model truly capable of expressing the impact of concealed carry laws on crime, all things equal, will also have predictive potential beyond the scope of a single study. Fortunately, Lott’s initial dataset ended in 1992, permitting researchers to test Lott’s own model with new data.
Researchers Ian Ayres, from Yale Law School, and John Donohue, from Stanford Law School, did just that, examining 14 additional jurisdictions that adopted concealed carry laws between 1992 and 1996. Using Lott’s own model, they found that these jurisdictions were associated with more crime in all crime categories. In other words, “More Guns, More Crime.” Ayres and Donohue conclude with the rather damning passage: “Those who were swayed by the statistical evidence previously offered by Lott and Mustard to believe the more guns, less crime hypothesis should now be more strongly inclined to accept the even stronger statistical evidence suggesting the crime inducing effect of shall issue laws.”
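In outline, that out-of-sample exercise looks something like the sketch below. The data files and column names are hypothetical placeholders, and Ayres and Donohue's actual procedure is considerably more involved; the point is simply that a sound model's key coefficient should survive contact with data it was not fitted on.

[code]
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Same hypothetical specification as the earlier sketches.
formula = ("log_murder_rate ~ shall_issue + arrest_rate + unemployment"
           " + C(county_id) + C(year)")

# Original 1977-1992 panel versus a placeholder panel covering the 14
# jurisdictions that adopted shall-issue laws between 1992 and 1996.
early = pd.read_csv("county_panel_1977_1992.csv")
late = pd.read_csv("county_panel_new_adopters_1992_1996.csv")
for frame in (early, late):
    frame["log_murder_rate"] = np.log(frame["murder_rate"].clip(lower=0.01))

beta_early = smf.ols(formula, data=early).fit().params["shall_issue"]
beta_late = smf.ols(formula, data=late).fit().params["shall_issue"]

# If the sign flips on the newer adopters, the model has no predictive power.
print("original sample:", beta_early, "| new adopters:", beta_late)
[/code]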
John Lott, along with two other young researchers, Florenz Plassmann and John Whiteley, wrote a reply to Ayres and Donohue attempting to confirm the “more guns, less crime” hypothesis. However, when Ayres and Donohue examined Lott’s reply, they discovered numerous coding errors and empty cells that, when corrected, showed that RTC laws did not reduce crime and in some categories even increased it. In a later email exchange with Tim Lambert, Plassmann even admitted that correcting the coding errors caused his paper’s conclusions to evaporate. Eventually, Lott removed his name from the final paper, citing disagreements over edits (which turned out to be a conflict over a single word) that had been made to the reply to Ayres and Donohue’s paper.
(Source)
The above is just the tip of the iceberg. [5] Click the source link just above, along with these... ...to read yet more ways in which Lott's approach and resultant analysis are grossly flawed.