Tuesday, November 25, 2014

The Origins of the Value of a Statistical Life Concept

Like most economists, I sometimes find myself defending the crowd-pleasing position that it's possible for public policy purposes--and even unavoidably necessary--to put a monetary value on human life. For example, the current value of a "statistical life" used by the U.S. Department of Transportation is $9.1 million. If someone wants to try to understand this kind of calculation, rather than just railing at economists (and me in particular), a useful starting point is to consider the origins of this concept. H. Spencer Banzhaf explains in "Retrospectives: The Cold-War Origins of the Value of Statistical Life," in the Fall 2014 issue of the Journal of Economic Perspectives (28:4, 213-26). (As with all articles in JEP back to the first issue in 1987, the article is freely available on-line compliments of the American Economic Association. Full disclosure: I've been Managing Editor of JEP since the first issue.)

Banzhaf begins his story just after the RAND Corporation became an independent organization in 1948. He explains what happened with one of its first big contracts (citations omitted):

The US Air Force asked RAND to apply systems analysis to design a first strike on the Soviets. ....  Paxson and RAND were initially proud of their optimization model and the computing power that they brought to bear on the problem, which crunched the numbers for over 400,000 configurations of bombs and bombers using hundreds of equations. The massive computations for each configuration involved simulated games at each enemy encounter, each of which had first been modeled in RAND’s new aerial combat research room. They also involved numerous variables for fighters, logistics, procurement, land bases, and so on. Completed in 1950, the study recommended that the United States fill the skies with numerous inexpensive and vulnerable propeller planes, many of them decoys carrying no nuclear weapons, to overwhelm the Soviet air defenses. Though losses would be high, the bombing objectives would be met. While RAND was initially proud of this work, pride and a haughty spirit often go before a fall. RAND’s patrons in the US Air Force, some of whom were always skeptical of the idea that pencil-necked academics could contribute to military strategy, were apoplectic. RAND had chosen a strategy that would result in high casualties, in part because the objective function had given zero weight to the lives of airplane crews. 
RAND quickly backpedaled on the study and instead moved to a more cautious approach that spelled out a range of choices: for example, some choices might cost more in money but be expected to result in fewer deaths, while other choices might cost less in money but be expected to result in more deaths. The idea was that the think tank would identify the range of choices, and the generals would choose among them. But of course, financial resources were limited by political considerations, and so the choices made by the military would typically involve some number of deaths higher than the theoretical minimum that could have been achieved if more money had been available. In that sense, spelling out a range of tradeoffs also spelled out the monetary value that would be put on lives lost.
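To make that last point concrete, here is a minimal sketch of the arithmetic. The option costs and expected deaths below are invented for illustration, not drawn from the RAND study: whenever a decision-maker picks one option from such a menu, the cost difference divided by the difference in expected deaths bounds the monetary value implicitly placed on a life.

```python
# A minimal sketch of how choosing among options reveals an implicit value of
# life. The two options below are invented for illustration only.
option_a = {"cost": 120_000_000, "expected_deaths": 40}   # cheaper, riskier
option_b = {"cost": 200_000_000, "expected_deaths": 30}   # costlier, safer

extra_cost = option_b["cost"] - option_a["cost"]                            # $80 million
deaths_avoided = option_a["expected_deaths"] - option_b["expected_deaths"]  # 10
implied_value_per_life = extra_cost / deaths_avoided                        # $8 million

# Choosing the cheaper option A implies lives were valued at less than this
# figure; choosing the safer option B implies they were valued at least this much.
print(f"Implied value per life at the margin: ${implied_value_per_life:,.0f}")
```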

In 1963, Jack Carlson, a former Air Force pilot, wrote his dissertation, entitled “The Value of Life Saving,” with Thomas Schelling as one of his advisers. Carlson pointed out that a range of public policy choices involved putting an implicit value on a life. Banzhaf writes:

Life saving, he [Carlson] wrote, is an economic activity because it involves making choices with scarce resources. For example, he noted that the construction of certain dams resulted in a net loss of lives (more than were expected to be saved from flood control), but, in proceeding with the projects, the public authorities revealed that they viewed those costs as justified by the benefit of increased hydroelectric power and irrigated land. ...  
Carlson considered the willingness of the US Air Force to trade off costs and machines to save men in two specific applications. One was the recommended emergency procedures when pilots lost control of the artificial “feel” in their flight control systems. A manual provided guidance on when to eject and when to attempt to land the aircraft, procedures which were expected to save the lives of some pilots at the cost of increasing the number of aircraft that would be lost. This approach yielded a lower bound on the value of life of $270,000, which Carlson concluded was easily justified by the human capital cost of training pilots. (Note the estimate was a lower bound, as the manual revealed, in specifying what choices to make, that lives were worth at least that much.) Carlson’s other application was the capsule ejection system for a B-58 bomber. The US Air Force had initially estimated that it would cost $80 million to design an ejection system. Assuming a range of typical cost over-runs and annual costs for maintenance and depreciation, and assuming 1–3 lives would be saved by the system annually, Carlson (p. 92) estimated that in making the investment the USAF revealed its “money valuation of pilots’ lives” to be at least $1.17 million to $9.0 million. (Although this was much higher than the estimate from the ejection manual, the two estimates, being lower bounds, were not necessarily inconsistent.)
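For readers who want to see the arithmetic behind these "revealed value" estimates, here is a minimal sketch. The annualized cost figures are my own illustrative assumptions, chosen only to bracket Carlson's reported range; the logic is simply the annualized cost the Air Force accepted, divided by the expected lives saved per year, read as a lower bound.

```python
# A back-of-the-envelope version of Carlson's "revealed value" arithmetic:
# the annualized cost a decision-maker accepts, divided by expected lives
# saved per year, gives a lower bound on the implied value of a life.

def implied_value_per_life(annualized_cost, lives_saved_per_year):
    """Lower bound on the value of a life revealed by accepting this cost."""
    return annualized_cost / lives_saved_per_year

# Hypothetical annualized costs for the B-58 capsule ejection system, chosen
# only to bracket Carlson's reported range of $1.17 million to $9.0 million;
# his actual assumptions about cost over-runs, maintenance, and depreciation
# are not reproduced here.
for annual_cost, lives in [(3_500_000, 3), (9_000_000, 1)]:
    print(f"${annual_cost:,} per year / {lives} lives saved per year "
          f"-> at least ${implied_value_per_life(annual_cost, lives):,.0f} per life")
```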
Thomas Schelling (who shared the Nobel prize in economics in 2005 for his work in game theory) explicitly introduced the "value of a statistical life" concept in a 1968 essay called “The Life You Save May Be Your Own” (thus reusing the title of a Flannery O'Connor short story), which appeared in a book called Problems in Public Expenditure Analysis, edited by Samuel B. Chase, Jr. Schelling pointed out that the earlier formulations of how to value a life were based on the technical tradeoffs embedded in the costs of building dams or aircraft, and on the judgments of politicians and generals. Schelling instead proposed a finesse: the value of a life would be based on how consumers actually react to the risks they face in everyday life. Schelling wrote:

"Death is indeed different from most consumer events, and its avoidance different from most commodities. . . . But people have been dying for as long as they have been living; and where life and death are concerned we are all consumers. We nearly all want our lives extended and are probably willing to pay for it. It is worth while to remind ourselves that the people whose lives may be saved should have something to say about the value of the enterprise and that we analysts, however detached, are not immortal ourselves."
And how can we observe what people are willing to pay to avoid risks? Researchers can look at studies of the extra pay required for workers (including soldiers) to take on exceptionally dangerous jobs. They can look at what people are willing to pay for safety equipment. Policy-makers can then say something like: "If workers require an extra amount X in pay to accept a certain amount of risk on the job, or people are willing to pay an extra amount Y to reduce some other risk, then the government should also use those values when thinking about whether certain steps to reduce the health risks of air pollution or traffic accidents are worth the cost." Banzhaf writes:
Schelling’s (1968) crucial insight was that economists could evade the moral thicket of valuing “life” and instead focus on people’s willingness to trade off money for small risks. For example, a policy to reduce air pollution in a city of one million people that reduces the risk of premature death by one in 500,000 for each person would be expected to save two lives over the affected population. But from the individuals’ perspectives, the policy only reduces their risks of death by 0.0002 percentage points. This distinction is widely recognized as the critical intellectual move supporting the introduction of values for (risks to) life and safety into applied benefit–cost analysis. Although it is based on valuing risk reductions, not lives, the value of a statistical life concept maintains an important rhetorical link to the value of life insofar as it normalizes the risks to value them on a “per-life” basis. By finessing the distinction between lives and risks in this way, the VSL concept overcame the political problems of valuing life while remaining relevant to policy questions.
Thus, when an economist or policy-maker says that a life is worth $9 million, they don't mean that lots of people are willing to sell their lives for a $9 million check. Instead, they mean that if a public policy intervention could reduce the risk of death in a way that, on average, would save one life in a city of 9 million people (or, alternatively, ten lives in a city of 900,000 people), then the policy is worth undertaking as long as it costs no more than about $9 million per statistical life saved. In turn, that willingness to pay for risk reduction is based on the actual choices that people make in trading off money and risk.
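Here is a minimal sketch of that logic with illustrative numbers. The $9 million value of a statistical life roughly matches the Department of Transportation figure cited above; the willingness-to-pay figure and the policy cost are invented for the example.

```python
# A minimal sketch of how a value of a statistical life (VSL) enters a
# benefit-cost test. Numbers are illustrative.

VSL = 9_000_000  # dollars per statistical life (roughly the DOT figure)

# The VSL is built up from willingness to pay for small risk reductions:
# if each person would pay about $1 to cut their risk of death by one in
# 9 million, the implied VSL is $1 / (1/9,000,000) = $9 million.
willingness_to_pay = 1.0           # dollars per person (assumed)
risk_reduction = 1 / 9_000_000     # change in each person's probability of death
implied_vsl = willingness_to_pay / risk_reduction
print(f"Implied VSL: ${implied_vsl:,.0f}")

# Applying it: a policy covering 900,000 people that cuts each person's risk
# by 1 in 90,000 saves 10 statistical lives, so its benefit is valued at
# 10 * VSL = $90 million. It passes the test if it costs less than that.
population = 900_000
per_person_risk_reduction = 1 / 90_000
statistical_lives_saved = population * per_person_risk_reduction   # 10
benefit = statistical_lives_saved * VSL                            # $90 million
policy_cost = 60_000_000   # hypothetical cost
print(f"Statistical lives saved: {statistical_lives_saved:.0f}")
print(f"Valued benefit: ${benefit:,.0f}; worth undertaking: {benefit > policy_cost}")
```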