Ten Practical Principles for Policy and Program Analysis
Expert Insights
Published Jun 9, 2025
Alain C. Enthoven is Marriner S. Eccles Professor of Public and Private Management, Emeritus, at Stanford Graduate School of Business. His association with RAND began in 1955; from there, he went on to a long career in government and academia. RAND’s graduate school presented him with an honorary doctorate in 2008, and he returned briefly in 2017 to serve as faculty in residence.
I first wrote this in 1974 for an anthology on benefit-cost and policy analysis.[1] In 2017, while I was a visiting scholar at the Pardee RAND Graduate School (now known as the RAND School of Public Policy), then-Dean Susan Marquis suggested that I prepare an update for a republication by RAND.
When the discipline of policy and program analysis is finally codified, I hope that the following practical principles will not be overlooked.
Good analysis is the servant of judgment, not a substitute for it
Deciding any serious question of policy choice requires judgments about values, uncertainties, and intangibles. A responsible executive will usually believe that he, and not the analysts on the staff, should make the key judgments. However, he cannot be expected to think up personally what all the really important questions are.[2] That is why he needs a policy analysis. A good analysis should help the decisionmaker by explaining how the choice depends on key judgments rather than by trying to tell him what the answer is. A good analysis will search out and highlight the key questions of value, the uncertainties, and the intangibles and not bury them.
If an analyst is asked to make a recommendation, it is not inappropriate for him to do so. The credibility of recommendations will usually be enhanced by an explicit statement of the judgments on which they are based. However, the object of analysis should, in general, not be only to produce policy recommendations. Rarely are analyses broad enough to support recommendations rigorously. Usually, the recommendations are less valuable and less interesting than the content of the analysis: the way the problem is posed, the alternatives invented or designed, the data collected and evaluated, and the criteria used.
This point deserves emphasis because so much of the literature on decision theory describes how to find the best answer, given certain input data and assumptions, rather than emphasizing finding out how answers depend on assumptions. A good analysis will work the models backward and forward, i.e., from answers back to underlying assumptions, as well as from assumptions to answers, to clarify the relationships between assumptions and answers. A good analysis will include sensitivity tests and break-even calculations that tell the decisionmaker which assumptions really matter and which do not. The importance of this goes far beyond the usual notion of sensitivity analysis, that is, varying parameters in a given model to see how the variations affect the outcome or to demonstrate that the “best” answer is insensitive to minor variations in assumptions. A good analysis will search out hidden or implicit assumptions that can have a large bearing on the outcome. Often, the way the questions are asked or the objectives are stated can have a great deal to do with the outcome. A good analysis will develop such insights.
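The habit of working a model backward and forward can be shown with a toy calculation. Everything below is hypothetical (the program, its cost, and its value-per-unit figure are invented for illustration): the forward pass varies an uncertain assumption to see how the answer moves, and the backward break-even pass isolates the single judgment the decisionmaker must make.

```python
def net_benefit(effectiveness, cost=100.0, value_per_unit=25.0):
    """Net benefit of a hypothetical program: benefits minus cost."""
    return effectiveness * value_per_unit - cost

# Forward: vary the uncertain assumption and watch how the answer moves.
for eff in [2.0, 4.0, 6.0]:
    print(f"effectiveness={eff}: net benefit = {net_benefit(eff):+.1f}")

# Backward: the break-even calculation identifies the judgment that
# really matters -- here, whether effectiveness will exceed 4 units.
break_even = 100.0 / 25.0
print(f"the program pays off only if effectiveness exceeds {break_even}")
```

The point is not the arithmetic but the framing: the analysis ends by handing the decisionmaker one clear question (is effectiveness likely to exceed four units?) rather than a single "best" answer.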
The best analyses highlight which assumptions, initial judgments, and valuations lead to which conclusions—and how. The best analyses will define with greater precision than otherwise available the judgments the decisionmaker must make and the implications of his judgments. Of course, the decisionmaker may have made his goals and values clear to the analysts, in which case they do not need to be repeated in every analysis.
Analysis should be open and explicit; it is not a substitute for debate but should provide a framework for constructive debate
Every policy analysis should be spelled out clearly and explicitly, and the backup material should be made available to all interested parties so that they can examine the calculations, data, and assumptions leading to the conclusions. In fact, a deliberate objective of the presentation of an analysis should be to describe all calculations, data, and assumptions in such a way that they can be checked, tested, criticized, debated, and (possibly) refuted. Policy analysts should not want or expect the results of their analyses to be accepted blindly simply because they appear at the end of an impressive-looking document called a study. Rather, analysts should seek to provide a common intellectual framework that various parties with divergent views can use to think through the problem.
Open and explicit analysis is a valuable principle for several reasons. First, no procedure is proof against error. Open and explicit analysis is likely to be our best protection against persistence in error and reaching conclusions on the basis of hidden assumptions. Some agencies have successfully used adversarial proceedings based on analytical principles as a way of generating thoroughly evaluated analyses for top officials. For example, this was one of the foundations of Secretary Robert McNamara’s management system in the U.S. Department of Defense (DoD).[3] If, for example, an analysis of airlift and sealift requirements for movement of the U.S. Army was thoroughly checked by each of the military services and the Joint Chiefs of Staff and by civilian analysts in the Office of the Secretary of Defense and Bureau of the Budget—each with divergent and to some extent opposing viewpoints—and no significant errors or unresolved issues were found, the Secretary of Defense could be confident that the problem had been thought through and that the analysis was a reliable basis for decisionmaking. Thus, open and explicit analysis helps build confidence in the soundness of a study’s conclusions and acceptance of its results.
Second, no one discipline has all the knowledge or skills needed for policy analysis. A good analysis must integrate information developed by experts in many different fields. Thus, a good analysis should make it possible for each specialist to see how information he provided contributed to the end results.
Analysis is not a substitute for debate. A good analysis provides a framework for a reasoned, constructive, and (hopefully) convergent debate. Analysis can be helped by debate. In fact, debate between bureaucracies with opposing interests can often provide the stimulus and motivation for development of important analytical advances. For example, the analyses of strategic offensive and defensive force requirements done in 1961 in the various DoD offices seemed crude and naïve by the standards of four years later. The early analyses took the estimates of opposing Soviet forces as a given, independent of our own decisions. Later analyses, developed under the pressure of debate over the allocation of the defense budget, brought out the importance of Soviet reactions to our force decisions and served as the foundation for Secretary McNamara’s case against deployment of a full-scale antimissile defense of our cities. But the early analyses were a necessary step to the later ones. And they served the useful purpose of putting the burden on their critics to specify which factors were left out and how they should be included.
Do not force your problem into an optimizing model; there usually is no “best” answer, but you do not have to be able to define the best answer to identify and avoid bad ones
In the world of public policy analysis, there is rarely a “best” solution, because there is no single universally valid set of assumptions and there may be no agreement on values. There are good and bad answers, better and worse. Avoiding bad answers is an ambitious enough goal for most policy analyses.
Most mathematical decision theory models are procedures for finding an optimum, given certain assumptions and a criterion. The optimizing model is usually inappropriate because there are many criteria. A better model is satisficing, that is, trying to find solutions that are satisfactory with respect to the various important criteria and under many sets of assumptions.
An important corollary of this principle is that you do not have to be able to define the “best” answer to be able to identify bad or wrong answers.
A frequent form of bad answer is the flat-of-the-curve solution. For example, to what level should automobile emissions be reduced to improve air quality? Put another way, how much of its resources should society allocate to obtain cleaner air in this way? It has proved to be difficult to relate alternative emission levels to ambient air quality and to relate air quality to human health and well-being. But, in principle, an analyst might attempt to plot a graph relating expenditures on emission reduction to indices of air quality. The likely result would be a curve that rises sharply at first as the most effective measures (in relation to cost) are employed but that gradually flattens out as more costly technologies must be employed to reduce the smaller remaining amounts of emissions. While such curves would not show any single best point to which we should reduce emissions, they would show that successive improvements in air quality would become increasingly costly. A judgment would have to be made balancing the benefits of further improvements in air quality against the benefits obtained from other uses of the same resources. But had such an analysis been available at the time of the passage of the Clean Air Amendments of 1970, it might have helped Congress make a much more-informed decision about where the curve became too flat to justify further expenditure. Such a curve would not have told Congress the best answer, but it might have helped identify some bad ones.
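The flat-of-the-curve argument can be made concrete with invented numbers (the figures below are purely illustrative, not estimates from the 1970 debate): tabulate cumulative spending against an air-quality index and compute the marginal cost of each further improvement.

```python
# Hypothetical points (cumulative spending in $ billions, air-quality
# index) on a diminishing-returns curve; the numbers are illustrative only.
curve = [(0, 0), (1, 40), (2, 60), (4, 72), (8, 78), (16, 80)]

# Marginal cost of each additional point of air-quality index.
marginal_costs = []
for (c0, q0), (c1, q1) in zip(curve, curve[1:]):
    mc = (c1 - c0) / (q1 - q0)
    marginal_costs.append(mc)
    print(f"${c0}B -> ${c1}B: ${mc:.2f}B per index point")
```

Nothing in the output identifies a single best point, but the steep rise in marginal cost marks the region where further spending becomes a bad answer.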
Keep it simple
Even the most intelligent but busy decisionmakers must work in terms of simple ideas, both in reaching their own conclusions and in communicating them to others. If you cannot provide the user of your analysis with a clear statement of the essentials of the problem, you are forcing him to do the job for himself—without the benefit of all the understanding you have developed by doing the analysis. The important thing is to be sure the simple ideas are valid insights and not the wrong simplifications or oversimplifications.
You may have to go through some terribly complex analysis to achieve the key insights. But the job is not done until you come out the other side and are able to explain the essentials of the problem in clear, understandable terms. If you have not been able to distill from your results a set of simple propositions, you have not thought the problem through. A good analyst will fit the analytical tools to the problem at hand and use the simplest tools that will fit the problem. He will emphasize definition and solution of the problem with whatever tools are most appropriate rather than emphasizing the application of a preconceived methodology. Complex mathematical and computerized methods certainly have their place, especially when many quantifiable factors and numerous calculations are involved. But it is impressive how much can be done with the simplest tools of analysis.
In many problems in which numerous detailed calculations are used, the results can be approximated quite well by simple calculations based on averages. And the simple models can be used to develop insight and to do gross sensitivity tests. A psychological advantage of the simple methods is that they are more likely to focus the user’s attention on the assumptions and the relationship of assumptions to outcomes rather than on the intricate details of the calculations.
In the mid-1960s, the Systems Analysis team produced for Secretary McNamara a 40-page draft memorandum for the President. It was a very thorough analysis of the many complex issues involved in deciding whether to deploy an antiballistic missile (ABM) defense system to defend our cities, at a cost of many tens of billions of dollars. Secretary McNamara thanked me and praised the work of Systems Analysis; he obviously had studied the document carefully. “This is an excellent report. Now I feel I understand the issue. Now I have one more request: write it in three pages.”
“But Bob,” I replied. “This is a very complex and important issue.”
“I know that,” he said. “But the president won’t read it if it is longer than three pages.”
It was an important learning experience for me. I did want the president to read it, and I came to appreciate the many complex issues he had to deal with. So I stepped back, thought about the big picture, and asked, “What were the most essential points?” As I recall, with help from the Systems Analysis Strategic team, I did produce the three pages. They focused on the most essential point: The Soviets had the technical and economic resources to deploy countermeasures, like ours. That would render our system ineffective. And such a system would be a serious threat to their deterrent against U.S. attack. So it was very likely they would deploy the countermeasures.
It is better to be roughly right than exactly wrong
Getting the basic facts roughly right is the most important, difficult, and underrated problem in policy analysis. Many examples of failure in policy formulation can be traced to a failure to understand the basic facts correctly. The world abounds in misinformation. Most people are too busy to check out the facts. Unfortunately, policy analysts are taught a great deal about how to process presumably valid information to produce policy recommendations but little to nothing about how to test the information for validity. Perhaps the reason for this is that the former is subject to all sorts of elegant formulations of great interest to the readers of learned journals, but it is hard to say much of general interest about the latter.
One of the most celebrated examples of this is the myth that dominated the thinking of North Atlantic Treaty Organization (NATO) military and political leaders for two decades: that the land forces of the NATO Alliance were hopelessly outnumbered by those of the Warsaw Pact, some 175 Pact divisions compared to 25 for NATO. Countless policy analyses were based on this premise. Yet, at least through the 1960s, NATO actually had more soldiers and more military vehicles than the Pact. But there are many other examples. U.S. policy decisionmaking about Vietnam in the 1960s was badly flawed by a lack of reliable basic information about what was going on.
Policy analysts should focus their efforts on being sure that their most important data are roughly right rather than on undertaking refined manipulations of the data on the assumption that they are accurate.
How can you be sure you have the basic facts roughly right? There is no simple answer. Just being aware of the problem can make a lot of difference. Here are some suggestions.
First, thinking of the right questions and being willing to pursue them is much of the battle. If you cannot think of good questions, ask around. Perhaps others can.
Second, you cannot possibly check everything. You have to believe some things. So focus on essentials. Do sensitivity analyses (i.e., test the sensitivity of the outcome to variations in the input data). Decide which are the few decisively important pieces of information, and concentrate on them.
Third, data depend on definitions and assumptions. The data may be sensitive or insensitive to small changes in the definitions on which they are based. Good data are answers to precisely formulated questions. To what question are your data the answer? For example, what is the cost in federal expenditures of a proposed national health insurance law? Compared to what? Compared to the most probable level under existing legislation? Or compared to an alternative law? Assuming what benefit package? How defined? How administered? And assuming what patterns of response by consumers and providers of health care (e.g., elasticities of supply and demand)? In general, the cost of a program can only be estimated in comparison with a precisely defined alternative. Be sure you know the important assumptions and definitions underlying your data.
Fourth, where possible, develop independent sources of information. Try to test your information by triangulation.
Fifth, look for contradictions. Do simple analyses of different parts of the story to see how they fit. For example, in attacking the myth of the 175 Soviet divisions, defense analysts noted that, in the early 1960s, the U.S. Army was spending $2.2 billion per year to equip 22 divisions; at that rate, the Soviets would have had to be spending $17.5 billion to equip 175, far in excess of what independent sources indicated was possible. There was a major discrepancy. Also, our Army had about 1 million soldiers on active duty, while the Central Intelligence Agency estimated the Soviet Army at about 2 million men—a two-to-one ratio in manpower, not the nearly eight-to-one ratio the division counts implied.
In the famous Equity Funding life insurance scandal, which came to light in 1973, one executive who was not in on the fraud figured out that something was wrong by making such cross-checks. For example, he was suspicious of the total sales reported by the company. He got a reliable sales figure for the most productive of the company’s five regions, multiplied it by five, and got a number far below the company’s reported total sales.
Sixth, try to spot-check key data back to their original or foundational sources. How were they obtained? What measurements were made? How were those measurements processed to produce your data?
Seventh, consider the incentives and biases of those providing the information. Are they under pressure to report progress or improvements over the previous period? Consider the procedures that produced the information. Were there independent checks?
Eighth, if you are caught up in a system, reach outside for data. Do not just check internal consistency. Some auditors could have saved themselves a lot of embarrassment in the Equity Funding scandal if they had taken the trouble to run an independent mail or telephone check of a sample from the company’s list of alleged policyholders.
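The fifth and eighth suggestions boil down to simple arithmetic on independent figures. Here is a sketch using the division-cost numbers quoted above; the Equity Funding sales figures are hypothetical, and only the method (best region times five) comes from the text.

```python
# Cross-check 1: the 175-division myth, using the figures quoted above.
us_cost_per_division = 2.2 / 22              # about $0.1 billion per division
implied_soviet_budget = 175 * us_cost_per_division
print(f"implied Soviet spending: ${implied_soviet_budget:.1f} billion")

# Cross-check 2: an Equity Funding-style bound. The sales figures here
# are invented; the method (best region times five) is from the text.
best_region_sales = 30.0                     # reliable figure, $ millions
upper_bound = 5 * best_region_sales          # five regions, none more productive
reported_total = 400.0                       # company's claimed total, $ millions
suspicious = reported_total > upper_bound
print("reported total exceeds any plausible sum" if suspicious else "consistent")
```

Neither check proves fraud or error by itself; each flags a contradiction that demands an explanation before the data are used.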
Always start by looking at the grand totals
I like to call this “McNamara’s First Law of Analysis” because Secretary McNamara often invoked it when someone presented him with a partial picture of a problem. I described it in 1969 this way:
Whatever problem you are studying, back off and look at the overall context. Don’t start with a small piece and work up—look at the grand total first and then break it down into its constituent parts. Thus, if cost is the issue, look at total system cost over the useful life of the system, not just this year’s procurement costs. If you are analyzing a particular strategic offensive weapon system, start by looking at the total strategic offensive forces.
I can remember a Navy briefing to the Secretary of Defense on the Polaris submarine-launched intermediate-range ballistic missile program in 1961. In a very orderly way it laid out the targets to be attacked, the probabilities of destroying each of the various targets, the number of missiles on station and the number of submarines, and therefore, the total force required—a very fine job on a small piece of the problem. The trouble with the briefing was that throughout the whole analysis of requirements for Polaris, there was not one mention of our bomber force or our land-based intercontinental-ballistic-missile force. But one can’t make sense out of how many Polaris submarines we ought to have without looking at our total strategic offensive forces.[4]
The same principle is equally relevant in such fields as pollution control and health care. An analyst who ignores this principle in evaluating water pollution control programs may find that his preferred solution only transforms the problem into one of air pollution by sending the waste up the smokestack. To make sense out of a piece of the problem of pollution, it is usually necessary to pay some attention to the total problem. Analyses of hospital behavior are often flawed by the assumption that hospitals are autonomous decisionmaking units that somehow act in their own best interests. In fact, hospitals compete for doctors who bring them patients, and one cannot understand hospital behavior except as part of a larger system that includes the doctors.
Consider the basis on which decisions are actually being made, and ask whether you can improve on it
A good analysis will usually begin by seeking to develop an intellectually satisfying basis for deciding the problem at hand. That search is appropriate and commendable and sometimes will be rewarded. But sometimes it will be fruitless. Sometimes it will lead to the paralysis that can come from facing unanswerable questions. In such cases, it may be more productive to examine the basis on which decisions are now being made and to seek ways of improving on it. I learned this from an experience in DoD in the 1960s.
For years, the Systems Analysis office struggled with the question, “How much tactical air power is enough?” In the early 1960s, we reasoned that the effectiveness of the classic tactical air missions—air superiority, close air support, and interdiction—could be measured by their impact on the force ratio between opposing land forces, and thus that the land/air “trade-off” would be a decisive factor in sizing U.S. tactical air forces. Approaching the problem in the manner of an economist or operations analyst, we tried to develop trade-off curves for land and air forces yielding the same effectiveness.
Put another way, suppose that the United States were to spend an additional billion dollars to buy and operate tactical air forces that would destroy and disrupt enemy troop and supply movements deep behind his lines, thus limiting his ability to sustain operations at the war front. Suppose also that by expenditure of $1 billion on tactical air, we were able to reduce by fifty thousand the number of personnel the enemy could support at the front. Then, would our land forces be better off, thanks to that enemy force reduction by our tactical air forces, than they would have been if we had spent the additional billion dollars to provide more land forces? Essentially, that is the question the Systems Analysis office spent years trying to define, document, and analyze in the hope of supporting a reasoned judgment. Unfortunately, we were unsuccessful. We simply could not find the relevant data with which to calculate how much better the United States would do if it had another wing of tactical air in a particular theater. Nor could we get a reasoned judgment from the military experts, based on the available data. Their conclusions were reached by inter-Service negotiation rather than by analysis. One major close-air-support study in the early 1960s, involving the Army and the Air Force, reached the informative conclusion that we needed more close air support and more land forces. Joint studies in which the Systems Analysis office participated became prolonged debates over basic assumptions and facts. Without agreement on the basic input factors, it was impossible to derive usable results about possible land-air trade-offs.
After blunting our lance for several years on the land/air trade-off problem, we realized that the actual decisionmaking was being based on much simpler reasoning, such as a comparative count of enemy aircraft versus ours, and that this count was wrong. It was wrong, first, because it compared a number close to the total inventory of our potential enemies (the Air Order of Battle) with only a fraction of our own inventory (the Unit Equipment, or that portion of the forces nominally assigned to combat units). It was also wrong because it ignored qualitative differences between our aircraft and theirs. For example, a high percentage of Soviet aircraft were defensive interceptors, while a high percentage of ours were multipurpose fighter-bombers with good offensive capability. So, in 1964 the Systems Analysis office switched from trying to develop a sophisticated solution to the total tactical air problem to just getting the numerical counts straight and to developing effectiveness indicators that would take account of the expensive qualitative advantages being built into U.S. aircraft. In retrospect, it is clear that we should have made this switch sooner.[5]
Do not bog down trying to answer the unanswerable. Instead of asking “How much should we spend for medical care?” ask “Can we devise ways to get the same benefits at less cost?” Instead of “How do people value their lives?” ask “Can we rearrange existing patterns of spending to save more lives?”
It is important to recognize that good analysis of complex policy problems takes time. It is a gradual learning process. It may take years, perhaps a decade or more, for a team of good analysts to achieve a satisfactory understanding of a large and complex problem. But usually, practical results are needed along the way. Thus, improvements to today’s basis for decisionmaking may be both practical contributions now and steps to a more satisfactory analysis tomorrow.
A good analysis includes design of new alternatives: A good analyst will go beyond the question asked to see whether more-fundamental questions need to be answered first
The original terms of reference for an analysis may specify the evaluation of two or several alternatives. Sometimes the alternatives given will reflect the limited knowledge of the party asking the question; sometimes they will reflect bureaucratic politics or a desire to manipulate the outcome. I can make Alternative A a sure winner if I can limit the other alternatives to B and C. A good analyst will insist on his duty to use his analytical results to design and evaluate alternatives other than the ones initially specified, alternatives that may be better than the starters. Indeed, the most valuable products of a good program analysis may be insights into how to design better programs. A design of a good new alternative is likely to be worth a lot more than a thorough evaluation of some unsatisfactory old alternatives.
Similarly, the questions originally asked will reflect the limited knowledge of the party asking them. Often, figuring out the right question or at least good questions is most of the job of policy analysis. (Getting the basic facts right is the other 90 percent.) While answering the question asked, a good analyst will seek out the more fundamental questions that need to be answered first.
Here, beginning with a complete goal set becomes especially important. The practice in policy analysis is often to begin by stating a problem: There is “too much” pollution, or there are “too few” children graduating from high school on time. In these cases, the goal is to have tolerable levels of pollution and high levels of high school graduation. But that is not always the case—sometimes the goals are not obvious, and the analyst must dig deep to uncover them. Like an architect or aerospace engineer, you must know what capabilities you are designing for before you begin. With nuclear strategy, it was assumed for a long time that the goal was to deter the Soviets from attacking our NATO allies, so there was a policy of trip wire and massive retaliation. But when you consider the goals more closely, you see that the goal is in fact to avoid a nuclear war without surrendering—and trip wire was very dangerous because all it took was a small Soviet incursion into NATO territory to trigger a nuclear strike no matter how disproportionate. President John F. Kennedy criticized this strategy by saying that it presented him with a choice between suicide and surrender, so he demanded better alternatives. With avoidance of a nuclear war without surrender as the goal set, the strategy became, in part, to build up nonnuclear forces in Europe so that, if war did break out, it was more likely to be fought successfully with nonnuclear weapons. With adequate conventional forces, reliance on the threat of first use of nuclear weapons could be avoided.
Or, for another example, it is often said in public education that the goal is not to have an achievement gap. But if you dig deeper and consider that the goal is for all students to achieve at the best of their ability, it becomes apparent that this requires personalized learning, and personalized learning cannot be facilitated through the current design of schools. So, a key task of policy is to facilitate innovation.
In some cases, it will be relatively easy to achieve one or two goals of a system but not the complete goal set. For example, in health care, the goal set could be said to be better health at a price individuals and society can afford, with care available for everyone. Many strategies can achieve two of the goals—quality care available to all, but unaffordable, or affordable care that is of low quality—but achieving all three is the task. Fidelity to a complete goal set will often lead a policy designer to novel solutions.
Do not overemphasize the quantitative aspects, and do not ignore nonquantifiable factors that may be of decisive importance
There is a danger that an analyst, or anybody else for that matter, may become so intrigued by a quantitative model that elegantly relates some elements of a problem that he ignores intangible factors of great importance. Perhaps a person trained in quantitative methods is more prone to this error than others would be because numbers are an important part of his vocabulary, and quantitative relationships are an important part of his thought processes. (Similarly, a person not trained to deal with quantitative relationships may be prone to undervalue them.) A good analysis will identify the most important factors in a problem, whether they can be represented numerically or not. A good analysis of the quantitative aspects of a problem will clear the decks for consideration of important intangibles. It will free the decisionmaker from the burden of doing his own arithmetic. A good analysis of the nonquantitative or intangible factors will describe as clearly as possible how they can have a significant impact on the outcome. This is likely to be more challenging than analysis of the quantitative aspects.
In a similar vein, a good analyst will recognize that many people who lack quantitative analytical skills but have experience and substantive knowledge may have the most to contribute to the understanding of a problem. He will strive to be sensitive to the insights such people offer and to communicate his results to them in clear, simple terms. He will take care to see that his analytical skills act as an aid to their judgment and not as a barrier to their participation in the policymaking process. However, he will also learn to recognize that the possessors of a particular kind of experience may have very powerful vested interests and may express judgments that best serve those interests.
For example, the United States fought the war in Vietnam in almost complete ignorance of the motives and will of the Vietnamese communists. We viewed it as part of the Cold War with the Sino-Soviet bloc and the effort to prevent the spread of communism. They viewed it as a war for independence from the colonial power, France, and did not want the United States to be the new colonial power to rule them. Perhaps, if we had understood that better, we could have found an alternative to a costly and eventually unsuccessful war effort.
Think critically and realistically about the prospects for implementation
Remember what Robert Burns had to say about the best-laid plans . . . .
A good analysis will systematically consider the problems in carrying out the various alternatives and the prospects for success. An alternative might appear best when ignoring problems of implementation but might not really be best when considering the problems of implementation. For example, will the people or organizations affected really respond as assumed? What incentives motivate them? Is the proposed course of action compatible with the institutions that must carry it out? Will the adoption of the alternative suggested by your analysis really make things better, all things considered?
The problem can be illustrated by an example from history. In 1967, I recommended to the Secretary of Defense that he approve procurement of an ABM defense system to protect our intercontinental ballistic missile (ICBM) silos from the threat of an attack by Soviet ICBMs armed with accurate multiple independently targetable reentry vehicles (MIRVs). This appeared to be a good idea because, within the span of time and range of threats of most interest, the deployment of an already-developed system to protect an already-deployed system would permit us to meet our needs for protected retaliatory power at less cost than the cost of developing a whole new system. However, the Army had for years recommended the full-scale deployment of a national ABM system to protect our cities from Soviet attack and had built its plans accordingly. The Secretary of Defense had turned down such a system because of the virtual certainty that the Soviet reaction to such a deployment would render it ineffective.
The implementation problem that was not overcome stemmed from the fact that, in the face of a decision to deploy the ABM to protect the ICBMs (plus a thin or anti-Chinese defense of the whole country), the Army could not or would not make the appropriate changes in deployment plans. Whatever might have been the intentions of individual officers, the net result was that the Army carried on as if the system the Secretary of Defense had approved was merely the first installment on the full national ABM defense of our cities that the Army considered desirable, not a limited system for protecting ICBMs. And the Army began to locate its missile launchers accordingly.
While he remained Secretary of Defense, Secretary McNamara intervened personally to carry out the intent of the decision. But his successors were busy with other problems. Several years later, it became clear that the Army was actually buying the first installment on a national system, which would have been a great waste of money. Congress then voted down the ABM, and the fact that the actual deployment did not match the stated purpose was one of the strongest arguments against it. A deeper insight into how the Army would actually respond to the decision probably would have led to a different recommendation.
Epilogue
As I look back and reflect on all this, I would add a very important lesson usually not taught to program analysts: Study the relevant history, and reflect on its meaning for the current problem. All public policy problems have their own histories that shape where we are now and how we got here. Why was this problem not solved before? Were apparently attractive policies tried, only to fail? Are there important and relevant lessons? As the philosopher George Santayana put it, "Those who cannot remember the past are condemned to repeat it."
Also, it is very important to understand the cultures of the organizations you are dealing with, that is, their shared beliefs, goals, customs, and patterns of behavior. Culture can have profound effects on the acceptability of your findings and the prospects for successful implementation of your recommendations.
Acknowledgment
I would like to gratefully acknowledge the contributions of Tim McDonald, who suggested several extended rewrites.
Notes
- [1] This paper was originally published as a book chapter: Alain Enthoven, “Applications of Policy Analysis: The Contemporary Context,” in Richard Zeckhauser, ed., Benefit-Cost and Policy Analysis Annual 1974, Aldine Publishing Company, 1975. I updated the content in 2017 while serving as Scholar in Residence at the RAND School of Public Policy.
- [2] One of the first issues for an update is gender pronouns. I originally used the male pronoun he for responsible executives because, in 1974, the great majority of executives were men. That has changed in the past four decades, a change I welcome. I could write he or she or she/he or (s)he or they, but these formulations are awkward, sometimes ungrammatical, so I decided to stick with he and hope that readers will include both genders when I use the male pronoun.
- [3] See Alain C. Enthoven and K. Wayne Smith, How Much Is Enough? Shaping the Defense Program, 1961–1969, Harper and Row, 1971, Ch. 2.
- [4] Alain C. Enthoven, “The Planning, Programming, and Budgeting System in the Department of Defense: Some Lessons from Experience,” The Analysis and Evaluation of Public Expenditures: The PPB System, A Compendium of Papers Submitted to the Subcommittee on Economy in Government of the Joint Economic Committee, Congress of the United States, Vol. 3, Washington, 1969, p. 904.
- [5] Alain C. Enthoven and K. Wayne Smith, How Much Is Enough? Shaping the Defense Program, 1961–1969, Harper and Row, 1971, pp. 216–217.
Document Details
- Copyright: RAND Corporation
- Availability: Web-Only
- Year: 2025
- DOI: https://doi.org/10.7249/PEA3956-1
- Document Number: PE-A3956-1
The revised publication was supported by the RAND School of Public Policy and by the RAND Systems Transitions Applied Research (STAR) Initiative, as well as by gifts from generous RAND donors and income from operations.
This publication is part of the RAND expert insights series. The expert insights series presents perspectives on timely policy issues.
This document and trademark(s) contained herein are protected by law. This representation of RAND intellectual property is provided for noncommercial use only. Unauthorized posting of this publication online is prohibited; linking directly to this product page is encouraged. Permission is required from RAND to reproduce, or reuse in another form, any of its research documents for commercial purposes. For information on reprint and reuse permissions, please visit www.rand.org/pubs/permissions.
RAND is a nonprofit institution that helps improve policy and decisionmaking through research and analysis. RAND's publications do not necessarily reflect the opinions of its research clients and sponsors.