Daniel Sutter: Are we running out of resources?

Thanksgiving began as a celebration of Nature’s bounty. Nature’s bounty includes natural resources. Despite reports to the contrary, Cato Institute research demonstrates that we still have plenty of natural resources. Human ingenuity and nature’s generosity explain why.

That we must run out of oil, natural gas, and other resources seems obvious. Since we cannot manufacture deposits of oil, copper, zinc or other resources, these must surely get used up one day, right? News stories repeat this refrain. Fifteen years ago, news abounded of the end of cheap oil. We appeared to be running out of oil and natural gas during the energy shortages of the 1970s. Oil reserves were supposed to be gone by 2013. Yet we still have plenty of energy and minerals; U.S. oil production hit an all-time high in 2018. What happened? I’ll consider two factors.

First, reported resource reserves are proven reserves, or deposits of a known location, size, and quality. Dividing proven reserves by annual use gives the number of years of oil, copper, or any other resource remaining. We have an estimated 53 and 46 years of oil and copper left. Proving the location and quality of reserves takes work. As economist M. A. Adelman emphasized, proven reserves are produced. Investing in proving reserves not needed for 100 years will lose money. We have found only a tiny fraction of the resources estimated to be in the Earth’s crust. New reserves will be found as existing ones are used, so we might report 50 years of remaining reserves for decades to come. New and better methods of extraction also increase effective reserves. Horizontal drilling and hydraulic fracturing have unlocked shale oil deposits. Earlier, steam injection increased production from existing fields.

Second, things only become resources when people figure out how to use them to produce goods and services. Saudi Arabia’s oil deposits generated no wealth for centuries. Knowledge is the ultimate source of value in our economy, and the mind is the source of knowledge. As economist Julian Simon put it, humans are the “ultimate resource.” Usually more than one formula or process can produce a good. When we are cooking, we can usually substitute for a missing ingredient and still produce a tasty dish. We can use less of a resource if needed, or substitute something else; in the 1800s, people switched from whale oil to kerosene for lighting homes. We need not run out of resources because we can use alternatives if a specific mineral or fossil fuel runs out.

Because reserves poorly measure resource availability, Cato’s index uses prices instead. Economic theory tells us that prices should reflect the best guesses concerning future discoveries, improvements in extraction, and emerging substitutes. If we are truly running out of something, its price should increase sharply. This was the basis for Julian Simon’s bet with Stanford University biologist Paul Ehrlich. In 1980, Simon let Ehrlich pick five resources that he thought were most likely to become depleted. Ehrlich selected chromium, copper, nickel, tin and tungsten; by September 1990, their prices had fallen and Simon won.

The new Cato measure is accordingly called the Simon Abundance Index and uses fifty commodity and resource prices. Price comparisons over time require adjustment, most importantly for inflation. But since earnings rise in a growing economy, the Index also adjusts for income. This puts commodity prices in terms of time, say the number of hours of work required to buy ten gallons of gas.
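Both calculations are simple arithmetic. Here is a minimal sketch of the reserves-to-use ratio and of a time price; every number in it is an illustrative placeholder of my own choosing, not a figure from the Simon Abundance Index.

```python
# Minimal sketch of the two calculations described above.
# All figures are illustrative placeholders, not the Simon Abundance Index's data.

def years_of_reserves(proven_reserves: float, annual_use: float) -> float:
    """Reserves-to-use ratio: years that current proven reserves would last
    at the current rate of use, assuming no new discoveries."""
    return proven_reserves / annual_use

def time_price(money_price: float, hourly_earnings: float) -> float:
    """Hours of work needed to buy something at the given money price."""
    return money_price / hourly_earnings

# Hypothetical oil figures: 1.7 trillion barrels proven, 32 billion barrels used per year.
print(round(years_of_reserves(1.7e12, 32e9)))   # roughly 53 years

# Hypothetical gas purchase: ten gallons at $2.60 each, average earnings of $23 per hour.
print(round(time_price(10 * 2.60, 23.0), 2))    # roughly 1.13 hours of work
```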
Simon Index prices fell 65 percent between 1980 and 2017 after adjusting for inflation and earnings. When adjusting only for inflation, prices fell 36 percent. Over these years, world population increased by more than three billion persons. Markets found enough new reserves to accommodate population growth.

Limits exist to Nature’s bounty, to our ability to harvest this bounty, and to the available substitutes for resources. And we must consider fossil fuels’ impacts on pollution and climate change. Still, the Simon Index shows that we are not running out of resources. Because knowledge creates natural resources, we can potentially maintain a growing economy for generations to come.

Daniel Sutter is the Charles G. Koch Professor of Economics with the Manuel H. Johnson Center for Political Economy at Troy University and host of Econversations on TrojanVision. The opinions expressed in this column are the author’s and do not necessarily reflect the views of Troy University.
Daniel Sutter: The good and bad economics of plea bargains

Plea bargains let persons accused of crimes plead guilty and receive reduced charges or a reduced sentence. Although some people find the reduced punishments for criminals offensive, this bargaining makes economic sense. But our mass incarceration illustrates a limit of the economic argument.

Our criminal justice system extensively employs pleas; 97 percent of criminal convictions result from such bargains. Although TV dramas focus on jury trials (and particularly defense lawyers like Matlock or Perry Mason), trials are rare.

Plea bargains make economic sense because trials are costly. Trials require courtrooms, lawyers, judges, court reporters, bailiffs and juries. Witnesses must come to court to testify. A guilty plea saves these costs. Why should defendants ever plead guilty, willingly agree to go to prison, and waive their right to an appeal? A plea deal must offer defendants a better outcome than conviction at trial. In a murder case, for example, prosecutors might agree not to seek the death penalty. A plea-bargained conviction ensures at least some punishment for a crime and helps deter crime overall.

This bargaining situation parallels labor negotiations. Strikes are costly: workers miss paychecks, factories lie idle, and businesses might permanently lose customers. Both labor and management are worse off than if they had agreed to the same contract with no strike. Strikes represent bargaining failures, and in this view so do trials.

The economic model of bargaining predicts that plea deals should reflect the strength of the evidence. The prosecution will not give much with an open-and-shut case, but the defense attorney should recognize this and counsel taking a deal that shaves a few years off the sentence. If important evidence gets suppressed or a witness recants their testimony, the now-shaky case makes prosecutors agree to a reduced charge. Actual guilt or innocence is secondary in the bargaining model to the likelihood of conviction at trial. While we might hope that innocent defendants always get acquitted, wrongful convictions happen, especially with overworked and underfunded public defenders. An innocent person should consider a deal if they look guilty enough. Emotions, not logic, might explain an innocent person’s refusal to plead guilty. We must move past the fantasy that only a guilty person would ever plead guilty.

Building criminal justice almost exclusively around plea bargaining has negative consequences. These highlight the limits of the economic focus on trial costs. Plea bargains enable incarceration on the American scale, with over 2.3 million persons behind bars as of 2016. Whether you think our current incarceration rate is repressive or responsible for the significant drop in crime over the past three decades, mass incarceration could not happen without low-cost plea bargains. The constitutional right to a speedy trial would be violated without guilty pleas; providing speedy adjudication at trial would require many more judges, trial lawyers, and courtrooms.

Another negative of plea bargaining is adding charges to encourage a deal. This is largely necessary given the logic of bargaining. Suppose that the fair sentence for a crime is ten years. If this is the maximum sentence at trial, a defendant will only accept a plea for a shorter sentence. Prosecutors need to threaten twenty years to induce a plea to ten years. This practice has received attention in the ongoing college admissions bribery case. Actress Lori Loughlin and the other parents refusing plea deals were recently hit with additional bribery and conspiracy charges.
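The arithmetic behind that threat is simple. Here is a minimal sketch of the bargaining logic for a risk-neutral defendant; the probabilities and sentences are illustrative numbers of my own choosing, and real decisions also involve risk aversion, legal costs, and collateral consequences.

```python
# Minimal sketch of the plea-bargaining arithmetic described above.
# The probabilities and sentences are illustrative; real decisions also involve
# risk aversion, legal costs, and collateral consequences.

def expected_trial_sentence(p_conviction: float, sentence_if_convicted: float) -> float:
    """Expected sentence, in years, for a risk-neutral defendant who goes to trial."""
    return p_conviction * sentence_if_convicted

def accepts_plea(plea_offer: float, p_conviction: float, sentence_if_convicted: float) -> bool:
    """A risk-neutral defendant takes the deal only if it beats the expected trial outcome."""
    return plea_offer < expected_trial_sentence(p_conviction, sentence_if_convicted)

# If trial exposure is only the "fair" ten years and conviction is 80 percent likely,
# a ten-year plea offer loses to the eight-year expected trial sentence.
print(accepts_plea(10, 0.8, 10))    # False

# Threatening twenty years at trial (added charges) makes the same ten-year offer attractive.
print(accepts_plea(10, 0.8, 20))    # True
```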
Finally, pervasive plea bargaining might undermine the quality of criminal evidence generally. Cross-examination uncovers mistakes, lies, and bogus theories, but only at trial. If over 90 percent of convictions come from pleas, the evidence need only be strong enough to induce a deal, not to withstand cross-examination. Sloppy and faked drug tests in two different Massachusetts crime labs recently led to 47,000 convictions being thrown out. Weak evidence also increases the likelihood of innocent people being accused and forced to plead guilty.

We may wish to blame “the system” for plea bargaining’s problems, but ultimately we fail to provide sufficient resources for more trials. This pressures prosecutors to coerce pleas, inevitably producing miscarriages of justice. Economists contribute too, by overemphasizing the immediate cost savings from plea bargaining.

Daniel Sutter is the Charles G. Koch Professor of Economics with the Manuel H. Johnson Center for Political Economy at Troy University and host of Econversations on TrojanVision. The opinions expressed in this column are the author’s and do not necessarily reflect the views of Troy University.
Daniel Sutter: The games donors and universities play

Controversies over gifts at two SEC schools, Alabama and Missouri, have recently attracted attention. The cases highlight the tension between donor goals and university administration and relate, I think, to conservatives’ increasingly dim view of higher education.

Let’s consider the Alabama case first. In June, the trustees returned the largest single gift in school history, $26 million from Hugh Culverhouse Jr. (only $21 million had then been received). Mr. Culverhouse was the school’s largest donor; the business school is named for his late father, the long-time Tampa Bay Buccaneers owner, Alabama grad, and large donor. Alabama named the law school for Mr. Culverhouse. The return occurred after Mr. Culverhouse called for an out-of-state student boycott over Alabama’s new abortion law, but was apparently due to his attempts to influence law school operations, allegedly including student admissions and faculty hiring. According to emails, Mr. Culverhouse may have had good intentions. He was unhappy with the quality of candidates for a constitutional law professorship and perhaps hoped that his gift could enable a hire to boost the law school’s national profile. The resolution here seems quite honorable; if you discover that you and your partners have irreconcilable expectations, you should shake hands and part ways.

The Missouri case starts in 2002 with a $5 million gift from the estate of Sherlock Hibbs to fund professorships in Austrian economics. Instead, Missouri funded positions for business professors with research unconnected to Austrian economics. This is where things get fascinating. Mr. Hibbs’ will included a provision that if the money were not used to hire Austrian economists, the gift would go to Hillsdale College instead. (Note to interested donors: Troy University has a strong program in Austrian economics!) Hillsdale recently sued to force the transfer, generating publicity. Missouri produced statements signed by the professors attesting that they are adherents of Austrian economics.

Donors have long accused universities of using gifts meant to hire conservative and free-market professors for other purposes. I think this is related to the partisan decline in confidence in higher education. A 2019 Pew Research Center study found that 59 percent of Republicans held a negative view of higher education versus 33 percent positive, compared with 67 percent positive and 18 percent negative among Democrats. In 2015, 54 percent of Republicans viewed higher ed positively, versus 37 percent negatively. I do not believe that ignoring conservative donor intent caused this attitude shift. It likely reflects broader social forces and our general political polarization. But using gifts from conservatives to further liberal goals enables such an attitude change.

What limits should be placed on gifts, or what kinds of influence should donors be able to buy? The recent college admissions scandal, with parents paying to get their children into some of our nation’s most prestigious universities, highlights these questions. (For full disclosure, the Johnson Center at Troy is supported by donors.) Professors, myself included, typically insist that admissions, curriculum, and hiring decisions be ours alone, and for good reason. We are the experts and know the most about judging faculty qualifications or different approaches in our fields. Being hired and tenured provides us specific responsibility for these decisions. State universities largely governed by faculty, we argue, will better spend our tax dollars.
The decision-making authority for universities, though, lies with the trustees or regents, and is delegated to presidents or chancellors. And state universities ultimately belong to the taxpayers. Limiting faculty governance is wise because professors’ views are at least somewhat biased by self-interest, and we have difficulty recognizing when we are clinging to unreasonable positions. Furthermore, our jobs provide us relatively little feedback about the contribution of our teaching and research to society.

Reasonable people can disagree about the proper extent of donor influence. But accepting money under false pretenses is not right. In a perfect world, only the offenders’ names would be sullied; in our world, reputations are collective. Trampling donor intent damages the standing of higher education.

Daniel Sutter is the Charles G. Koch Professor of Economics with the Manuel H. Johnson Center for Political Economy at Troy University and host of Econversations on TrojanVision. The opinions expressed in this column are the author’s and do not necessarily reflect the views of Troy University.
Daniel Sutter: Is Alabama a poor state?

By most measures of income, yes, but income does not account for the cost of living. Does a low cost of living offset lower income in Alabama? And is a low cost of living necessarily good?

Based on Census data, Alabama currently ranks in the bottom five states for both median household and per capita income. Alabama has been in the bottom ten states on these measures for years. Alabama also has one of the nation’s lowest costs of living; in 2018 Alabama ranked 46th among the states, with less than half the cost of living of Hawaii, the highest cost of living state. Annual income of $50,000 certainly goes further in Alabama than in Hawaii or New York. When adjusting income for cost of living, Alabama ranks in the thirties among states.

Do cost of living measures truly account for differences across states? This is an intriguing question. The measured cost of living both overstates and understates the full cost in different ways.

Cost of living measures overstate differences in living costs due to substitution. People will buy similar goods when one increases in price relative to the others. Suppose that the price of Coke doubles while the price of Pepsi remains unchanged. Many people consider Coke and Pepsi interchangeable, or what economists call close substitutes, and will just buy Pepsi and be little affected by the price increase. Substitution applies with most goods. Consider housing, one of the biggest factors in cost of living differences. A person might rent a one-bedroom apartment in New York City but a townhouse in Montgomery. A price index must measure the prices of the same market basket of goods for an apples-to-apples comparison. Yet price differences lead consumers to substitute.

Perhaps the bigger difference between high and low cost states is the difference in availability of goods and services in expensive cities like New York or San Francisco versus Alabama. For instance, a major city has a much wider variety of restaurants, including very expensive ones. Is the cost of dining out higher? Yes, but dining out there involves eating food that is closer to your tastes. Here’s another way of considering this point. The cost of dining at one of America’s finest restaurants, if you live in Alabama, likely includes airfare. The measured cost of dining out in Alabama does not reflect prices at many fancy restaurants, giving Alabama a low cost of dining. Availability applies to museums, art galleries, and shopping in addition to restaurants. The measured cost of living is lower but fails to include certain options at all.

Differences in availability do not impact everyone the same way, however. An Alabamian who does not value fancy restaurants, avant-garde art, or $25 cups of coffee will not miss out. Economic statistics cannot control for such differences in tastes. Technology and innovation, specifically the internet, Walmart and Amazon, have increased rural America’s consumption opportunities relative to large cities. Alabamians and New Yorkers can both now find their favorite music, books and movies online.

Economic theory tells us that real estate prices should reflect all the good and bad things about a place. Anything making a community a more desirable place to live – including nice weather, recreation, and natural beauty in addition to consumption opportunities – increases the demand for housing and thus real estate prices. Things that people dislike lead to lower prices. Because not everyone agrees on the desirability of each item, real estate prices must reflect average or typical preferences.
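As an aside on the income comparison above, the cost-of-living adjustment itself is simple arithmetic. Here is a minimal sketch; the state price levels are illustrative placeholders of my own choosing, not official figures.

```python
# Minimal sketch of a cost-of-living adjustment to income.
# The price levels below are illustrative placeholders, not official figures.

def col_adjusted_income(nominal_income: float, price_level: float) -> float:
    """Income restated in national-average dollars; price_level uses U.S. average = 100."""
    return nominal_income / (price_level / 100.0)

# Hypothetical state price levels relative to the national average.
states = {"Alabama": 88.0, "Hawaii": 192.0}

for state, price_level in states.items():
    adjusted = col_adjusted_income(50_000, price_level)
    print(f"$50,000 in {state} buys about ${adjusted:,.0f} worth of national-average goods")
# A lower price level stretches the same nominal income further, which is why
# Alabama climbs out of the bottom five once incomes are adjusted.
```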
Government land use and zoning policies, however, reduce housing supply and increase prices, so price differences do not reflect desirability exclusively. On average, house prices will be lower in places where fewer people prefer to live. Economists consequently recognize the limited appeal of inexpensive housing in recruiting job candidates.

Differences in the availability of goods, services and opportunities offset lower prices for common items. As a result, whether you find Alabama to be a poor state is to some degree a matter of taste.

Daniel Sutter is the Charles G. Koch Professor of Economics with the Manuel H. Johnson Center for Political Economy at Troy University and host of Econversations on TrojanVision. The opinions expressed in this column are the author’s and do not necessarily reflect the views of Troy University.
Daniel Sutter: The End of the Allentown Economy

A Remarkably Prescient Song

Over 35 years ago during the 1981-82 recession, Billy Joel released “Allentown” about the plight of this Pennsylvania town. The song resonated across the Midwest during the worst economic downturn since the Great Depression. It also anticipated, I think, an important change in our economy and society.

I was in high school in suburban Detroit when “Allentown” came out, and Mr. Joel could easily have been singing about our troubles. Unemployment in Pennsylvania topped out at 12.7 percent in the recession, while Michigan hit 16.4 percent. Something seemed fundamentally wrong. Fortunately, the U.S. economy recovered, and growth exceeded 7 percent in 1984. President Reagan’s reelection slogan, “It’s morning in America,” suggested that the dark days were behind us. His tax cuts and low inflation were beginning to unleash the Reagan-Clinton prosperity.

Prosperity never brought back all the auto, steel, or coal jobs. The causes were many, but the largest was automation. America manufactures as much as ever, just with many fewer workers than in the 1970s. Now artificial intelligence (AI) may automate as many as 800 million jobs worldwide. Sources of long-term, stable employment are vanishing, and AI will accelerate the automation of routinized tasks. In the future, tasks may be automated before hundreds of thousands of workers are hired to do them. This does not mean the end of work, because our wants and desires for goods and services exceed our productive capacity, and always will. Our households and businesses inevitably have tasks that we’d like done. Humans will still be useful; our jobs will just be more fleeting.

“Allentown” foreshadows the social impact of these changes. The lyrics state: “For the Pennsylvania we never found, For the promises our teachers gave, If we worked hard, If we behaved.” These words reflect an implicit deal: follow directions from teachers (and I would add parents and bosses), and you will be taken care of. I shouldn’t really say taken care of, because the lyrics also mention hard work. Hard work in coal mines, steel mills, and auto plants helped create the prosperity which workers shared. The middle-class lifestyle of post-war America wasn’t a gift from employers, unions, or the government, but a share of the wealth created. This deal worked because many jobs required on-the-job training but little specific expertise. The skills of a high school graduate sufficed, if people followed the boss’ directions. The gig economy and freelance work epitomize today’s workplace, and now more people must manage their “careers.” Americans waiting to be told where to go to work hard are struggling in a changing world.

Is anyone responsible for this change? Not really, just as no one designed the prior deal. Economist Adam Smith recognized the existence of spontaneous order in society; our institutions are often the product of human action but not design. The Industrial Revolution’s factories, railroads, and other industries needed workers, and economic and political freedom in England and America meant that people could not be forced into this work. A deal where owners and managers designed the workplaces and marketed the products while employees followed directions made all involved better off. The old deal is not being shut down by law, and traditional employment will never disappear entirely. One consequence of this ongoing, spontaneous change is that there is no official announcement that the old deal is no longer in effect.
This probably worsens disillusionment and pain, as a long-successful way of approaching life just isn’t very successful anymore. President Trump’s proposed tariffs on steel and other imports are a response, I think, to this larger change. We can expect other actions as well, perhaps regulations to slow the introduction of self-driving vehicles to protect jobs. Tariffs and regulations, however, will likely be merely delaying actions.

The good news is that automation will allow us to produce the goods and services we now have, plus many more. And many Americans are embracing freelance careers, perhaps because obeying the boss is often frustrating. The future is bright but will be different, and we must recognize the differences.

Daniel Sutter is the Charles G. Koch Professor of Economics with the Manuel H. Johnson Center for Political Economy at Troy University and host of Econversations on TrojanVision. The opinions expressed in this column are the author’s and do not necessarily reflect the views of Troy University.
