In recent years there have been enormous changes in our technology, our economy, and our society. But has there been progress?
Most economists’ first reaction to this question is: Of course there must have been progress! After all, the growth of new technologies expands opportunity sets – what we can do, the amount of output per unit of input. We can choose either to have more output – more goods and services – or to work less. However we make the choice, surely we are better off.
But what, then, about the sweeping changes we associate with the phenomenon of globalization? For several years I have been actively involved in debates around the world about the costs and benefits of this phenomenon. As a result of globalization, the countries of the world are more closely integrated. Goods and services move more freely from one country to another. This is the result of the lowering of transportation and communication costs through changes in technology, and of the elimination or reduction of many man-made barriers such as tariffs. The countries that have been most successful at both increasing incomes and reducing poverty – the countries of East Asia – have grown largely because of globalization. They took advantage of global markets for their goods; they recognized that what separates developed from less developed countries is a disparity not only in resources but also in knowledge; they tapped into the pool of global knowledge to close that gap; and most even opened themselves up to the flow of international capital.
But in the countries that have been less successful, globalization is often viewed with suspicion. As I have argued elsewhere, there is a great deal of validity to the complaints of the discontented. In much of the world, recent years have brought a slowing of growth, an increase in poverty, a degradation of the environment, and a deterioration of national cultures and of a sense of cultural identity. Globalization shows that change does not invariably produce progress.
In America we have also seen change, and seemingly at an ever faster pace – but here, too, it is not clear that most Americans are better off. Recent numbers suggest that productivity is growing at the impressive rate of over 4 percent per annum. Americans who work are working longer hours, while more and more Americans are not working: some are openly unemployed; some are so discouraged by the lack of jobs that they have stopped looking (and therefore are no longer included in the unemployment statistics); and some have even applied for, and have begun to receive, disability payments that they would not have sought had there been a job available. Recent decades have seen a concomitant change in values. Forty years ago, the best graduating students sought jobs in which they could work to ensure the civil rights of all Americans, to fight the war on poverty both within the United States and abroad, or to pursue the advance of knowledge; in the 1990s, the best students wanted jobs on Wall Street or with the big law firms. No doubt this shift was brought about in large part by the disproportionate salaries of that decade; these seemed to say, in effect, how much more society valued the work of corporate executives over that of the researchers whose high-tech, biotech, and Internet innovations helped fuel the boom.
Many are concerned, moreover, by the seeming erosion of moral values, exhibited so strikingly in the corporate scandals that rocked the country in the last few years, from Enron to Arthur Andersen, from WorldCom to the New York Stock Exchange – scandals that involved virtually all our major accounting firms, most of our major banks, many of our mutual funds, and a large proportion of our major corporations.
Of course, every society has its rotten apples.1 But when such apples are so pervasive, one has to look for systemic problems. This seeming erosion of moral values is just one change (the increasing bleakness of the suburban landscape in which so many Americans live is another) that does not seem to indicate progress.
How can this happen? How can improvements in technology, which seemingly increase opportunities, and therefore should also increase societal well-being, so often have adverse consequences, bringing about change that is not progress? In the way that I have posed the question, I have implicitly defined what I mean by progress: an improvement in well-being, or at least in the perception of well-being. But that leaves part of the question open: whose well-being, and in whose perception?
An economy is a complicated system. The price of steel, for instance, depends on wages, interest rates, and the price of iron ore, coke, and limestone. Each of these in turn depends on the prices of other goods and services, in one vast, complicated, and interrelated system. The marvel of the market is that, somehow, it has solved this system of simultaneous equations – solved it before there were any computers that could even approach a problem of such mathematical complexity.
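The ‘system of simultaneous equations’ that the market implicitly solves can be made concrete with a toy sketch. All coefficients and goods here are hypothetical illustrations, not figures from the text: each good’s price is taken to be the cost of its inputs plus value added, and the fixed point of that system is the set of market-clearing prices.

```python
# A toy sketch of "prices depend on prices" (all numbers hypothetical).
# Each good's price equals the cost of its inputs plus value added:
#   p[i] = sum_j A[i][j] * p[j] + v[i]
# The market "solves" this simultaneous system; here we find the same
# fixed point by simple iteration.

A = [  # hypothetical input requirements per unit of output
    [0.0, 0.3, 0.2],   # steel uses iron ore and coke
    [0.1, 0.0, 0.0],   # iron ore uses a little steel (machinery)
    [0.1, 0.0, 0.0],   # coke likewise
]
v = [2.0, 1.0, 0.5]    # value added per unit: wages, interest, etc.

p = [0.0, 0.0, 0.0]
for _ in range(200):   # iterate p <- A p + v until it settles
    p = [sum(A[i][j] * p[j] for j in range(3)) + v[i] for i in range(3)]

print([round(x, 3) for x in p])  # steel, ore, coke prices
```

Even this three-good system shows the interdependence: raise the value added in ore (say, ore miners’ wages) and the price of steel moves too, since steel embodies ore. A real economy repeats this across millions of goods.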
A disturbance to any one part of the system causes ripples throughout it. While improvements in technology improve opportunity sets and in principle could make everyone better off, in practice they often do not. A change in technology that enables a machine to replace an unskilled worker reduces the demand for unskilled workers, thereby lowering their wages and increasing income inequality. Poverty may also increase. Of course, the gains of those who are better off may be greater than the losses of those who are worse off; if so, the government may tax the new gains and redistribute the proceeds to those who lose, in such a way as to make everyone better off. Making everyone better off is what I mean by progress.
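The tax-and-redistribute argument in this paragraph is simple arithmetic. A minimal sketch, with entirely made-up numbers: a technology change raises skilled incomes by more than it lowers unskilled wages, so a transfer can leave both groups ahead.

```python
# Hypothetical incomes before and after a technology change:
skilled_before, unskilled_before = 100.0, 50.0
skilled_after, unskilled_after = 120.0, 44.0   # gain of 20 vs. loss of 6

# The winners' gain (20) exceeds the losers' loss (6), so a tax on
# part of the gain, transferred to the losers, can make a Pareto
# improvement out of what was only a gain "on average".
tax = 10.0
transfer = tax

skilled_net = skilled_after - tax          # 110.0: still better off
unskilled_net = unskilled_after + transfer # 54.0: now better off too

assert skilled_net > skilled_before and unskilled_net > unskilled_before
print(skilled_net, unskilled_net)  # 110.0 54.0
```

The arithmetic only shows that such a redistribution is possible; whether it happens is, as the next paragraph argues, a matter of ideology and interests.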
But ideology and interests may preclude that. Conservative philosophers will say that it is the right of each individual to keep the produce of his own efforts. But this is a misleading argument, because the notion of individual labor and effort is not well defined. The tools and technology that an individual uses, for instance, are probably not the result of his own labor. They may well be the result instead of public expenditures, of the kind of government investments in research and technology that created the Internet. And, in the first place, government-financed advances in biomedical research may have resulted in the individual even being alive and able to produce anything at all.
Interests buttress ideologies. While some conservatives may resort to philosophical arguments for why there should not be redistribution, those at the top of the income distribution – who have seen their incomes rise markedly in recent years – have a self-interest in arguing against progressivity. They are unlikely to approach the question from any of the perspectives from which the issue of social justice has been posed – such as that of Rawls, who asks, in effect, what would be a fair tax system, were we to have to decide such a question from behind a veil of ignorance, before we knew whether we were to end up rich or poor, skilled or unskilled. But, of course, people know how the dice have been rolled, so they argue for what is right from the perspective of their current advantage.
Economists have traditionally been loath to talk about morals. Indeed, traditional economists have tried to argue that individuals pursuing their self-interest necessarily advance the interests of society. This is Adam Smith’s fundamental insight, summed up in his famous analogy of the invisible hand: Markets lead individuals, in the pursuit of their own self-interest, as if by an invisible hand, to the pursuit of the general interest. Selfishness is elevated to a moral virtue.
Much of the research of the two centuries following Smith’s original insight has been devoted to understanding the sense in which, and the conditions under which, he was right. His insight grew into, among other things, the idea that the pursuit of self-interested profit-maximizing activity leads to an economic efficiency in which no one can be made better off without making someone else worse off. (This concept is called Paretian efficiency, after the great Italian economist Vilfredo Pareto.) It took a long time before the assumptions underlying the theory – perfect competition, perfect markets, perfect information, etc. – were fully understood.
By focusing on the consequences of imperfect information, my own research (with Bruce Greenwald of Columbia University) has challenged the Smithian conclusion.2 We have shown that when information is imperfect, and especially when there are asymmetries of information (that is, when different individuals know different things), the economy is essentially never Pareto efficient. Sometimes, in other words, the invisible hand is invisible simply because it is not there. Markets do not lead to efficient outcomes, let alone outcomes that comport with social justice. As a result, there is often good reason for government intervention to improve the efficiency of the market.3
Just as the Great Depression should have made it evident that the market often does not work as well as its advocates claim, our recent Roaring Nineties should have made it clear that the pursuit of self-interest does not necessarily lead to overall economic efficiency. The executives of Enron, Arthur Andersen, WorldCom, etc. were rewarded with stock options, and they did everything they could to pump up the price of their shares and maximize their own returns; and many of them managed to sell while the prices remained pumped up. But those who were not privy to this kind of inside information held on to their shares, and when the stock prices collapsed, their wealth was wiped out. At Enron, workers lost not only their jobs but their pensions. It is hard to see how the pursuit of self-interest – the corporate greed that seemed so unbridled – advanced the general interest.
Advances in the economics of information (especially in that branch that deals with the problem that is, interestingly, referred to as ‘moral hazard’) help explain the seeming contradiction. Problems of information mean that decisions inevitably have to be delegated. The shareholders have to delegate responsibility for making decisions, but their lack of information makes it virtually impossible for them to ensure that the managers to whom they have entrusted their wealth and the care of the company will act in their best interests. The manager has a fiduciary responsibility. He is supposed to act on behalf of others. It is his moral obligation. But standard economic theory says that he should act in his own interests. There is, accordingly, a conflict of interest.
In the 1990s, as I have argued elsewhere, such conflicts became rampant. Accounting firms that made more money in providing consulting services than in providing good accounts no longer took as seriously their responsibility to provide accurate accounts. Analysts made more money by touting stocks they knew were far overvalued than by providing accurate information to their unwary customers who depended on them.
Consciences may be salved by the doctrine that the pursuit of self-interest will in fact make everyone better off. But the pursuit of self-interest does not in general lead to economic well-being, and societies in which there are high levels of trust, loyalty, and honesty actually perform better economically than those in which these virtues are absent. Economists are just beginning to discover how non-economic values, or ‘good norms,’ actually enhance economic performance.
But some economic changes may corrode these values, for several reasons. We have already drawn attention to two: such changes may produce new conflicts of interest, and new contexts in which the pursuit of self-interest clashes with societal well-being. When people see others benefiting from such conditions, a new norm of greed emerges. CEOs defend their rapacious salaries by referring to what others are getting; some even argue that such salaries are required to provide them the appropriate incentives for making ‘the hard decisions.’
There is a third way in which economic change may undermine norms, particularly in developing countries. To be maintained, norms have to be enforced; there have to be consequences for violating them. Greater mobility typically weakens the social mechanisms that enforce norms. Even when mobility does not increase, greater societal change and uncertainty lead people to put less weight on the future – more weight on the short-run benefits of violating a norm than on its long-run costs. In many Western societies this shift, with its increased emphasis on the individual, has undermined many social norms, along with the sense of community.
Changes in technology, in laws, and in norms may all exacerbate conflicts of interest, and, in doing so, may actually impair the overall efficiency of the economy. The notion that change is necessarily welfare enhancing is typically supported by the same simplistic notions, sometimes referred to as market fundamentalism, that assert that markets necessarily lead to efficient outcomes. If the economy is always efficient, then any change that increases the output per unit input must enhance welfare. But if the economy is not necessarily efficient, then there can be changes that exacerbate the inefficiencies. For instance, the presence of competition is one of the requirements for market efficiency; if changes in technology result in one firm’s dominating the market, competition is reduced, and with it, welfare.
More generally, there is no theorem that ensures the efficiency of the economy in the production of innovations. The theorems concerning the efficiency of the economy are all predicated on the assumption that there is no change in technology, or at least no change in technology that is the result of deliberate actions on the part of firms or individuals. In short, standard economic theory is of little relevance in discussions about the efficiency of markets in the production of knowledge. This itself should come as no surprise, for knowledge can be viewed as a special form of information, and the general result referred to earlier about the lack of efficiency of markets with imperfect information extends to this case.
To take another example, there have been notable innovations in financial markets. These have some important advantages. For instance, they enable risks to be shifted from those less able to bear them to those more able to do so. But some financial innovations have made it more difficult to monitor what a firm and its managers are doing, thus worsening the information problem. Many of these innovations were the result of a corporate desire to minimize tax burdens; companies did not want to bear their fair share, so they devised ways of hiding, legally, income from the tax authorities. One of the big intellectual breakthroughs of the 1990s was the realization that these same techniques could be used to provide distorted information to investors; costs could be hidden, and revenues increased. With reported profits thereby enhanced, share prices also increased. But because share prices were based on distorted information, resources were misallocated. And when the bubble to which this misinformation contributed burst, the resulting downturn was greater than it otherwise would have been.
Curiously, stock options, which underlay many of these problems, were at one time viewed as an innovation; they were heralded as providing better incentives for managers to align their interests with those of the shareholders. This argument was more than a little disingenuous: in fact, the typical stock-option package, especially as it was put into practice, did not provide better incentives. While pay went up when stock prices went up, much of the increase in the stock price had nothing to do with the managers’ performance; it just reflected overall movements in the market. It would have been better to base pay on relative performance. Moreover, when, as in 2000 and 2001, share prices fell, management pay did not fall. It simply took on other forms. This is another example of an innovation that was not, in any real sense, progress.
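The relative-performance point can be sketched numerically. The bonus functions and the numbers below are hypothetical illustrations, not an actual compensation scheme: a pay package tied to the raw share price rewards a market-wide boom, whereas one tied to performance relative to the market rewards only the part of the rise the manager might plausibly claim credit for.

```python
# Hypothetical pay schemes (illustrative only): reward a share of the
# stock's return, either in absolute terms or relative to the market.

def absolute_bonus(stock_return, rate=0.5):
    # Pays on the whole rise, including market-wide movements.
    return max(0.0, stock_return) * rate

def relative_bonus(stock_return, market_return, rate=0.5):
    # Pays only on the excess over the overall market's return.
    return max(0.0, stock_return - market_return) * rate

stock, market = 0.30, 0.25  # a 30% rise in a market that rose 25%
print(absolute_bonus(stock))                       # pays for the whole rise
print(round(relative_bonus(stock, market), 4))     # pays only the 5% excess
```

Note also the asymmetry the paragraph describes: under either scheme the bonus is floored at zero, so when prices fell in 2000 and 2001, pay did not fall with them.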
Now consider some examples of putative reforms. Especially in the area of economic policy, a combination of misguided economic analysis, ideology, and special interests often results in reforms that are not, in fact, welfare enhancing – even though they are billed as progress. For instance, in Mexico tax revenues as a share of GDP are so small that the public sector cannot perform many of its essential functions; there is underinvestment in science and technology, education, health, and infrastructure. Among the reforms the Fox government has advocated are tax changes that would increase revenues – but whether society as a whole would benefit depends in part on how the tax revenues are increased. Conservatives have long advocated the VAT (a uniform tax, common in Europe, that is levied at each stage of production), but within the Clinton administration it was summarily dismissed because it is not a progressive tax, a matter of particular concern in a country like Mexico with such a high level of inequality. There were alternative proposals for raising taxes – such as on the profits of the oligopolies and monopolies – that would have been more efficient and equitable.
Elsewhere, policies sold as ‘reform’ – opening up markets to destabilizing speculative short-term capital flows – have exposed countries to huge risks. The East Asian crisis of 1997, the global financial crisis of 1998, the Latin American crises of recent years – all are at least partly attributable to these short-term flows. Just as there is no general theorem assuring us that changes in technology produced by the economy are welfare enhancing, so too there is no general theorem assuring us that the policy reforms that emerge out of the political process – whether at the national or international level – are welfare enhancing. There are, in fact, numerous analyses that suggest quite the opposite.
In economics, the dominant strand of thinking has evolved out of physics. And so economies are analyzed in terms of equilibrium. The consequence of change is to move an economy from one equilibrium to another. Much of what I have said so far can be summarized as follows: Once we recognize that the equilibrium that naturally emerges in an economy may not be efficient, then a change that moves us from one equilibrium to a new equilibrium may not be welfare enhancing.
Another strand of thought in economics owes its origins to a misunderstanding of evolutionary biology. Darwin’s notion of natural selection was not teleological, but some of those who extended Darwinian ideas to the social context argued as if it were. If only the fittest survived, then society, reasoned such social Darwinists, must also be increasingly fit. This misunderstanding of Darwin became central to the Spencerian doctrines of social Darwinism. Darwin himself was far more subtle. He realized that one could not define ‘fit’ in isolation from the elements of the ecological system; that different species occupy different niches; that there are, in effect, multiple equilibria. He realized that the species that survive on one of the Galapagos Islands are not necessarily better or worse in any sense than those that survive on other islands.4
Indeed, there is again no theorem that assures us that evolutionary processes are, in any sense, welfare enhancing. They may, in fact, be highly myopic. A species that might do well in the long run may not borrow against its future prosperity, and hence may be edged out in the competition for survival by a species that is better suited for the environment of the moment.5
Precisely this kind of myopia was evidenced in the competitive struggles of the 1990s. Those investment banks whose analysts provided distorted information to their customers did best. Repeatedly, the investment banks explained that they had no choice but to engage in such tactics if they were to survive. While the most egregious corporations and accountants – the Enrons, Arthur Andersens, Tycos, and WorldComs – had their comeuppances, others survived, even prospered. And many continue to defend their practices and tactics, opposing fair disclosure of information and accounting procedures that would allow ordinary shareholders to ascertain both the levels of executive compensation and the extent of the dilution of share value through stock options.
The connection between technology and the evolution of society has long been recognized. The innovations that led to the assembly line increased productivity, but almost surely reduced individual autonomy. The movement from an agrarian, rural economy to an urban, industrial economy caused enormous societal change. While this Great Transformation is often viewed as progress, it did not leave everyone better off;6 so too with the transformations that the New Economy and globalization are bringing about in the societies of the advanced industrial countries and, even more so, of the developing world. While some of these changes open up the possibility of greater individual autonomy, others simultaneously presage a weakening of the sense of community. Even the community of the workplace may be weakened.
Still, I do not believe in either economic or technological determinism. The adverse consequences of some of the changes that I have noted are not inevitable. We have followed one evolutionary path; there are others. Much of the political and social struggle going on today is an attempt to change that path. Those in positions of political power in fact play an important role in shaping the evolution both of society and technology – for instance, by creating within the tax system rewards and incentives for certain business practices.
At the global level, America’s status as the sole superpower has allowed it to stymie progress to greater democracy within the international arena. Globalization has entailed the closer economic integration of the countries of the world, and with that closer integration there is a need for more collective action, as global public goods and externalities have taken on increasing importance. But political globalization has not kept pace with economic globalization. Rather than engaging in democratic processes of decision making, America has repeatedly attempted to impose its views on the rest of the world unilaterally.
In this essay, I have challenged the thesis that improvements in, say, technology necessarily result in an enhancement of well-being. Increases in income can enrich individual lives. They can give individuals access to more knowledge. They can reduce the corrosive anxieties associated with insecurities about well-being – one of the problems repeatedly noted in surveys attempting to ascertain the dimensions of poverty. In doing all this, improvements in technology can help free individuals from the bonds of materialism.
But unfortunately, not all that goes under the name of progress truly represents progress, even in the narrow economic sense of the term. I have emphasized that there are innovations, changes in technology, that, while they represent increases in efficiency, lower economic well-being, at least for a significant fraction of the population.
In the end, every change ought to be evaluated in terms of its consequences. Neither economic theory nor historical experience assures us that the changes that get adopted during the natural evolution of society and of the economy necessarily constitute progress. Moreover, neither political theory nor historical experience can assure us that attempts to redirect development will necessarily guarantee better outcomes. A recognition of this is, in my mind, itself progress, and lays the foundation for attempts to structure economic and political processes in ways that make it more likely that the changes we face will in fact constitute meaningful progress.
ENDNOTES