In “The Creative Powers of a Free Civilization,” Nobel Prize winner F.A. Hayek posits that only in a truly free society can the creative powers of individuals be maximized.
Please answer the following question: To what degree are the creative powers of individuals influenced by the structure of the society in which they live? What legal, social, cultural, or educational conditions would be needed to fully realize their creative powers?
A good essay will consider the following questions:
What degree of freedom currently exists in various societies, cultures, and countries around the world?
What features and conditions define a free society?
What is meant by the term “creative”?
Read the following sources before you begin your essay. You are free to use any other resources as background research.
Hayek, F.A. 1958. “The Creative Powers of a Free Civilization,” in Essays in Individuality, edited by Felix Morley.
“Most of what has been said so far applies not only to man’s use of the means for the achievement of his ends but also to these ends themselves. It is one of the essential characteristics of a free society that its goals are open, that new ends of conscious effort can spring up, first with a few individuals or a small minority, to become in time the ends of all or most.
We must recognize that even what we regard as good or beautiful is changeable, if not in any recognizable manner that could entitle us to take any kind of relativist position, yet in the sense that in many ways we do not know what will appear as good or beautiful to another generation; we do not know why we regard this or that as good, or who is right when people differ on whether something is good or not. It is not only in his knowledge, but also in his aims and values, that man is the creature of the process of civilization, and in the last resort it is the significance of these individual wishes for the perpetuation of the group or the species that will determine whether they will persist or change. It is of course a mistake to believe that we can draw conclusions about what our values ought to be, because we realize that they are a product of evolution. But we cannot reasonably doubt that these values are created and altered by the same evolutionary forces that have produced our intelligence. All that we can know is that the ultimate decision about what is accepted as right and wrong will be made not by individual human wisdom but by the disappearance of the groups that have adhered to the “wrong” beliefs.
It is in the pursuit of man’s aims of the moment that all the devices of civilization have to prove themselves; that the ineffective is discarded and the efficient handed on. But there is more to it than the fact that new ends constantly arise with the satisfaction of old needs and with the appearance of new opportunities. Which individuals, and which groups, succeed and continue to exist depends as much on the goals which they pursue, the values that govern their action, as on the tools and capacities at their command. A group may prosper or be extinguished just as much because of the ethical code that it obeys, or because of the ideals of beauty or well-being that guide it, as because of the degree to which it has learned or not learned to satisfy its material needs. Within any given society particular groups may rise or sink because of the ends they pursue and the standards of conduct which they observe. And the ends of the successful group will tend to become the ends of all members of society.
At most we understand only partially why the values we hold, or the ethical rules we observe, are conducive to the continued existence of our society. Nor, under continuously changing conditions, can we be sure that all the rules that have proved themselves as conducive to that purpose will remain so. Though there is a presumption that any established social standard contributes in some manner to the preservation of a civilization, our only way of knowing this is to ascertain whether it continues to prove itself in competition with other standards tried by other individuals or groups.
The competition, on which the process of selection rests, must be understood in the widest sense of the term. It is as much a competition between organized and unorganized groups as a competition among individuals. To think of the process in contrast to cooperation or organization would be to misconceive its nature. The endeavor to achieve specific results by cooperation and organization is as much a part of competition as are individual efforts, and successful group relations also prove their efficiency in competition between groups organized on different principles. The distinction relevant here is not between individual and group action but between arrangements in which alternative ways based on different views and habits may be tried, and on the other hand, arrangements in which one agency has the exclusive rights and the power to coerce others to keep out of the field. It is only when such exclusive rights are granted, on the presumption of superior knowledge of particular individuals or groups, that the process ceases to be experimental and the beliefs that happen to be prevalent at the moment tend to become a main obstacle to the advancement of knowledge.
It is worth a moment’s reflection as to what would happen if only what was agreed upon to be the best knowledge of society were to be used in any action. If all attempts that seemed wasteful in the light of the now generally accepted knowledge were prohibited and only such questions asked, or such experiments tried, as seemed significant in the light of ruling opinion, mankind might then well reach a point where its knowledge allowed it adequately to predict the consequences of all conventional actions and where no disappointment or failure would occur. Man would seem to have subjected his surroundings to his reason because nothing of which he could not predict the results would be done. We might conceive of a civilization thus coming to a standstill, not because the possibilities of further growth had been exhausted, but because man had succeeded in so completely subjecting all his actions and his immediate surroundings to his existing state of knowledge that no occasion would arise for new knowledge to appear.
There can be little doubt that man owes some of his greatest successes in the past to the fact that he has not been able to control social life. His continued success may well depend on his deliberately refraining from exercising controls now in his power.”
“The Future and Its Enemies,” by Virginia Postrel
The following brief excerpt from the book, THE FUTURE AND ITS ENEMIES by Virginia Postrel is copyrighted (c) 1998 by Virginia Postrel and reprinted by permission of The Free Press, an imprint of Simon & Schuster, Inc. Virginia Postrel is editor-at-large of Reason magazine, a columnist for Forbes and its companion technology magazine Forbes ASAP, and a contributing editor for the online political magazine IntellectualCapital.com. Her work appears frequently in the Los Angeles Times, the Wall Street Journal, Wired, and other major publications. She lives in Los Angeles.
We hate not knowing the future. Soothsayers are as old as history. But the kind of soothsaying that runs on giant computers, that fills the pages of business publications and informs the decisions of legislators and regulators, is different from old-time magic. Rather than tap omniscient forces operating outside time, it claims scientific knowledge of the present, or at least of everything important about the present. Drawing on that information, it then predicts what people will do and how their actions will shape the world. Or it tells them how they must act and assumes it can foresee the results.
Sometimes this soothsaying is limited and relatively harmless, just one more factor in the trials and errors that compete to shape a more pleasing future. In the late 1980s and early 1990s, for instance, many retailers turned to consultants to predict which women’s fashions to stock. Using impeccable demographic data, the consultants homed in on a central fact: Consumers were getting older and fatter. But the inferences they drew (forget youth, novelty, or sex appeal, and go for the basics) could not have been more wrong. What actually sold were slinky slip dresses and curvy, miniskirted business suits. Retailers who followed the reductionist consultants’ advice got stuck with unwanted inventory, and the entire industry slumped.
“Never have so many people been employed in analyzing fashion, and never has fashion business been so dismal,” commented New York Times fashion critic Amy M. Spindler. “As in any design field, fashion sells when something innovative is presented, something no consumer could have anticipated. . . . But most consultants, even if they are sharply tuned to changes in the demographics of the world, know little about fashion’s X-factor, the unknown quantity that makes an item seem hot to a consumer.”
The world is full of X-factors, the unarticulated and unrealized knowledge that can be elicited only by experience and experiment. Informed by younger friends that the latest Washington hot spots were cigar-and-martini bars, an out-of-town visitor figured the young folks must be slumming in fusty old K Street steakhouses. “I was as usual totally wrong. As [a hypothetical 1978] planner would have been totally wrong,” he later told a conference on industrial policy. “Because this was not a steakhouse that had somehow acquired a second clientele. This was built from the ground up for 22-year-olds with so much facial jewelry that they would set off airport metal detectors.” Moral of the story: “It’s extraordinarily difficult to tell which products will be the successful ones.”
It is possible to discern patterns and to test that analysis against others’. But the competition (the test) is crucial. Most predictions are wrong, and the more specific the claim, the more likely the error. When in 1983 Forbes confidently ran a list of “names you are not likely ever to see in The Forbes Four Hundred,” the story had a perfectly good theory: that inventors rarely get rich off their creations. But the author got too specific. Third on the list of people unlikely ever “to transcend the $125 million mark in net worth” was none other than Bill Gates, who went on to become the richest man in the world.
In his bet with Paul Ehrlich, by contrast, Julian Simon was able to predict confidently that the prices of five metals would decline from 1980 to 1990. His prediction was based on a dynamic understanding of resource use; his mental model assumed increasing knowledge about alternative sources and applications, feedback from prices, and competitive pressures to do more with less. Simon bet only on the general trend, however, not on specifics. He did not try to say in advance which innovations would lead to the price declines, nor did he project the exact magnitude of the drops. Of those things, he admitted ignorance.
Like the Forbes list, politically imposed stasist plans often get very specific. They admit no X-factors and no learning. They know that high-definition television will take off (and will do so in the form pioneered by the oh-so-scary Japanese) and that cigar-and-martini bars will not. Stasist plans do not consider how people might adjust to new circumstances, and they don’t factor in new inventions.
“Most experts believe that without deep changes in both industry behavior and government policy, U.S. microelectronics will be reduced to permanent, decisive inferiority within ten years,” wrote MIT’s Charles Ferguson in a famous 1988 Harvard Business Review article. He called for a government-directed policy to help U.S. chip companies threatened by foreign competition and denounced the “fragmented, ‘chronically entrepreneurial’ industry” of Silicon Valley. As authorities to back up his prescriptions, he cited “a wide number of university researchers and senior personnel of my acquaintance in the U.S. Defense Department, the CIA, the National Security Agency, the National Science Foundation, and most major U.S. semiconductor, computer, and electronic capital equipment producers. My conclusion, after meetings with groups in the U.S. Defense Science Board, the White House Science Council, and others, is that only economists moved by the invisible hand have failed to apprehend the problem.”
Ferguson and his mandarin contacts just couldn’t envision an industry driven by microprocessors, software, and networks rather than memory-chip manufacturing. Instead, they assumed an essentially static world, anticipated disaster, and demanded industrial policy.
Ferguson’s ideas were not adopted by either businesses or government. Yet ten years after he predicted an industry “reduced to permanent, decisive inferiority,” American information technology companies lead the world. Had chip makers followed his advice, clinging to commodity technologies and stifling entrepreneurship in an effort to build larger firms, the industry would have indeed gone down the drain. “Economists moved by the invisible hand,” who understood the dynamic patterns of the industry but did not try to predict its exact evolution, knew more than Ferguson’s “experts,” for the very reason that they recognized the limits of their knowledge.
Technocratic plans assume the very things they try to enforce: that the world is simple and easily controlled, that it changes only in predictable ways, that it can be mastered. They suppose that the planners have all the relevant information and know exactly how the world works. The urban renewal programs of the 1950s and 1960s were neat, logical expressions of a certain understanding of city life, as neat and logical as the fashion consultants’ projections. But the planners recognized neither the bustling vitality that appeals to city dwellers nor the personal space that draws people to the suburbs. They thought that plazas surrounding high-rise apartment buildings (which looked great in architectural drawings) would somehow duplicate the open space of suburban lawns; instead, such projects lacked both the urban convenience of nearby places to mingle and shop and the suburban attractions of privacy and green space. These technocrats scorned the critical information embedded in the lives of both city dwellers and suburbanites: the “tacit knowledge” expressed in relationships and habits and conveyed through webs of economic and social connections.
More recently, the Environmental Protection Agency has evaluated California’s smog-reduction regulations not by measuring actual pollution levels but by cranking computer models. The models neither permit radically new ideas for cutting pollution nor incorporate unexpected changes in the human environment. They cram any new technology or information into the same old framework. By 1996, when California developed a plan to comply with the 1990 Clean Air Act amendments, the state had ample data indicating that a small percentage of “gross polluters” contribute the majority of vehicle pollution, and that the most effective way to spot such cars is through roadside “remote sensing,” analogous to radar guns for catching speeders. Under EPA rules, however, officials could not fully adapt their smog-reduction program to this new information and technology. Instead, they had to create an awkward hybrid that sticks remote sensing onto established programs of periodic smog checks and trip reductions. “The public cares about results-cleaner air,” says Lynn Scarlett, who chaired the California Inspection and Maintenance Review Committee, which was responsible for developing a plan to meet EPA requirements. “EPA cares more about whether folks are complying with permit procedures and technology mandates.”
EPA predictions also take a simplistic view of human behavior. The agency’s rigid models make room for scheduled inspections, but not random smog checks or their deterrent effects. And the models assume that population will grow, never that it will shrink or change in composition. Projections made in the late 1980s thus missed southern California’s post-Cold War economic downturn, which reduced growth rates and traffic; yet those projections remain, feeding regulations. The agency’s predictions presume that both behavior and knowledge are essentially fixed. And they force 17 million motorists to live accordingly.
Predictions go wrong because there are many possible sources of error: environmental shocks, bad or incomplete models, bad or incomplete data, sensitivity to initial conditions, the ever-branching results of action and reaction. Writing of technology, the physicist Freeman Dyson notes that its inherent unpredictability makes centralized decision making hazardous:
Whenever things seem to be moving smoothly along a predictable path, some unexpected twist changes the rules of the game and makes the old predictions irrelevant. . . . A nineteenth-century development program aimed at the mechanical reproduction of music might have produced a superbly engineered music box or Pianola, but it would never have imagined a transistor radio or subsidized the work of Maxwell on the physics of the electromagnetic field which made the transistor radio possible. . . . Yet human legislators act as if the future were predictable. They legislate solutions to technological problems, and they make choices between technological alternatives before the evidence upon which a rational choice might be based is available.
Many important developments take place out of view of the pundits. What business analyst in the 1970s would have looked to rural Arkansas to find the future of retailing? Yet that’s where Wal-Mart emerged. It took Jimmy Carter, a born-again Southern Baptist immersed in Bible Belt culture, to recognize the political potential of evangelical voters, who were there all along. In retrospect, fashion consultants could trace those miniskirted business suits to the characters of Melrose Place. Not so surprising after all. The critical “local knowledge” is out there, but it’s hard to collect.
Unexpected events or patterns often make perfect sense in hindsight. But the very difficulty of predicting the future points up how little we know-or can know-about the present. The present is, after all, the basis of all prediction. Management guru Peter Drucker, among the most perceptive of trend spotters, declares emphatically that “I don’t speculate about the future. It’s not given to mortals to see the future. All one can do is analyze the present, especially those parts that do not fit what everybody knows and takes for granted. Then one can apply to this analysis the lessons of history and come out with a few possible scenarios. . . . Even then there are always surprises.”
Knowledge is at the heart of a dynamic civilization-but so is surprise. A dynamic civilization maximizes the production and use of knowledge by accepting widespread ignorance. At the simplest level, only people who know they do not know everything will be curious enough to find things out. To celebrate the pursuit of knowledge, we must confess our ignorance; both that celebration and that confession are central to dynamic culture. Dynamism gives individuals both the freedom to learn and the incentives to share what they discover. It not only permits but encourages decentralized experiments and competitive trial and error: the infinite series by which new knowledge is created. And, just as important, a dynamic civilization allows its members to gain from the things they themselves do not know but other people do. Its systems and institutions evolve to let people develop, extend, and act on their particular knowledge without asking permission of a higher, but less informed, authority. A dynamic civilization appreciates, protects, and nurtures specialized, dispersed, and often unarticulated knowledge.
Not surprisingly, how we think about knowledge (like how we think about progress) is one of the questions over which dynamists and stasists clash. These competing visions simply do not imagine knowledge in the same way. To dynamists, knowledge is like an ancient, spreading elm tree in full leaf: a broad trunk of shared experience and general facts, splitting into finer and finer limbs, branches, twigs, and leaves. The surface area is enormous, the twigs and leaves often distant from each other. Knowledge is dispersed, shared through a complex system of connections. We benefit from much that we do not ourselves know; the tree of knowledge is too vast. For stasists, by contrast, the tree is a royal palm: one long, spindly trunk topped with a few fronds, a simple, limited structure.
NOTE: Virginia Postrel uses the word “stasists” to contrast with “dynamists.” Stasists are people who seek specific rules to govern each new situation and keep things under control, whereas dynamists appreciate dispersed, often tacit knowledge. Stasists are focused on a controlled, uniform society that changes only with permission from some central authority. Dynamists want to create an open-ended society where creativity and enterprise, operating under predictable rules, generate progress in unpredictable ways.
“The Virtue of Prosperity,” by Dinesh D’Souza
Prosperity can be upsetting.
Just ask John Little, chief executive of Portal Software. As “the billionaire next door,” he was cover boy for the Forbes 400 in 1999. But a guy next to him on a flight, seeing the headline but not connecting the photo to his neighbor, erupted: “I’m so sick of these rich Internet brats! I’ve got years of experience. I work 10 hours a day, and these 25-year-olds make millions overnight while I’m struggling to feed my family.”
Little fumes: “The guy undoubtedly thought I got the idea for my company last week, and this week I’m worth a billion dollars. Actually, I’ve been in this business for 14 years. I’ve worked my butt off, and finally it’s come together.”
After piling up more than they ever dreamed of, America’s top entrepreneurs now face a moral challenge: By what right have they so much more than others? What have they given in return?
The Party of Yeah versus the Party of Nah
If the affluent are agonizing over such questions, so, too, are Americans in general beginning to debate them. On one side are the pioneers of technocapitalism. They have a vision for the world, they are making it happen, and they are being rewarded for it. This group champions new companies and products, welcomes the rapid pace of change, sees ahead a cornucopia of pleasures and possibilities. Call it the Party of Yeah.
Its opponents argue that the New Economy is a fraud and that no upsurge in the Dow or the Nasdaq can compensate for the moral and social havoc being produced by technocapitalism. These critics charge that unfettered markets and runaway technology, far from bringing us closer to the promised land, destroy cherished values. This ideologically diverse group is made up of cultural pessimists, environmentalists, traditionalists, egalitarians, and technophobes. Call it the Party of Nah.
This clash reflects the special concerns of an era of prosperity. Capitalism has won the economic war, but it has not yet won the moral war.
The Party of Yeah is led by people like Steve Jobs, Jeffrey Bezos, and Stephen Case. By and large, it is a young people’s party, and it dismisses past social policymaking as a failed attempt to allocate scarcity. The scientist, engineer, and entrepreneur, on the other hand, are now able to promise what cybercowboy John Perry Barlow terms the Great Work: to eliminate scarcity, to feed and clothe and heal the world. Dewang Mehta, a leading software entrepreneur in New Delhi, believes that the computer industry will realize Gandhi’s dream of “wiping a tear from every Indian’s face.”
A world without scarcity is one in which income or wealth differentials should cease to have much effect. That would lift a great moral weight off many of the new rich. But there’s more: this group wants to turn workers into free agents and, via the Web, give everyone the same access to information and markets. The Internet still has a fairly limited reach, says physicist Freeman Dyson, but “the new Internet will end the cultural isolation of poor countries and poor people.” The Party of Yeah promises that cyberspace will bring people together by fostering “electronic neighborhoods” based not on geography but on shared interests.
And after that? Virtual-reality experiences that cannot be duplicated in the real world. Chips in our brains that will expand our minds. Antiaging drugs and genetic modifications that will make future generations healthier, better looking, smarter, more artistic, perhaps even more caring.
So is all this new wealth justifiable—because it’s going to save humanity?
On both the political left and the political right there is vocal resistance to technocapitalism. It comes mainly from intellectuals, clergy, naturalists, and people who have found themselves on an economic treadmill while their neighbors have surged ahead. The left-wing critique is in the name of nature and equality. The right-wing critique is in the name of community and morality. These critiques are merging and becoming one. To the Party of Nah, technocapitalism is unleashing a gale of creative destruction that is wrecking the ecosystem, exacerbating inequality, eroding personal privacy, weakening the family, and uprooting communities.
What about the Party of Yeah? It, too, is deeply divided. When Ted Turner announced he was giving a billion dollars to the United Nations, John Stossel of ABC News asked him: Why are you throwing your money down such a rat hole? Why don’t you invest in your own company, create more jobs, and make people better off? Turner angrily stormed off the set. Could he not bear the thought that his business practices themselves might be socially beneficial?
Turner has been accused of guilt-trip capitalism—but his premises seem to be shared by some of the market’s strong defenders. Innumerable chief executives speak of “giving back” to the community. But that implies that you have been taking from the community.
What all this suggests is that the moral divide over affluence and technology doesn’t just run between ideological camps; it runs through our own hearts. The Party of Yeah and the Party of Nah dwell, in a sense, within us. We want to do well, but we also want to feel morally justified in claiming our rewards. We pursue wealth while wearing faded jeans and buying old Shaker furniture to prove that we have not been consumed by greed and materialism. We respond to political candidates who, somewhat implausibly, vow to produce an economy that “leaves no one behind.” We want to integrate prosperity into something higher and more meaningful. Can we achieve these disparate, sometimes contradictory, goals? Can the division between the two contending sides, which is also a division within our psyche, be healed?
The Rich Get Richer (But So Do We)
Philosophers on the left charge that the winnings of the New Economy are profoundly unjust and have nothing to do with individual merit. Moreover, they say that inequalities have reached intolerable proportions and that the gains of the New Economy have gone almost entirely to a small segment of the population. A broader accusation, launched by many clergy and some political conservatives, is that greed, materialism, and self-indulgence have taken over our economy and our culture and that these vices—or, in Christian terminology, sins—are encouraged by the free market. It is easy to pooh-pooh these charges, but let us try to answer them.
First, inequality. It cannot be denied that the New Economy has contributed to staggering inequalities of wealth. The top 1 percent of the population owns more than one-third of the wealth in the United States. The top 10 percent has two-thirds. The net worth of the 30 richest Americans equals approximately $500 billion. There are 35 million black people in the United States. The annual earnings of this community add up to $450 billion. It’s not quite fair to compare annual earnings to net worth; even so, it remains a remarkable fact that 30 people in this country have assets greater than the gross annual earnings of black America.
The Party of Nah is convinced that there is no justification for these inequalities. Do the youngsters who start Internet companies really deserve to be centimillionaires? Perhaps, as Amy Dean, business manager for the AFL-CIO in Silicon Valley, recently put it, the new wealth is the result of “a bunch of young white guys being in the right place and winning the lottery.” Eric Schmidt, chief executive of Novell, says that, in the tech world, as everywhere else, networks and social connections determine success. According to Schmidt, it’s a myth that anyone with a good idea can raise the necessary venture capital: “Yeah, right—anybody can raise capital for an Internet company if they know the same guys that I do.”
The traditional mantra about inequality is that the rich are getting richer and the rest are getting poorer. In the past two decades, however, the rich have gotten richer and the rest have also gotten richer, although not at the same pace. According to a study by John Weicher of the Hudson Institute, median household wealth has climbed from $57,000 in 1983 to $72,000 in constant 2000 dollars. Why should people feel aggrieved that the rich are pulling further ahead if they are also moving forward? If you drive a Mercedes and I have to walk, that’s a radical difference in lifestyle. But is it a big deal if you drive a Mercedes and I drive a Hyundai? If I have a 4-bedroom condo, should I be morally outraged that you have a 12-bedroom house?
More than ever before, today’s wealth is a product of personal achievement rather than inheritance. More than half of those on the latest Forbes list of the 400 richest Americans made their own fortunes. Thomas Stanley and William Danko, authors of The Millionaire Next Door, estimate that 80 percent of Americans whose net worth exceeds $1 million are “ordinary people who have accumulated their wealth in one generation.” So you can’t argue that most of today’s affluent got that way by choosing their parents carefully.
A second characteristic of successful people today, especially evident in the high-tech world, is that they come from diverse ethnic backgrounds. At companies like America Online and Microsoft, there are now caucus groups and cafeteria sections not for Asian Indians but specifically for Gujaratis, Bengalis, and Keralites. Each group speaks a different native language and savors a different cuisine. Economist Gary Becker, a Nobel laureate, reports that more than a third of the 1 million people employed in Silicon Valley are foreign born. True, African Americans make up a tiny percentage of senior personnel in the computer and telecommunications industries, but that’s because blacks earn less than 2 percent of Ph.D.’s in fields like engineering, physics, mathematics, and computer science.
Social networks, as Eric Schmidt suggests, are a reality in high tech as in other fields. Still, what is striking is how successfully nonwhite immigrant groups have established networks of their own. When Indian-born Sabeer Bhatia first came up with the idea for Hotmail, he was rejected by a series of venture capitalists. As a “person of color,” he naturally felt he was a victim of discrimination. But, he says, “I quickly realized that being foreign born was no barrier, it was only a barrier in my mind.”
Now Indian entrepreneurs like Bhatia have set up their own “curry network,” complete with regular deal-generating powwows, an annual conference, a magazine, and a web site. Indian-born venture capitalist Vinod Khosla remarks that in Silicon Valley it’s almost a case of reverse discrimination: “People almost assume that if you’re Indian or Chinese you’re smarter, and you get the benefit of the doubt.”
What about the “digital divide” and the sarcastic comment by civil rights activists that the Internet should be renamed the World White Web? It is certainly true that not everyone uses the Internet equally. Whites and Asian Americans are more likely than blacks and Hispanics to log on, and the affluent are far more wired than the indigent. These differences do exist, but do they reflect a problem of “access”? After all, Internet access today seems about as serious a problem in the United States as “telephone access” or “automobile access.” Today a computer doesn’t cost much more than a TV set, and Internet use costs little to nothing. The real digital divide is that some people and some groups are more adept at using the web than others.
The egalitarian critique is a limited one because it merely says that the blessings of capitalism are not being extended to all. A more fundamental criticism challenges the ethical basis of the system itself. This view holds that the very engine of technocapitalism is greed and selfishness.
Several leading tech entrepreneurs have warned that such rapacity, never in short supply, has reached new heights in the New Economy. “When greed becomes this prevalent,” remarks Craig McCaw, the telecommunications mogul, “something bad always happens.” In a recent article, James Collins, coauthor of the popular business book Built to Last, wagged a finger at all the young entrepreneurs rushing to take their companies public and strike it rich. Collins indignantly asked, what happened to the early New Economy ideal of making better products and lasting companies so that the world would be a better place?
Some in the Party of Yeah seek to meet this criticism by denying their base motives. Today’s high-tech entrepreneurs want us to believe that they aren’t selfishly chasing big bucks, that their motives are creativity and passion. The magazine Red Herring suggests profits are a by-product of a labor of love: “Money comes to those who do it for love.”
Before we are lulled into sentimentality, let us ask: Aren’t profits the raison d’être of commercial enterprise? Profits aren’t merely a barometer of customer service, they are the ultimate rationale of the whole enterprise. True, many Internet companies forgo profits in order to expand their customer base. But they do this only because they expect to harvest vastly greater profits in the future.
Granted, many entrepreneurs love what they do; it hardly follows that they are doing it “for love.” If that were truly the case, they would never ask about their stock options.
Money Is the Root of All Good
One entrepreneur who is candid about his selfish motives is T. J. Rodgers, chief executive of Cypress Semiconductors. Rodgers is an admirer of philosopher Ayn Rand, whose defense of capitalism is summarized in the title of her book The Virtue of Selfishness. “I don’t mean to disagree with anyone’s religion,” Rodgers says, “but my own view is that money is the root of all good.”
Rodgers adds, “I keep hearing feed the poor, clothe the hungry, give shelter to those who don’t have it. The bozos who say this don’t recognize that capitalism and technology have done more to feed and clothe and shelter and heal people than all the charity and church programs in history. So they preach about it, and we are the ones doing it. They want to rob Peter to pay Paul, but they always forget that Peter is the one that is creating the wealth in the first place.”
To pose Rodgers’s point in its most provocative way: Who has done more to eradicate poverty and suffering in the Third World, Bill Gates or Mother Teresa? To the extent that he has placed the power of information technology at the disposal of millions of people, the obvious answer is Gates. It doesn’t follow that Gates deserves a higher heavenly perch than Mother Teresa. Still, if the moral value of actions were to be judged solely by their consequences, Gates and other tech entrepreneurs have done an awful lot of good, far more good than their detractors in the Party of Nah.
Implicit in Rodgers’s comments is the insight that capitalism civilizes greed, just as marriage civilizes lust. Greed and lust are human emotions. As such, they cannot be eradicated. And to the degree that greed leads to effort, and lust to pleasure, who would want to eradicate them? At the same time, it is widely recognized that these inclinations can have corrupting and destructive effects. So they have to be regulated or channeled. Capitalism channels greed in such a way that it is placed at the service of the wants of others—even unknown wants. Think about this: before cell phones existed, who even knew that we—rich or not—couldn’t get by without them?
“Churn, Baby, Churn,” from Inc. Magazine
It’s a messy business, this new economy. And despite what our leaders say, all the turmoil is actually good for us
There are people who actually know how the modern economy works. It’s just that nobody in government wants to talk to them.
Consider the story of Donald Hicks, a professor of political economy at the University of Texas. Contracted by the state to examine the past and future of Texas’s manufacturing base, Hicks pored over 22 years of sales-tax returns–including those of defunct businesses, interred in a mausoleum-like archive–to trace individual companies as they made their way into and out of existence.
His most striking finding: the “half-life” of new businesses had been cut in half since 1970. That is, a group of companies founded in, say, 1985 took only half as long to have its ranks depleted by 50% as a group born in 1970 did. A process of attrition that used to take five years now took less than two.
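Hicks's "half-life" finding can be restated numerically. As a sketch only (the article does not give his method), if we assume a constant annual failure rate, the rate implied by a given cohort half-life follows from simple exponential decay:

```python
def annual_attrition_rate(half_life_years: float) -> float:
    """Constant yearly failure rate implied by a cohort half-life,
    assuming exponential decay: survivors(t) = 0.5 ** (t / half_life)."""
    return 1 - 0.5 ** (1 / half_life_years)

# A five-year half-life (roughly the 1970 cohort) versus a two-year
# one (the mid-1980s cohort) implies a sharply higher churn rate:
slow = annual_attrition_rate(5.0)   # about 13% of firms failing per year
fast = annual_attrition_rate(2.0)   # about 29% of firms failing per year
```

The point of the sketch is that halving the half-life more than doubles the implied yearly mortality rate, which is what makes the Austin result below so counterintuitive.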
More surprising still, the Texas city whose businesses had the shortest life expectancy–Austin–had the fastest-growing job base and the highest wages. The counterintuitive lesson: high business-mortality rates are good for economic health.
Actually, that makes sense. In the past two decades, a variety of factors have dramatically increased the velocity of the basic capitalist dynamic. The economies that succeed are those that quickly shift assets to their most productive uses through vigorous economic churning–through business start-ups and the failures that inevitably accompany them. This is what the Austrian economist Joseph Schumpeter famously termed “the perennial gale of creative destruction.”
But what the Texas government really wanted to know was this: what would it take to create 3 million new jobs by the year 2020? Hicks’s report offered this answer: the state’s task was to produce not 3 million new jobs but 15 million new ones, because by 2020, most of today’s companies will have disappeared. Rather than considering jobs a fixed sum to be protected and augmented, he argued, the state should focus on encouraging that economic churning–on continually re-creating the state’s economy. To promote long-term economic stability, paradoxically, the state would have to promote constant instability.
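Hicks's 3-million-versus-15-million gap is a gross-versus-net distinction: when most of today's employers will vanish, the jobs they take with them must be replaced before any net growth registers. A minimal sketch, with hypothetical inputs since the report's actual figures are not given here:

```python
def gross_jobs_needed(net_new_jobs: float, existing_jobs: float,
                      share_of_existing_lost: float) -> float:
    """Jobs that must be created to net `net_new_jobs` when some share
    of today's job base disappears through churn along the way."""
    return net_new_jobs + existing_jobs * share_of_existing_lost

# Illustrative numbers only: with a job base of ~8 million and churn
# cumulatively erasing 150% of that base over 25 years (jobs lost and
# re-lost as successive firms fail), the gross requirement balloons:
print(gross_jobs_needed(3e6, 8e6, 1.5))  # → 15000000.0
```

The exact inputs are invented for illustration; the structure of the arithmetic is the point, namely that churn makes the gross creation target a multiple of the net one.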
Presented with those messy and unsettling findings, what did the state of Texas do? It decided not to release the report.
The Entrepreneur as Protagonist
This tale captures perfectly a problem that is warping the debate about America’s political economy. As Regina Tracy, executive director of the Research Institute for Small & Emerging Business, in Washington, D.C., explains it: “Legislators get incredibly nervous talking about creative destruction, about the churning process. They don’t want to talk about it.”
We shouldn’t be surprised. First of all, the case for creative destruction can’t be neatly wrapped up in a 15-second sound bite. Second, no politician with even a modicum of regard for his or her political future would stand before voters and advocate economic instability and destruction. And though there is plenty of lobbying pressure to prop up “yesterday’s economy,” Hicks points out, “there is no political constituency for the future, for the firms that aren’t yet born.”
The sad result is that our politicians frame debates and create public policy using a model of an economy that no longer exists. Or as Tracy puts it, a model that “causes us to ask the wrong questions.”
For instance, most communities’ idea of economic development is to lure outside companies through tax abatements and other incentives. Yet from the standpoint of national prosperity, that’s silly. It is, of course, a zero-sum affair: one community’s gain is another’s loss. Rather than fostering innovation and new wealth–by, say, establishing local business incubators, which research has shown are more effective on a cost-per-job-generated basis than business-attraction schemes are–communities grapple over existing wealth as if it were a fixed quantity.
The problem is compounded by the fact that the reigning economic orthodoxy–a neoclassical model known as general equilibrium theory–all but denies the existence of creative destruction. Constructed in the late 19th century and since embellished by generations of economists, equilibrium theory concerns itself mainly with how existing resources are optimized: how supply-and-demand curves determine input, output, prices, and so on. As its name suggests, the theory takes equilibrium to be the normal state of an economy, assuming that market disequilibriums are quickly eliminated through price adjustments. Its mathematically elegant universe cannot cope with entrepreneurs, relegating them to the catchall domain of “external forces,” along with war and weather.
Early in this century, Joseph Schumpeter broke radically with his profession by suggesting that general equilibrium theory missed the point. It ignored what he considered capitalism’s single most important aspect: innovation, which drives economic changes. “The essential point to grasp,” he wrote, with a hint of contempt for his peers and their misleading mechanical metaphors, “is that in dealing with capitalism, we are dealing with an evolutionary process.”
Disequilibriums were not passing anomalies, Schumpeter asserted, but rather the very crux of the capitalist process: an entrepreneurial business–Microsoft, Nucor, Southwest Airlines–enters a market with a technological or organizational innovation, destroys the oligopolistic equilibrium, and siphons off wealth and jobs from the hegemonic giants. In that model of constant disequilibrium, the entrepreneur is the central economic protagonist, the fount of economic progress and quality-of-life improvements.
Two Models, One Economy
Schumpeter’s perspective did not fare well, mostly because it did not lend itself easily to mathematical expression. Strangely, he was remembered more for a fantastically incorrect prediction–that huge corporations would come to dominate the U.S. economy–than for his central thesis of creative destruction. Today a small tribe of maverick economists–particularly Paul Romer of Stanford University–has resuscitated Schumpeter’s vision, contending that it offers an altogether more accurate description of how today’s economy actually works. Yet, as Richard Nelson of Columbia University remarks, “there is scarcely a crowd of us.”
Ongoing resistance to a more dynamic conception of capitalism has unfortunate policy consequences. For example, there’s great self-congratulation in America about “our efficient financial markets,” says Bruce Kirchhoff of the New Jersey Institute of Technology. But “from the small company’s point of view,” he says, “they’re anything but efficient. The transaction costs are huge. The deal flow is horrible.” If there were a more thorough comprehension of the macroeconomics of small enterprises, Kirchhoff argues, the government wouldn’t have waited so long to allow a public market for entrepreneurs looking for equity capital, which the Securities and Exchange Commission only recently OK’d.
Or take the government’s data-gathering efforts. As Kirchhoff points out, they’re designed under the faulty notion that an insignificant number of small companies grow to become large ones. The resulting methodology makes it difficult to determine such basics as how many new jobs entrepreneurial companies contribute to the economy. That, in turn, leaves policymakers in a data vacuum when fashioning economic policy. “What we’re trying to do is sell the American political system on a new view of what the economy is, and to sell it we need good data,” says Kirchhoff. “But there is no data.”
In the absence of firm numbers and understanding, the American mind falls prey to the crooning of narrative economists. These are commentators who, instead of arriving at informed judgments through assiduous empirical analysis, construct story-line interpretations largely out of anecdote and aphorism. Case in point: the New York Times noticed that a lot of people were being downsized from corporations and reacted by running an interminable, near-hysterical series in March 1996 informing us that we’re living through an economic cataclysm of millennial proportions.
That story line, the sensible among us recognized, was simply not true. The data told us so: the combined rate of unemployment and inflation–a pretty good indicator of citizens’ economic well-being–was at a 30-year low. Job creation was at a historic high. Presented with that stubborn evidence, the narrative economists came up with a new fable that seemed to trump it, the oft-repeated joke that the economy produced record numbers of new jobs last year, “and I’ve got three of them.” In time, however, that myth was punctured, too: the president’s Council of Economic Advisers reported that the new jobs being created were relatively high-paying and overwhelmingly full-time.
Why the Times series, then? It was a classic instance of viewing a current phenomenon through an outmoded lens. Through the lens of general equilibrium theory, the mass dislocation of thousands of workers would, indeed, signal that something was dreadfully awry. Through a Schumpeterian lens, on the other hand, it was a sign of something else–something that could best be described as economic speed. In the so-called new economy, companies and even entire industries can swiftly rise and fall as resources are transferred to more fruitful sectors (as, in a healthy economy, they should be). What’s confusing is that a phenomenon that has long been solely associated with economic downturns–massive layoffs–has now become a permanent fixture in the economy.
That is not to suggest that layoffs don’t inflict a severe personal toll; certainly they are painful, depressing, even shattering to the people at their receiving end. (And depending on your view of government, there may be a role for the state to play in easing these transitions through job retraining and other programs.) But the story of thousands losing their jobs at a no-longer-competitive industry giant cannot rightly be generalized into an allegory for the state of the U.S. economy. Is this new economy more uncertain and perhaps a little crueler than its predecessor? Of course. But is it less prosperous? Does it offer fewer opportunities? Quite the opposite.
When it comes to the two components of creative destruction, though, the destruction part generally makes for the more dramatic, compelling narrative. The creation part can fall a tad short on story value (“Silicon Valley Start-up Hires 50!”). “Both economists and popular writers have once more run away with some fragments of reality they happened to grasp,” Schumpeter complained a half century ago. He’d undoubtedly raise the same objection today.
The Fallout from Getting It Wrong
When these spurious interpretations march victorious, we end up with loony legislative prescriptions. In this past election year, they emanated from politicians of all stripes. Republican presidential candidate Pat Buchanan, blaming foreign competition for all America’s woes, demanded protection for dying industries–a surefire recipe for stunting domestic entrepreneurship and securing economic mediocrity. Then-labor secretary Robert Reich (narrative economist extraordinaire) demanded tax breaks for companies that refrain from layoffs–another wonderful means of distorting the economy by attacking a symptom rather than the root problem.
Popular debate is largely a contest of interpretations, and without an informed explanation of what’s actually driving our economy, these muddled, misconceived story lines advance unchecked.
One still might ask, So what? Well, without vibrant domestic entrepreneurship to constantly disrupt and re-create America’s markets, the source of such disruption is more likely to be foreign companies. And when those foreign companies destabilize one of America’s placid oligopolies, it’s the foreigners who benefit from the creation, and America that absorbs the destruction.
We need only look across the Atlantic to observe the perils of suppressing the economic churning. The causes of Western Europe’s abiding double-digit rates of unemployment are manifold, but at their root is the European governments’ near-complete obliviousness to the wealth-creation process. Public dialogue tends to focus on how to apportion output equitably rather than on finding ways to let enterprises create more of it. France may lower its retirement age to 55, and several European countries are toying with a mandated four-day workweek–all with the notion of getting the “starters,” as it were, out of the game to give the young unemployed some playing time. Those admittedly well-intentioned endeavors would require a massive welfare state, whose taxes and regulations would end up dampening entrepreneurial activity and, ironically, worsening unemployment.
Europe’s technocrats might well benefit from a conversation with Donald Hicks. “These data tell us that the economy is in continuing motion, like a children’s top,” says Hicks of his study. “As long as it’s in motion, that’s when it’s healthy. As soon as you try to protect what you have, it will fall over.”
Our understanding of the economy urgently needs to catch up to the reality of what it has become. Only then will we start to hear more sensible interpretations from America’s politicians, economists, and journalists. Until then, there may be a lot of happy talk–empty incantations about fostering entrepreneurship–but precious little concrete progress.
A year ago Peter Drucker warned in these pages that America’s sense of entrepreneurial superiority was “lulling us into a dangerous complacency–not unlike our complacency about management in the early 1970s.” If we don’t find ways to better comprehend and stoke our entrepreneurial economy, his ominous prophecy may catch up to us faster than we think.