Tuesday, December 30, 2014

Secular Religions: Marxism and Free-Market Capitalism

Religion provides comforts to believers at a number of levels.  It provides a feeling of belonging to something bigger than themselves; it makes available a framework in which all things can be explained; decisions which otherwise might be difficult to make are predetermined.  However, as history has so often illustrated, religious belief can also provide justification for discrimination and murder.

Tony Judt discusses the manner in which intellectuals can be caught up in secular belief systems that provide the same comforts, and the same justifications for discrimination and murder, as organized religion.  What follows is based on the chapter Captive Minds in his book The Memory Chalet.

The chapter title is taken from the book of the same name by the Polish writer Czeslaw Milosz.

“….in 1951 he defected to the West and two years later he published his most influential work, The Captive Mind.  Never out of print, it is by far the most insightful and enduring account of the attraction of intellectuals to Stalinism and, more generally, of the appeal of authority and authoritarianism to the intelligentsia.”

“Milosz studies four of his contemporaries and the self-delusions to which they fall prey on their journey from autonomy to obedience, emphasizing what he calls the intellectuals’ need for ‘a feeling of belonging’.”

To make the notion of Marxism as a secular religion plausible to the modern reader, consider this description from Wikipedia:

“According to Marxist analysis, class conflict within capitalism arises due to intensifying contradictions between highly productive mechanized and socialized production performed by the proletariat, and private ownership and private appropriation of the surplus product in the form of surplus value (profit) by a small minority of private owners called the bourgeoisie. As the contradiction becomes apparent to the proletariat, social unrest between the two antagonistic classes intensifies, culminating in a social revolution. The eventual long-term outcome of this revolution would be the establishment of socialism – a socioeconomic system based on cooperative ownership of the means of production, distribution based on one's contribution, and production organized directly for use. Karl Marx hypothesized that, as the productive forces and technology continued to advance, socialism would eventually give way to a communist stage of social development. Communism would be a classless, stateless, humane society erected on common ownership and the principle of ‘From each according to his ability, to each according to his needs’.”

When this sort of speculation is combined with the concept of Darwinian evolution, it can take on the appearance of a natural history of human development to those inclined to believe.  Similar to religions based on a deity, it provides an end sufficiently desirable that just about any means to that end can be justified.  In one case God is on our side; in the other, History is on our side.

In the postwar years when Milosz published his book, this capture of intellectuals by political and social theories was familiar to everyone.  It had occurred with both communism and fascism.  Judt began teaching Milosz to his students in the 1970s and watched how his students’ appreciation of the book changed over the following decades.

“….when I first taught the book in the 1970s, I spent most of my time explaining to would-be radical students why a ‘captive mind’ was not a good thing.  Thirty years on, my young audience is simply mystified: why would someone sell his soul to any idea, much less a repressive one?  By the turn of the twenty-first century, few of my North American students had ever met a Marxist.  A self-abnegating commitment to a secular faith was beyond their imaginative reach.  When I started out my challenge was to explain why people became disillusioned with Marxism; today, the insuperable hurdle one faces is explaining the illusion itself.”

Judt suggests that his students—and the rest of us as well—are captured by a secular faith without even realizing it.  He compares the status of free-market economics today with that of Marxism in the postwar era.

“Our contemporary faith in “the market” rigorously tracks its radical nineteenth-century doppelgänger—the unquestioning belief in necessity, progress and History.  Just as the hapless British Labour chancellor in 1929-1931, Philip Snowden, threw up his hands in the face of the Depression and declared that there was no point opposing the ineluctable laws of capitalism, so Europe’s leaders today scuttle into budgetary austerity to appease ‘the markets’.”

“But ‘the market’ like ‘dialectical materialism’—is just an abstraction: at once ultra rational (its argument trumps all) and the acme of unreason (it is not open to question).  It has its true believers—mediocre thinkers by contrast with the founding fathers, but influential withal; its fellow travelers—who might privately doubt the claims of the dogma but see no alternative to preaching it; and its victims, many of whom in the US especially….proudly proclaim the virtues of a doctrine whose benefits they will never see.”

As with any religion, the believer is relieved of the need to justify the beliefs, and an end can be used to justify the means to attain that end.

“Above all, the thrall in which an ideology holds a people is best measured by their collective inability to imagine alternatives.  We know perfectly well that untrammeled faith in unregulated markets kills: the rigid application of what was until recently the ‘Washington consensus’ in vulnerable developing countries—with its emphasis on tight fiscal policy, privatization, low tariffs, and deregulation—has destroyed millions of livelihoods.  Meanwhile, the stringent ‘commercial terms’ on which vital pharmaceuticals are made available has drastically reduced life expectancy in many places.  But in Margaret Thatcher’s deathless phrase, ‘there is no alternative’.”

It is difficult to appreciate from today’s perspective how powerful Russia and communism appeared after World War II. 

“….it was because History afforded no apparent alternative to a Communist future that so many of Stalin’s foreign admirers were swept into intellectual captivity.  But when Milosz published The Captive Mind, Western intellectuals were still debating among genuinely competitive social models—whether social democratic, social market, or regulated market variants of liberal capitalism.  Today, despite the odd Keynesian protest from below the salt, a consensus reigns.”

Judt did not go there, but he could have continued the religious analogy by comparing university economics departments to cloistered monasteries where swearing belief in dogma is required for entry, and a lifetime is spent surrounded by people of identical beliefs.  Once one accepts the dogma, to reconsider its validity is grounds for expulsion.

“One hundred years after his birth, fifty-seven years after the publication of his seminal essay, Milosz’s indictment of the servile intellectual rings truer than ever: ‘his chief characteristic is his fear of thinking for himself’.”

The essays in The Memory Chalet were produced by Judt while he was dying of ALS (Lou Gehrig’s disease).  This malady gradually eliminates muscle control, and by this stage he was effectively quadriplegic.  Fortunately, his mind remained clear, and he found some satisfaction in spending the long nights of solitude reconsidering his life, his experiences, and his acquired knowledge.  He discovered himself arranging these reminiscences into topics and finally into what might be called essays, which he could dictate to an assistant during the day.  It is a remarkable feat.  The professor could not stop trying to educate his students… his readers.


Monday, December 15, 2014

The Creation of the Middle Class

The term “middle class” is ill defined.  People on the top end of any wealth scale would prefer to be thought of as “upper middle class” when tax rates are apportioned.  Those on the lower end are even more determined to be included; the alternative is to be considered “lower class.”  For statisticians and economists, a convenient formula is to define the top 10% as the upper class, the next 40% as the middle class, and the final 50% as the lower class.  Thomas Piketty uses this classification to provide an interesting discussion of how wealth has evolved over time in his book Capital in the Twenty-First Century.

Wealth has been chosen as the quantity to track rather than income or some other attribute, because some degree of wealth is required in order to invest in one’s future in an attempt to better oneself.  Wealth is what you accumulate beyond what must be spent to meet daily needs.  It can include savings, real estate, and financial investments.  It is difficult to view a person as middle class if they are not able to reach a stage where they can save for the future.  Note that Piketty defines capital and wealth as equivalent quantities that include anything that has a monetary value.

Piketty provides his figures and tables here.  Consider this table that examines the wealth distribution in various regions and at various times.



Note the last column titled “Very high Inequality.”  This represents the conditions in Europe around 1910, a period of high prosperity just prior to World War I.  The top 10% possessed 90% of the total wealth, with 50% going to the top 1%.  What may be somewhat surprising is that what we are referring to as the middle and lower classes each possessed only 5% of the wealth.  This led Piketty to conclude that there was effectively no middle class in Europe prior to 1910.  The situation was similar but a bit less extreme in the US, with the upper class holding about 81% of the wealth.

Columns three and four compare the wealth distributions representative of Europe and the US at the current time (evaluated in 2010).  The share possessed by the middle class has now risen to 35% in Europe and 25% in the US.  The share held by the lower class has remained constant over time at 5%.  What these numbers suggest is that a portion of the wealth that had resided in the upper class has been transferred to the middle class.  Or, more precisely, a middle class was created that could be distinguished from the lower class.  If one assumes that the US also had 5% residing in the lower class in 1910, then the US middle class would have had about 14% of the wealth—a slightly more egalitarian distribution than that of Europe.  However, this also indicates that the US middle class gained much less than Europe’s over the last century.
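
To make the arithmetic behind those shares explicit, here is a minimal sketch in Python.  It simply treats the middle 40% share as a residual; the 90%, 81%, and 5% figures come from the table discussed above, and treating the 1910 US lower-class share as 5% is the assumption already noted in the text.

```python
# Middle-class (middle 40%) share computed as a residual of the shares
# quoted above from Piketty's table.  The 5% lower-class share for the
# US in 1910 is the assumption discussed in the text, not a table value.
def middle_class_share(top10, bottom50):
    """Whatever the top 10% and bottom 50% do not hold belongs to the middle 40%."""
    return 100 - top10 - bottom50

europe_1910 = middle_class_share(90, 5)   # -> 5
us_1910     = middle_class_share(81, 5)   # -> 14
europe_2010, us_2010 = 35, 25             # shares quoted for 2010

print(f"Europe: {europe_1910}% -> {europe_2010}%  (gain of {europe_2010 - europe_1910} points)")
print(f"US:     {us_1910}% -> {us_2010}%  (gain of {us_2010 - us_1910} points)")
```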

The figure below plots the share of wealth held by the top 10% over time for Europe and the US.



The wealth share of the top 10% in both regions peaks in 1910 and falls to a minimum in Europe around 1980, while the US share falls less far and bottoms out somewhere in the period 1950-1970.  After 1910 came a continuous series of economic and social shocks: World War I, followed by the Great Depression, followed by World War II, and finally the postwar phase of rebuilding and otherwise responding to this sequence of events.  World War I had little physical effect on the US, but it did require a great increase in spending and an associated rise in the level of taxation.  With Europe experiencing large numbers of fatalities and disabled veterans, the European countries had to tax heavily to cover war costs and to begin assembling what are now referred to as “welfare states.”  The interwar years were difficult in Europe, with debts to be paid, unstable economic conditions, and tumultuous political developments.  The depression years of the 1930s were more consequential for the US, as it needed to tax and spend heavily to support the needs of its population and begin its version of a welfare state.  World War II was far more catastrophic for Europe than the earlier war, causing damage and social disruption on a scale that is barely imaginable to later generations.

Tony Judt provides insight into the effect of recent history on European thinking in his book Postwar.

“The 1960s saw the apogee of the European state.  The relation of the citizen to the state in Western Europe in the course of the previous century had been a shifting compromise between military needs and political claims; the modern rights of newly enfranchised citizens offset by older obligations to defend the realm.  But since 1945 that relationship had come increasingly to be characterized by a dense tissue of social benefits and economic strategies in which it was the state that served its subjects, rather than the other way around.”

With postwar Europe in chaos it was necessary for a strong and active state to organize recovery.  With the success of that recovery came a belief in the efficacy of state-determined policies.

“The state, it was widely believed, would always do a better job than the unrestricted market: not just in dispensing justice and securing the realm, or distributing goods and services, but in designing and applying strategies for social cohesion, moral sustenance and cultural vitality.  The notion that such matters might better be left to enlightened self-interest and the workings of a free market in commodities and ideas was regarded in mainstream European political and academic circles as a quaint relic of pre-Keynesian times: at best a failure to learn the lessons of the Depression, at worst an invitation to conflict and a veiled appeal to the basest human instincts.”

“The state, then, was a good thing; and there was a lot of it….The overwhelming bulk of the increase in spending went on insurance, pensions, health, education and housing.”

This love affair with the state would lose some of its ardor over the years, but the social benefits have mostly remained in place; and the feeling of communalism—we are all in this together—has remained strong.  The decline in the wealth share of the upper class was nearly linear over the period from 1910 to 1970.  The post-apogee period of the European state saw a leveling off of this share followed by a gradual increase that continues to this day.  The result was a share of the wealth for the middle class that increased from 5% to 35%.

The US followed a different path through these trying times.  Spared war on its own turf, its greatest social and economic challenges arose from the Great Depression of the 1930s.  It was in this period that the US passed the Social Security Act, which provided for a pension in retirement, unemployment insurance, aid to families with dependent children, and other welfare benefits that remain the core of the US version of a welfare state (access to healthcare would come much later).

Postwar, there was no rebuilding required except for a reallocation of resources from military production to commercial products—and business was good.  There was, however, the need for a grateful and sympathetic nation to deal with the millions of returning servicemen.

Ira Katznelson provides some insight into these times in the US in his book When Affirmative Action Was White.  Katznelson points out that all social legislation of the time required the support of southern Democratic senators for passage.  In their zeal to ensure that no assistance went to blacks, these senators saw to it that the Social Security Act excluded occupations readily available to blacks, and insisted that social welfare programs be administered at the state and local levels rather than having national standards imposed.

“Across the nation, fully 65 percent of African Americans fell outside the reach of the new program; between 70 and 80 percent in different parts of the South.  Of course, this excision also left out many whites; indeed, some 40 percent in a country that was still substantially agrarian.  Not until 1954, when Republicans controlled the White House, the Senate, and the House of Representatives, and southern Democrats finally lost their ability to mold legislation, were occupational exclusions that had kept the majority of blacks out of the Social Security system eliminated.”

These southern Democrats are still in power in the South today; they have just changed their labels from Democrat to Republican—and they still have influence over all legislation.

The social welfare legislation was designed not to produce prosperity, but to allow those who might fall into poverty to survive poverty.  Surviving poverty is not the same as reaching a state where accumulation of assets can take place.  It is difficult to see this social legislation as being successful in helping many people reach the middle class.

There is another social support effort that must be discussed, one whose aim was to produce prosperity, but only for a limited class of people.  Katznelson refers to the GI Bill passed to support the returning military as a “social revolution” and claims it “created middle class America.”  Of course, the southern legislators again made sure that blacks were hindered from participating in this program; hence Katznelson’s reference to race-based affirmative action in favor of whites.

About 16 million people had been mobilized for the war effort.  The main features of the law were designed to provide time and resources to help those returning make the transition to civilian life.  Katznelson provides this perspective:

“Even today, this legislation, which quickly came to be called the GI Bill of Rights, qualifies as the most wide-ranging set of social benefits ever offered by the federal government in a single comprehensive initiative....it reached eight of ten men born during the 1920s.”

“One by one, family by family, these expenditures transformed the United States by the way they eased the pathway of the soldiers—the generation that was marrying and setting forth into adulthood—returning to civilian life.  With the help of the GI Bill, millions bought homes, attended college, started business ventures, and found jobs commensurate with their skills....this legislation created middle class America.  No other instrument was nearly as important.”

The scale of the investment in human capital is staggering to those accustomed to today’s parsimonious legislators.

“More than 200,000 used the bill’s access to capital to acquire farms and start businesses.  Veterans Administration mortgages paid for nearly 5 million new homes.  Prior to the Second World War, banks often demanded that buyers pay half in cash and imposed short loan periods, effectively restricting purchase to the upper middle class and upper class.  With GI Bill interest rates capped at modest rates, and down payments waived for loans up to thirty years, the potential clientele broadened dramatically.”

The government spent more on educating its returning soldiers than it spent on rebuilding devastated Europe.

“By 1950, the federal government had spent more on schooling for veterans than on expenditures for the Marshall Plan....On the eve of the Second World War, some 160,000 Americans were graduating from college each year.  By the end of the decade, this number had tripled, to some 500,000.  By 1955, about 2,250,000 veterans had participated in higher education.  The country gained more than 400,000 engineers, 200,000 teachers, 90,000 scientists, 60,000 doctors, and 22,000 dentists....Another 5,600,000 veterans enrolled in some 10,000 vocational institutions to study a wide array of trades from carpentry to refrigeration, plumbing to electricity, automobile and airplane repair to business training.”

“For most returning soldiers, the full range of benefits—the entire cost of tuition plus a living stipend—was relatively easy to obtain....”

The numbers quoted above indicate the scale of the investment in a population that was a bit less than half of the nation’s current population.  Another way to examine the immensity of the program is by looking at expenditures.

“By 1948, 15 percent of the federal budget was devoted to the GI Bill....”

Today, 15 percent of the federal budget would amount to about $600 billion per year.  Such a program was truly large and ambitious, but did it create a middle class?  It is difficult to believe that it made no difference, but the plot of the upper class’s wealth share shows no effect in the postwar period.  While the GI Bill was certainly a significant social event, it cannot be considered a revolution.  Revolutions create a legacy.  Where are the subsidized higher education and mortgages today?
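
As a rough, back-of-the-envelope check on that figure (a sketch only; the $3.5–4 trillion range for total federal outlays is my assumption for the mid-2010s, not a number from Katznelson):

```python
# What 15% of the federal budget means in today's dollars.
# Assumption: total federal outlays of roughly $3.5-4.0 trillion per year.
for outlays_trillions in (3.5, 4.0):
    gi_bill_scale = 0.15 * outlays_trillions * 1000  # billions of dollars
    print(f"15% of a ${outlays_trillions:.1f} trillion budget = ${gi_bill_scale:.0f} billion per year")
```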

Where an effect is seen in the wealth distribution is in the 1930s, when the Social Security Act was implemented to counter the effects of the Depression.  Even so, that shift may have had more to do with the stock market crash than with any wealth redistribution.

It is clear that a significant amount of wealth has been accumulated over the last century by the 40% of the population that has been defined to be the middle class.  How exactly has that happened?  It is also clear that the bottom half of the population has been excluded from any gain in the share of wealth.  Why is that so?  Whatever the mechanism, it appears that Europe has been more effective at spreading the wealth.  Its middle class saw its share rise from 5% to 35%, a gain of 30 percentage points.  In the US, the middle class share went from about 14% to 25%, a gain of 11 percentage points.

Piketty clearly believes that taxation is the driving mechanism.

“….it is important to note that the effect of the tax on capital income is not to reduce the total accumulation of wealth but to modify the structure of the wealth distribution over the long run.  In terms of the theoretical model, as well as in the historical data, an increase in the tax on capital income from 0 to 30 percent (reducing the net return on capital from 5 to 3.5 percent) may well leave the total stock of capital unchanged over the long run for the simple reason that the decrease in the upper centile’s share of wealth is compensated by the rise of the middle class.  This is precisely what happened in the twentieth century—although the lesson is sometimes forgotten today.”
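
The arithmetic behind the quoted returns is worth making explicit; a minimal sketch (the 5 percent gross return and 30 percent tax rate are the figures in the quote):

```python
# Piketty's example: taxing capital income lowers the net return at which
# large fortunes compound, without changing the gross return on capital.
gross_return = 0.05   # 5% return on capital (from the quote)
tax_rate     = 0.30   # capital-income tax raised from 0 to 30% (from the quote)

net_return = gross_return * (1 - tax_rate)
print(f"Net return after tax: {net_return:.2%}")   # 3.50%, matching the quote
```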

Since Europe has been more successful in building up its middle class, it should be of value to consider how it might have accomplished that.

Clearly one can modify a wealth distribution by confiscating that of one group and giving it to another, but that is not a sustainable scheme.  Piketty explains the current approach:

“….modern redistribution does not consist in transferring income from the rich to the poor, at least not in so explicit a way.  It consists rather in financing public services and replacement incomes that are more or less equal for everyone, especially in the areas of health, education, and pensions.”

This concept of providing services “equal for everyone” could be the key.  Europe was emerging from a series of catastrophes, but it also remembered that it had endured centuries of highly unequal societies.

Consider this input from Tony Judt:

“Why were Europeans willing to pay so much for insurance and other long-term welfare provisions, at a time when life was still truly hard and material shortages endemic?  The first reason is that, precisely because times were difficult, the postwar welfare systems were a guarantee of a certain minimum of justice, or fairness.”

The desire to put in place a system where there was equality of opportunity seemed to be paramount.  A large amount of government spending was required to fund what was necessary.  Different countries went about it in different ways.  From Piketty:

“….a detailed study of French taxes in 2010, which looked at all forms of taxation, found that the overall rate of taxation (47 percent of national income on average) broke down as follows.  The bottom 50 percent of the income distribution pay a rate of 40-45 percent; the next 40 percent pay 45-50 percent; but the top 5 percent and even more the top 1 percent pay lower rates, with the top 0.1 percent paying only 35 percent.”

“….other states, such as Denmark, finance all social spending with an enormous income tax, the revenues from which are allocated to pensions, unemployment and health insurance, and many other purposes.”

So we have a regressive tax system and a progressive tax system that both lead to a more prosperous middle class than exists in the US.  Soaking the rich doesn’t seem to be the motive, or even necessary.  Perhaps the most important thing is the level of tax revenue and the types of services the revenue can provide.

Judt presents this insight into the benefits of the welfare state:

“….although the greatest immediate advantage was felt by the poor, the real long-term beneficiaries were the professional and commercial middle class.  In many cases they had not previously been eligible for work-related health, unemployment or retirement benefits and had been obliged, before the war, to purchase such services and benefits from the private sector.  Now they had full access to them, either free or at low cost.  Taken with the state provision of free or subsidized secondary and higher education for their children, this left the salaried professional and white collar classes with both a better quality of life and more disposable income.  Far from dividing the social classes against each other, the European welfare state bound them closer together than ever before, with a common interest in its preservation and defense.”

Paying taxes begins to look like a good investment.  Turn a block of your wages over to the government and you no longer have to worry about how you might survive a serious medical condition; you no longer have to spend a lifetime saving to pay for your children’s education; and you no longer have another lifetime of worry about saving for retirement.  You come out ahead and can accumulate savings for investing or whatever else you might desire.  In addition, there is that mandated five or six weeks of vacation that you can enjoy with what is left of your income. 

What seems to be happening is taxation as social insurance.  As with all insurance schemes not everyone benefits equally.  The poor see an income floor that protects them from disaster.  The wealthy may pay in more than they get back, but there is value in what they do receive.  The middle class family seems to be in the sweet spot where the return on investment is the greatest.

That is not such a bad way to run a nation.

And now it becomes clear how Europe developed a more prosperous middle class than the US.


Tuesday, December 9, 2014

Life, Death, Feces, and the FDA’s Dilemma

Emily Eakin provided a fascinating description in The New Yorker of developments in one of the most interesting and rapidly moving areas of medical science.  It was titled The Excrement Experiment: Treating Disease with Fecal Transplants.

It has become clear that antibiotics can kill some or all of the bacteria that control functions in our digestive system.  While usually beneficial, antibiotics can cause both long-term and short-term changes in our microbial makeup (our microbiome), leading to chronic illnesses and, occasionally, an immediate threat of death.  Abnormal excursions in bacterial populations can also occur in cases not associated with antibiotics and are suspected of causing other serious health problems.  Since so little is understood concerning our microbiome, the most direct way to introduce a “healthy” distribution of bacteria into the digestive tract is via insertion of what has come to be known as a fecal transplant from a healthy person.  This procedure has now received the more official descriptor fecal microbiota transplantation (FMT).

Although the field is moving fast now, the idea of using human excrement to treat intestinal ailments was first recorded by a Chinese physician well over a thousand years ago.

“In the United States, the first description of FMT appeared sixteen centuries later, in 1958, when Ben Eiseman, a surgeon at the V.A. Hospital in Denver, published four case reports in the journal Surgery. Stool was then widely assumed to be mainly a source of disease; there was little empirical support for the notion that bowel bacteria were important for health. Several of Eiseman’s patients had become deathly ill after the requisite preoperative course of antibiotics, however, and he concluded that the drugs were destroying normal gut flora. He sent a resident to collect stool specimens from a nearby maternity ward, reasoning that pregnant women were likely to be young and healthy and to have avoided antibiotics. The stool, transferred to Eiseman’s patients, saved their lives.”

What was probably afflicting Eiseman’s patients was the nasty bug known as Clostridium difficile.  This is a naturally occurring microbe in our digestive systems that is held in check by the array of other bacteria.  However, when massive doses of antibiotics are applied, usually in hospital settings, C. difficile can be the last microbe left standing since it is highly resistant to known antibiotics.  Left unchecked, it causes severe diarrhea and can be fatal.

“Scattered case reports in the medical literature described C. difficile patients, some on their deathbeds, who received fecal transplants and recovered, often within hours. Then, in January, 2013, The New England Journal of Medicine published the results of the first randomized controlled trial involving FMT, comparing the therapy to treatment with vancomycin for patients with recurrent disease. The trial was ended early when doctors realized that it would be unethical to continue: fewer than a third of the patients given vancomycin recovered, compared with ninety-four per cent of those who underwent fecal transplants—the vast majority after a single treatment. A glowing editorial accompanying the article declared that the trial’s significance ‘goes far beyond the treatment of recurrent or severe C. difficile’ and predicted a spate of research into the benefits of fecal transplants for other diseases.”

For this application, a 94% success rate puts feces in the miracle drug category.  It is not surprising that other applications for FMT would become of interest.  Eakin points out that Eiseman’s results were not totally ignored.

“For years, virtually the only proponent of FMT was Thomas Borody, a gastroenterologist in Sydney, Australia, who, in 1988, after reading Eiseman’s paper, decided to try a fecal transplant on a patient who had contracted an intestinal ailment in Fiji. The patient recovered, and Borody estimates that he has since performed the procedure five thousand times, including, with stool supplied by his father, on his mother, who suffered from crippling constipation. In addition to C. difficile patients, Borody says that he has successfully treated people with autoimmune disorders, including Crohn’s and multiple sclerosis.”

It is not surprising that researchers have begun to postulate and investigate all sorts of disorders that might be associated with dysfunctions of the microbiome, from obesity to autism.

“The Cleveland Clinic named fecal transplantation one of the top ten medical innovations for 2014, and biotech companies are competing to put stool-based therapies through clinical trials and onto the market. In medicine, at any rate, human excrement has become a precious commodity.”

It is also not surprising that with such a readily available product there is a lot of self-medicating going on.  Eakin goes into great detail providing the history of Tom Gravel, who suffered from Crohn’s disease, which causes inflammation of a section of the bowel accompanied by pain and severe diarrhea.  Gravel suffered through ineffective medical interventions for over three years before he decided to try self-administered fecal doses provided by a presumably healthy neighbor.  Gravel was able to obtain relief in this way and intends to continue the procedure indefinitely.

On the one hand, there are how-to books and YouTube videos demonstrating how individuals can try the procedure on their own; on the other hand, there is the FDA, which is legally bound to treat excrement as a drug.

“The agency defines a drug as any material that is intended for “use in the diagnosis, cure, mitigation, treatment, or prevention of disease.” An exception has been written into law for body parts, including skin, bone, and cartilage, which are classified as tissue. But the statute excludes most human secretions from this category.”

“Substances labeled drugs are subject to a rigorous approval process. Pharmaceutical companies typically spend many years and millions of dollars researching and testing a drug before submitting it to the agency for approval. Until the F.D.A. approved a fecal-transplant therapy, the procedure would be considered experimental. In order to offer it to patients, doctors would need to file an investigational new-drug application, or I.N.D., and obtain the agency’s permission.”

“I.N.D.s are intended to capture every aspect of a prospective therapy in exacting detail….one gastroenterologist said that it had taken her hundreds of hours to complete the paperwork. Many others lacked the resources and staff to devote to such a task.”

The FDA is torn between the need to follow a lengthy process designed to ensure patient safety and verify the effectiveness of drugs, and evidence that thousands of sufferers could be provided immediate relief by a simple and inexpensive procedure.

“’What do we do with the fifteen thousand patients who are really desperate for something that works?’ a doctor from the Mayo Clinic asked F.D.A. officials. ‘If your mother shows up with severe or recurrent C. difficile, are you going to not offer something that you know how to do safely, effectively, and say, ‘I can’t do it because the regulatory agencies in the United States have decided that this requires a special licensure’?”

The FDA appears to be trying to be accommodating.

“….the F.D.A. declared an exception for doctors treating recurrent C. difficile: they would be allowed to perform fecal transplants without an I.N.D. In revising its position, the agency said that it would be exercising ‘enforcement discretion’—a temporary measure. As an F.D.A. spokeswoman later explained in an e-mail, the directive did not reflect a change of policy; it was intended as an acknowledgment that ‘there are often few or no other treatment options for these patients’.”

There are good reasons for the FDA to follow the traditional path for drug approval.  However, as Eakin points out, that encourages sufferers to try the procedure on their own.  These people are, in fact, producing a wealth of clinical data that is going to be lost.  A less onerous process that allows doctors to participate in administering doses while monitoring the patients and reporting the results would rapidly accumulate more data than drug manufacturers could ever produce.

The seemingly irreconcilable problem is that excluding excrement from the requirement that it be treated as a drug would require congressional legislation.  Even if Congress were so moved to act, it is unlikely that the pharmaceutical industry would ever give it permission to threaten a potentially large profit source.

Consequently, we are being forced to endure an expensive and lengthy process that only drug companies can afford to pursue.  Eventually they will claim to have turned shit into gold and sell it back to us at as high a price as the market will bear.

The community has already decided on what to call these wonder pills when they finally emerge: crapsules.


Friday, December 5, 2014

Education: Are International Tests Worth Anything?

Diane Ravitch is not a fan of international tests that compare the performance of students from different countries.  She believes that the observation that US students, on average, perform around the middle of the pack has led to the conclusion that this is a national tragedy requiring strong corrective measures in our schools.  Ravitch identifies the problem as lying not with our school systems but with our history of multigenerational poverty and of racial and ethnic discrimination.  She expresses her views in her book Reign of Error: The Hoax of the Privatization Movement and the Danger to America's Public Schools.

Ravitch provides an interesting perspective on the issue of performance testing.  She wishes us to conclude that striving to be at the top of the testing ladder is not a healthy strategy for a nation, and, in fact, is counterproductive.  She introduces us to a study performed by Keith Baker, who was a long-time analyst in the Department of Education.

“He [Baker] reviewed the evidence and concluded that for the United States and about a dozen of the world’s most advanced nations ‘standings in the league tables of international tests are worthless.  There is no association between test scores and national success, and, contrary to one of the major beliefs driving U.S. education policy for nearly half a century, international test scores are nothing to be concerned about.  America’s schools are doing just fine on the world scene’.”

Baker looked at the results of an early international student comparison performed in 1964.

“Baker looked at the per capita gross domestic product of the nations whose students competed in 1964.  He found that ‘the higher a nation’s test score 40 years ago, the worse its economic performance on this measure of national health—the opposite of what the Chicken Littles raising the alarm over the poor test scores of U.S. children claimed would happen.’  The rate of economic growth improved, he held, as test scores dropped.  There was no relation between a nation’s productivity and its test scores.”

How might this make sense?  The goal of education is not just to provide students with knowledge; it is to teach them how to acquire knowledge on their own and to help them learn how to use knowledge effectively.  Neither of these things shows up on tests.

“A certain level of educational achievement may be considered ‘a platform for launching national success, but once that platform is reached, it may be bad policy to pursue further gains in test scores because focusing on the scores diverts attention, effort, and resources away from other factors that are more important determinants of national success’.”

“The United States has been a successful nation, Baker argues, because its schools cultivate a certain ‘spirit’ which he defines as ‘ambition, inquisitiveness, independence, and perhaps most important, the absence of a fixation on testing and test scores’.”

Such a conclusion would certainly be remarkable.  Let us look closer now at what Baker actually provided in his study Are International Tests Worth Anything?

Baker’s paper was published in 2007.  The early study he referred to was the First International Mathematics Study (FIMS).

“FIMS was administered in 1964 to samples of 12-year-olds in 11 nations. Today’s world is largely a world created and operated by the now 55-year-old FIMS generation. If there is a connection between high test scores and national success, it will show up in looking at how well the 1964 FIMS scores predicted where nations are today. Among the 11 FIMS nations, the U.S. finished second to last (ahead of Sweden).”

The nations participating in this study were Australia, Belgium, England, Finland, France, Germany (FRG), Israel, Japan, Netherlands, Scotland, Sweden, and the United States.  England and Scotland are combined in order for Baker to make his point.  He wishes to evaluate how these nations have evolved between 1964 and 2002 in order to determine any correlation between test scores and national performance.  He evaluates the quantities wealth, rate of growth, productivity, quality of life, democracy, and creativity.  This is his conclusion with respect to wealth.

“First, and perhaps most important to a nation, is the creation of wealth. The best measure of generating wealth is per-capita GDP adjusted for cost of living differences, or purchasing power parity (PPPGDP). The wealth of nations scoring higher than the U.S. on FIMS averaged 73% of the per-capita income in the U.S. in 2002.   FIMS scores in 1964 correlate at r = -0.48 with 2002 PPP-GDP. In short, the higher a nation’s test score 40 years ago, the worse its economic performance on this measure of national wealth….”

What Baker seems to be saying is that since the US was wealthier in 1964 than the countries whose students outscored it in math, and since the US is still wealthier, the poor test performance did not matter.  But wouldn’t the growth in wealth over the 1964-2002 interval be a more relevant comparison?  In 1964 many of the countries in the study were still rebuilding from the effects of World War II.  Their wealth had been depleted, but their economic growth would have been strong.

It should be noted that GDP is more closely aligned with income than with wealth.  Wealth and its growth will depend on tax and saving rates and could vary dramatically from country to country for reasons that have nothing to do with education or economic health.  The accumulation of wealth in a nation might not even be considered a good thing, let alone be targeted as a measure of economic prowess.  Consider this chart provided by Thomas Piketty in his book Capital in the Twenty-First Century.



Using the measure of private capital (wealth) divided by national income (essentially GDP), Italy would have to be considered the healthiest economy today.  In any event, the results can change dramatically over time, and the US is far from the dominant nation.  Perhaps per capita GDP growth over time is a better indication of economic efficiency.
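
For reference, the measure behind that chart is Piketty’s capital/income ratio, usually written β.  A minimal sketch of the calculation follows; the figures in it are purely illustrative placeholders, not values read from the chart.

```python
# Piketty's capital/income ratio: beta = private wealth / national income.
# A beta of 6 means private wealth equals roughly six years of national income.
def capital_income_ratio(private_wealth, national_income):
    return private_wealth / national_income

# Illustrative placeholders only (not values read from the chart):
# private wealth of $10 trillion against national income of $1.6 trillion.
beta = capital_income_ratio(10.0e12, 1.6e12)
print(f"beta = {beta:.1f} years of national income")   # beta = 6.2
```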

Baker chooses to address GDP growth, but he limits it to the decade before 2002.  He apparently wishes to look at a time when the children of 1964 would be of an age where they might be expected to be running their nations.  That implies that the children of 1964 were somehow unique and different from those who came before or after—an unlikely assumption.

“One can argue that since the U.S. had a big post-WW II economic lead over the rest of the world, the rate of economic growth is at least as important as GDP as an indicator of national achievement.  The nations that scored better than the U.S. in 1964 had an average economic growth rate for the decade 1992-2002 of 2.5%; the growth rate for the U.S. during that decade was 3.3%. The average economic growth rate for the decade 1992-2002 correlates with FIMS at r = -0.24. Like the generation of wealth, the rate of economic growth for nations improved as test scores dropped.”

One hopes that Baker used per capita GDP growth, because most European countries, along with Japan, have experienced stagnant or decreasing populations, a factor that would depress their total GDP growth relative to that of the relatively fast-growing US population.  Baker does not indicate which data he used.  Let us then turn to Piketty and his data again.  He provides per capita GDP growth rates for North America and Western Europe that span the period of interest.  The numbers for North America would be dominated by US values because of its large population.
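
The reason population matters is that per capita growth is, to a good approximation, total GDP growth minus population growth.  A small sketch: the 3.3% and 2.5% total growth rates are the ones Baker quotes above; the population growth rates are illustrative assumptions, not his data.

```python
# Per capita growth from total GDP growth and population growth.
def per_capita_growth(gdp_growth, pop_growth):
    # Exact relation; for small rates this is roughly gdp_growth - pop_growth.
    return (1 + gdp_growth) / (1 + pop_growth) - 1

# Total growth rates from Baker's comparison; population growth rates are
# illustrative assumptions (faster-growing US vs. near-stagnant Europe/Japan).
us_like     = per_capita_growth(0.033, 0.010)
europe_like = per_capita_growth(0.025, 0.002)

print(f"US-like:     {us_like:.2%} per capita")       # ~2.28%
print(f"Europe-like: {europe_like:.2%} per capita")   # ~2.30%
```

Under assumptions like these, the headline gap in total growth can shrink or even disappear on a per capita basis, which is the point at issue here.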



Using per capita GDP growth as a metric for the efficacy of a given school system would seem to indicate that the higher scoring European nations of 1964 had better scores and better economies than the US at the time.  Eventually, everyone appears to be headed for some common level of excellence.  Trying to use economic factors to determine the strength of a given approach to learning is a highly uncertain process.

Baker wishes to make the case that the US has been better at fostering creativity because it has produced the most patents per capita compared to other countries.

“A good school system should foster creativity.  The number of patents issued in 2004 is one indicator of how creative the generation of students tested in 1964 turned out to be. The average number of patents per million people for the nations with FIMS scores higher than the U.S. is 127. America clobbered the world on creativity, with 326 patents per million people.”

Unfortunately, interpreting patent numbers also requires a number of qualifications.  The race to produce patents can be more an indication of a nation’s business composition and business practices than a direct indicator of creativity.  In addition, most patents arise in technical fields where advanced degrees are required to attain competence.  University technical departments in the US are typically filled with students from other countries.  Many of the patents that Baker is so proud of are actually being produced by people educated in school systems that he would claim are inferior to ours because they perform well on international tests.

The gold standard in international testing is currently PISA (Program for International Student Assessment).  It is an OECD project that has invited many non-OECD countries to participate.  It tests 15-year-olds in math, reading, and science competency, and tries to deduce from the results which factors are effective in educating students.  The PISA people also conduct surveys to deduce non-educational characteristics of those participating so that factors like income level can be assessed in comparing results between students of the varying countries.  PISA also produces country assessments which explain what they believe to be relative lessons learned from the testing.  The latest test was performed in 2012 and the results were released in 2014.  The country rankings and the assessment of the US students can be found here.

The first PISA test was in 2000.  It has been given every three years since.  Baker had early results available to compare with his FIMS data.  He drew these conclusions:

“On these indicators of success, the nations that scored at the PISA average generally outperformed those scoring either above or below average. For example, per capita GDP was $22,495 for the 11 nations scoring above average, $34,414 for the five average nations, and $16,375 for the 11 below-average nations. The same pattern holds for quality of life, democracy, and creativity as measured by patents.”

“International comparisons on many factors show that Norway is the best place in the world to live, and, like the U.S., Norway scored right at the PISA average. Mediocre test scores correlate with better, more successful countries than do top scores (or lower scores). Mediocrity in test scores is, for nations, a good thing! This finding is highly counterintuitive. Why should it be so?”

Baker provides interesting and compelling reasons why average test performance by economically developed countries might be a good thing.  There is more to life than studying for a given test.  Even the Asian countries that do so well on PISA would agree that having children spend all day, year after year, preparing for a national test that will determine their future is an unhealthy environment, even if it makes them proficient on PISA.

Baker’s explanation is presented again here.

“A certain level of educational achievement may be considered a platform for launching national success, but once that platform is reached, it may be bad policy to pursue further gains in test scores because focusing on the scores diverts attention, effort, and resources away from other factors that are more important determinants of national success.”

This is a wonderful hypothesis, but like so many other explanations for academic performance it is just a hypothesis.  His paper does not provide confirmation.

Since we are in the mode of evaluating hypotheses, here is another one for consideration.

It is not difficult to see how a country with a poor school system might still succeed economically.  Such a country will produce a large number of intelligent, well-educated, and creative people in spite of general academic conditions.  The important factor is providing sufficient numbers with the opportunity to use their skills in a productive manner.  Knowledge, creativity and opportunity must come together.  Countries that are efficient at providing opportunities to excel can prosper even if a large fraction of the population is poorly educated.

That is yet another way to view the US.


Tuesday, November 25, 2014

Asia’s Children Pay the Price for Test-Driven Education Systems: Myopia

Amanda Little has produced an interesting article on the Chinese education system in Bloomberg Businessweek.  It has the intriguing title Fixing the Best Schools in the World.  It focuses on Qiu Zhonghai, the principal of Qibao, one of the highest-ranking high schools in Shanghai.

“Shanghai public schools placed first worldwide on the recent PISA (Programme for International Student Assessment) exams, which are administered every three years by the Paris-based Organisation for Economic Co-operation and Development. The average scores of Shanghai students in reading, science, and mathematics were more than 10 percent higher than the scores of students in the legendary Finnish school system, which had been top-ranked until 2009, when Shanghai was first included in the testing, and about 25 percent higher than those of the U.S., which ranked 36th.”

The notion that these schools are in need of fixing arises from the realization that preparing students to do well on a test is not the best way to build life skills in a nation’s children.  China has a long tradition of using a single examination to determine those worthiest to succeed in national life.  Think of it as meritocracy taken to the extreme.

“Suicide is the top cause of death among Chinese youth, and depression among students is widely attributed to stress around high-pressure examinations, especially the dreaded gaokao, which seniors take to determine what university, if any, they can attend. The gaokao is like the SAT on steroids: eight hours of testing in math, science, Chinese language, and a foreign language, that takes place over two days, usually in June of a student’s senior year. ‘It commands the high school student’s whole life….They spend 14 hours a day, six days a week, year after year, cramming their brains full of facts for this one test’.”

One Chinese observer, Jiang Xueqin, provided this assessment:

“Forcing students to study for gaokao is essentially a form of lobotomy—it radically narrows their focus….The effects of chronic fact-cramming are something akin to cutting off their frontal lobes.”

The result is that parents who can afford to are beginning to move their children to private schools that are free from the gaokao system.  The very wealthy have also been looking to send their children abroad, particularly to the United States, to get what they believe will be a more effective education.

“A trend is emerging in which more and more elite students are opting for private schooling outside the gaokao system, says Jiang. If China doesn’t want to lose its best and brightest to U.S. and European universities, eventually Chinese universities will also have to accept students who opt out of the gaokao.”

The daily schedule at Qibao indicates the intensity of this academic environment and suggests why a student might wish to escape to a place with more time for intellectual adventure.

“Their day begins at 6:20 a.m. The 2,000 students at Qibao, half of whom live on campus, gather on an Olympic-size soccer field to the sound of marching band music booming from speakers. Wearing blue-and-white polyester exercise suits with zip-front jackets that serve as their daily uniforms, these 10th- to 12th-graders perform a 20-minute synchronized routine that’s equal parts aerobics and tai chi. Group exercise is followed by a 20-minute breakfast and self-study period, and then, beginning at 7:40, a morning block of five 40-minute classes. Then there’s an hour for PE and lunch, followed by an afternoon block of four 40-minute classes that ends at 4:30 p.m. Evening study hall is 6:30 p.m.-9:30 p.m., then it’s lights out at 10. Of their 14 weekly courses, 12 cover the core national curriculum, which includes math, chemistry, physics, Chinese, English, Chinese literature, and geography. Two are elective courses—Qibao offers nearly 300, ranging from astronomy and paleontology to poetry, U.S. film and culture, visual arts, cooking, and a driving course with simulators.”

This is actually a rather moderate schedule compared to some schools.

“The Qibao schedule is relaxed compared with the infamous Maotanchang High School, for instance, which requires its 10,000 students, all of whom board in a remote town in central China, to wake at 5:30 a.m. to begin their daily schedule of 14 classes—every one designed to optimize their gaokao scores—ending at 10:50 at night.”

What was most troubling about Little’s article was the indication that this regimen is not only unproductive as national policy but also physically harmful to the children.

“Twice a day at Qibao, at 2:50 every afternoon and 8:15 at night, classical flute music floats through the speakers of every classroom and study hall. It’s a signal to students to put down their pencils, close their eyes, and begin their guided seven-minute eye massage. ‘One, two, three, four, five, six, seven, eight. … One, two …’ the teacher chants as students use their thumbs, knuckles, and fingertips to rub circles into acupressure points under the eyebrows, at the bridge of the nose, the sides of the eyes, temples, cheeks, the nape of the neck, and then in sweeping motions across the brows and eyelids.”

“The eye exercises are a government requirement in all Chinese public schools, a response to an epidemic of myopia caused by too much studying. By the end of high school, as many as 90 percent of urban Chinese students are nearsighted—triple the percentage in the U.S. There’s debate about whether the massage exercises actually help, but the students look happy to take any breaks they can get.”

An article in The Economist, Myopia: Losing focus, provides more background on this trend in student myopia and points out that it is not limited to Chinese students.

“The incidence of myopia is high across East Asia, afflicting 80-90% of urban 18-year-olds in Singapore, South Korea and Taiwan. The problem is social rather than genetic. A 2012 study of 15,000 children in the Beijing area found that poor sight was significantly associated with more time spent studying, reading or using electronic devices—along with less time spent outdoors. These habits were more frequently found in higher-income families, says Guo Yin of Beijing Tongren Hospital, that is, those more likely to make their children study intensively. Across East Asia worsening eyesight has taken place alongside a rise in incomes and educational standards.”

Spending too much time studying indoors rather than having sufficient outdoor activity seems to be the problem.

“At the age of six, children in China and Australia have similar rates of myopia. Once they start school, Chinese children spend about an hour a day outside, compared with three or four hours for Australian ones. Schoolchildren in China are often made to take a nap after lunch rather than play outside; they then go home to do far more homework than anywhere outside East Asia. The older children in China are, the more they stay indoors—and not because of the country’s notorious pollution.”

“The biggest factor in short-sightedness is a lack of time spent outdoors. Exposure to daylight helps the retina to release a chemical that slows down an increase in the eye’s axial length, which is what most often causes myopia. A combination of not being outdoors and doing lots of work focusing up close (like writing characters or reading) worsens the problem. But if a child has enough time in the open, they can study all they like and their eyesight should not suffer, says Ian Morgan of Australian National University.”

In China, a short-sighted school system produces short-sighted children.

In the bizarre world of education theorizing, commentators in the United States bemoan the fact that we don’t have a system that matches the Asians in test performance, while the Asians wish they could improve the education they provide by having a system more like that in the United States.


Thursday, November 20, 2014

Capitalism’s Paradox: Too Much Profit, Too Little Demand

There are strange things going on in the economies of some of the most developed nations.  Corporations in these countries seem to be earning more money than they know what to do with.  An article in The Economist provides this chart:

[Chart from The Economist: corporate cash holdings as a percentage of GDP in Japan, South Korea, and the United States]

The amount of funds that corporations are sitting on is enormous in these countries.  To put this in perspective: the US figure is only about 11% of GDP, yet that still comes to roughly $1.9 trillion.  In the darkest hours of the Great Recession, the federal government could muster only about $831 billion in stimulus to save the economy from disaster.  Now, with low growth still a problem, companies are sitting on more than twice that amount with no apparent intention of putting it to use.

The problem is actually worse because many companies use their excess profits to buy back their own shares.  There are two ways to attain growth in stock price: one is to demonstrate a business model that projects growth and even greater profits in the future; the other is to purchase your own shares on the market and drive the price higher.  This source indicates that US corporations spent over $500 billion on these buy-backs in the past year, a near-record amount.
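
The mechanics are simple enough to sketch with a toy calculation (all numbers below are invented for illustration, not drawn from the article): with earnings flat, retiring shares raises earnings per share, and at an unchanged market multiple the price follows.

# Toy arithmetic for how a buy-back can lift a share price even with flat
# earnings (all numbers here are invented for illustration).

earnings = 10e9          # annual profit, in dollars
shares = 1e9             # shares outstanding before the buy-back
pe_ratio = 15            # market multiple, assumed to stay unchanged

price_before = pe_ratio * earnings / shares

shares_retired = 0.1e9   # shares repurchased and retired
price_after = pe_ratio * earnings / (shares - shares_retired)

print(f"price before buy-back: ${price_before:.2f}")
print(f"price after buy-back:  ${price_after:.2f} "
      f"(+{price_after / price_before - 1:.1%})")

On these made-up numbers the share price rises about 11 percent without a single new customer, product, or investment.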

What is peculiar about these findings is that high profits should be associated with a healthy economy, one that provides an incentive for expansion.  If business is so good, why aren’t more companies reinvesting their earnings and pursuing greater growth?

Marx thought that capitalism carried within it the seeds of its own destruction.  Could we be witnessing capitalist economies wandering into some sort of dead end from which they cannot extract themselves?

There are two obvious explanations for a situation in which large profits are earned yet there is no incentive to reinvest the profits in growth.  In one case demand for products is low and there is no expectation that an improvement is forthcoming.  In the second case the company has essentially a monopoly and does not anticipate any potential growth in customers.

It is an interesting exercise to consider whether we have true competition in our major industries, or whether we have allowed a few large corporations to dominate each arena, pretending to compete while tacitly agreeing that a huge market can be shared two or three ways and all can be wealthy.  Much is said about the high-tech competition between Apple and Samsung, and between Microsoft and Google, but is any one of these in danger of being seriously injured by competition from the others?  They all have healthy market shares and enough money on hand to purchase any upstart that might choose to disturb the game.  In a situation where no one is particularly interested in competing on price, what limits profit?

The situation in which no growth in demand is expected does have a distinct Marxian flavor.  Most corporate executives will claim that low demand is the reason they are not investing more in their businesses.  Lack of demand usually means there are too few consumers with money to spend.  Coupled with healthy profits, this suggests that consumers lack money, not interest, and would purchase more if they could.

The article in The Economist focuses on the Asian countries where the cash holdings are incredibly large.

“Japanese firms hold ¥229 trillion ($2.1 trillion) in cash, a massive 44% of GDP. Their South Korean counterparts hold 459 trillion won ($440 billion) or 34% of GDP. That compares with cash holdings of 11% of GDP, or $1.9 trillion, in American firms. If East Asia’s firms spent even half of their huge cash hoards, they could boost global GDP by some 2%.”

Consider the correlation between corporate earnings and wages for workers.

“In South Korea, company earnings have grown faster than wages for more than a decade. In Japan wages fell 3.5% between 1990 and 2012, while prices rose by 5.5%.”

“And East Asia’s economies have also suffered. If they had been paid more, Japanese consumers might have spent more. Korean households, struggling with rapidly-growing debt-burdens, have also been squeezed.”

The situation in the US is similar.

The cash available for spending by most consumers comes from earned wages.  However, wages have not been keeping up with the prices of the things consumers need to purchase, and this has been going on for a very long time.  Clearly this cannot continue indefinitely; at some point consumption must fall.  Would corporations be in a healthier state if they had recycled more of their profits back into the economy?

Are we approaching a Marxian moment when corporate greed will be its own undoing?

Stay tuned.


Monday, November 17, 2014

Our Creativity and Productivity as We Age

Ezekiel J. Emanuel produced a rather interesting article recently in The Atlantic: Why I Hope to Die at 75.   His title is a bit misleading; he does not actually wish to die at 75.  Rather, he believes that beyond that age he should take no measures to extend his life.  From his vantage point at age 57, life at 75 looks sufficiently degraded that it would no longer be worth extending.

“But here is a simple truth that many of us seem to resist: living too long is also a loss. It renders many of us, if not disabled, then faltering and declining, a state that may not be worse than death but is nonetheless deprived. It robs us of our creativity and ability to contribute to work, society, the world. It transforms how people experience us, relate to us, and, most important, remember us. We are no longer remembered as vibrant and engaged but as feeble, ineffectual, even pathetic.”

This logic and his conclusion were rather surprising to those of us who are approaching or have already moved past that target age.  In Aging: Why Would a 57-Year-Old Man Want to Die at 75? a counterargument was presented, pointing out that the people he described as “feeble, ineffectual, even pathetic” seemed to believe that the years he didn’t wish to live were actually the most contented of their lives.

What is of interest here is Emanuel’s statement that aging “robs us of our creativity and ability to contribute to work.”  He makes these claims:

“Even if we aren’t demented, our mental functioning deteriorates as we grow older. Age-associated declines in mental-processing speed, working and long-term memory, and problem-solving are well established. Conversely, distractibility increases. We cannot focus and stay with a project as well as we could when we were young. As we move slower with age, we also think slower.”

“It is not just mental slowing. We literally lose our creativity.”

“….the fact is that by 75, creativity, originality, and productivity are pretty much gone for the vast, vast majority of us.”

He then seems to insult his academic colleagues who spend more time mentoring students in the latter years of their careers instead of focusing on their individual efforts.  The implication is that this occurs because of an age-related decrease in capability, rather than as a logical career choice.

“Mentorship is hugely important. It lets us transmit our collective memory and draw on the wisdom of elders. It is too often undervalued, dismissed as a way to occupy seniors who refuse to retire and who keep repeating the same stories. But it also illuminates a key issue with aging: the constricting of our ambitions and expectations.”

“We accommodate our physical and mental limitations. Our expectations shrink. Aware of our diminishing capacities, we choose ever more restricted activities and projects, to ensure we can fulfill them.”

To support his contentions, Emanuel presents a chart attributed to Dean Keith Simonton, a psychology professor at the University of California, Davis.

[Chart attributed to Dean Keith Simonton: creativity and productivity as a function of age]

Simonton does seem to be the preeminent scholar on the aging and productivity of people who have demonstrated a significant degree of creativity.  Let us see what he actually has to say on the subject.  Simonton produced a short summary of relevant conclusions in bullet form here.  A somewhat longer but still concise version appears in one of his articles, available here.  The latter source is the one used in the present article.

Simonton tells us that we should be careful in interpreting charts such as the one utilized by Emanuel.  They consist of averages over many types of activities, some of which have quite different time histories.  He also suggests that using chronological age as the variable is misleading because it is the career itself that has a time dependence of its own.  People who choose to pursue a particular creative activity starting later in life will follow a similar curve, but it will be shifted along the age axis.

“….we introduce a central finding of the recent empirical literature: The generalized age curve is not a function of chronological age but rather it is determined by career age….People differ tremendously on when they manage to launch themselves in their creative activities.  Whereas those who get off to an exceptionally early start may….find themselves peaking out early in life, others who qualify as veritable ‘late bloomers’ will not get into full stride until they attain ages at which others are leaving the race.”

This introduces the notion of a career trajectory that is more a function of career duration than physical age.  The shape of this productivity dependence on career duration varies considerably from one creative activity to another.

“The occurrence of such interdisciplinary contrasts endorses the conjecture that the career course is decided more by the intrinsic needs of the creative process than by generic extrinsic forces, whether physical illness, family commitments, or administrative responsibilities.”

Simonton provides some examples of differing productivity histories for various creative disciplines.

“Especially noteworthy is the realization that the expected age decrement in creativity in some disciplines is so minuscule that we can hardly talk of a decline at all.  Although in certain creative activities, such as pure mathematics and lyric poetry, the peak may appear relatively early in life, sometimes even in the late 20s and early 30s, with a rapid drop afterwards, in other activities, such as geology and scholarship, the age optimum may occur appreciably later, in the 50s even, with a gentle, even undetectable decrease in productivity later.”
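
Simonton’s point about career age is easy to visualize with a toy curve.  The single-peaked shape and the parameter values below are invented for illustration (this is not Simonton’s fitted model); the only point is that the same career-age curve, launched later in life, peaks at a correspondingly later chronological age.

import numpy as np

def productivity(career_age, ideation=0.04, elaboration=0.05, scale=100):
    # Toy single-peaked career curve: output rises, peaks, then declines
    # slowly.  The functional form and parameters are illustrative only.
    t = np.maximum(career_age, 0)
    return scale * (np.exp(-ideation * t) - np.exp(-elaboration * t))

ages = np.arange(20, 81)
early_starter = productivity(ages - 22)   # career launched at age 22
late_bloomer = productivity(ages - 40)    # same curve, career launched at 40

for label, curve in [("early starter", early_starter),
                     ("late bloomer", late_bloomer)]:
    print(f"{label}: peak output around age {ages[int(np.argmax(curve))]}")

The two peaks sit exactly as far apart as the two starting ages, which is the “late bloomer” pattern Simonton describes.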

Presumably, Emanuel would categorize himself as an academic scholar.  Had he read Simonton carefully, he might have concluded that, as such, he had reason to expect a long and productive career rather than to assume that the death of his creativity was imminent.

Simonton provides us with another insight into age and productivity: even though less is produced at later stages of a career, the “quality ratio” is undiminished.

“….if one calculates the ratio of creative products to the total number of offerings at each age interval, one finds that this ‘quality ratio’ exhibits no systematic change with age.  As a consequence, the success rate is the same for the senior colleague as it is for the young whippersnapper.  Older creators may indeed be producing fewer hits, but they are equally producing fewer misses as well.”

This allows Simonton to suggest this startling conclusion:

“This probabilistic connection between quantity and quality, which has been styled the ‘constant probability of success’ principle….strongly implies that an individual’s creative powers remain intact throughout the life span.”

In other words, the decrease in creative output as a career progresses can be caused by many factors other than age.  Perhaps a professor at a university will choose to spend more time with students later in his career.  That is, after all, what professors are supposed to do.  Others may find a new creative outlet and gradually transition to a new discipline.  Artists may try to improve their “quality ratio” by investing more time and effort into each piece.

Simonton finishes with this conclusion:

“….the career trajectory reflects not the inexorable progression of an aging process tied extrinsically to chronological age, but rather entails the intrinsic working out of a person’s creative potential by successive acts of self-actualization.”

Damn!  We might as well live as long as we can.



Ezekiel Emanuel is director of the Clinical Bioethics Department at the U.S. National Institutes of Health and heads the Department of Medical Ethics & Health Policy at the University of Pennsylvania.

Wednesday, November 12, 2014

Evolution and the “Sharing Gene”

Sociobiology is the name given to the attempt to associate observed social traits in humans with genetically acquired traits favored by natural selection.  Attempts to describe human social characteristics within this framework have been controversial, mainly because it is difficult to separate behaviors that might have been learned through social interaction from those that might be innate.  It is also difficult to explain how social characteristics might have become genetically embedded by natural selection.  While the behavior of an individual can certainly be affected by genetic dispositions, these can be characterized as a form of genetic noise leading to individual variation within the population, not a species-wide tendency.  Those who take positions on the issue seem to be driven as much by philosophy as by science.

One of the difficulties faced by sociobiologists is the need to explain the observed altruistic behavior in humans.  People do cooperate even when the benefits of cooperation are unevenly distributed and it might not be in their immediate self-interest.  People have been observed to sacrifice even their lives to protect the lives of others.  How does one explain this type of behavior using natural selection?

The simplest Darwinian approach to evolution is based on the presumed drive of individual organisms to ensure that their genes are propagated forward into the gene pool.  This is the “survival of the fittest” prescription.  The fittest is the one who produces the most offspring that make it into the next generation.  The mechanism of natural selection should then deselect any genetically driven behavior that diminishes an individual’s opportunity to procreate and propagate its genes.  Such a trait should then disappear.  What is left is an arena in which individuals compete with each other to breed, to eat, and to control territory.
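
The logic of that deselection can be demonstrated with a toy simulation (population size, cost, and generation count are all invented for illustration): a trait that reduces its carrier’s reproductive success, with no offsetting benefit, drifts steadily toward extinction.

import random

# Toy individual-selection sketch (population size, cost, and generation
# count are invented): a trait that reduces its carrier's reproductive
# success, with no offsetting benefit, is driven toward extinction.

POP = 1000
COST = 0.1               # altruists leave 10% fewer offspring on average
GENERATIONS = 200

population = ["altruist"] * (POP // 2) + ["selfish"] * (POP // 2)

for _ in range(GENERATIONS):
    weights = [1 - COST if trait == "altruist" else 1.0 for trait in population]
    population = random.choices(population, weights=weights, k=POP)

share = population.count("altruist") / POP
print(f"altruist frequency after {GENERATIONS} generations: {share:.1%}")

Run as written, the costly trait all but vanishes, which is precisely why unconditional self-sacrifice is hard to explain by individual selection alone.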

The situation becomes more complex when animals form kinship relationships and begin living in bands.  All sorts of social constraints and other behaviors become operative, and they vary considerably from species to species.  People seem to have little trouble believing that animal behavior is genetically based, but tend to resist the notion that human interactions are so constructed.

Humans live in groups that impose constraints on individual actions, and they have kin relationships that also impose constraints.  However, from tribe to tribe and society to society, these social rules can vary dramatically.  Consequently, the specific constraints cannot be genetic in nature; it must be the general willingness to live under a set of rules in order to enhance the survival of the band, tribe, or nation that might be genetic.  If this is correct, then how exactly did natural selection propagate this characteristic?

Edward O. Wilson is the scientist most closely associated with sociobiology.  In his book The Social Conquest of Earth, he postulates that human evolution was dominated by the need for bands of humans to compete with one another for resources.  This is an extension of the “survival of the fittest” theme.

“Our bloody nature, it can now be argued in the context of modern biology, is ingrained because group-versus-group was a principal driving force that made us what we are.  In prehistory, group selection lifted the hominids that became territorial carnivores to heights of solidarity, to genius, to enterprise.  And to fear.  Each tribe knew with justification that if it was not armed and ready, its very existence was imperiled.  Throughout history, the escalation of a large part of technology has had combat as its central purpose.”

“It should not be thought that war, often accompanied by genocide, is a cultural artifact of a few societies.  Nor has it been an aberration of history, a result of the growing pains of our species’ maturation.  Wars and genocide have been universal and eternal, respecting no particular time or culture.”

The groups that survive this competition are presumably those with the strongest social traits—those that allow their members to cooperate in battle even though some can expect to suffer more than others.  Some advantage of group membership must outweigh an individual’s innate tendency to act selfishly to enhance its own survival.  Wilson sees this conflict between individual and group benefits as an inevitable characteristic of human societies.

“An unavoidable and perpetual war exists between honor, virtue, and duty, the products of group selection, on one side, and selfishness, cowardice, and hypocrisy, the products of individual selection on the other.”

However, genetic traits that support group collaboration within individuals must somehow get distributed to the group if group-beneficial behavior is to dominate.  Or, the genetic content of the group as a whole must be selected by superior procreative performance versus less effective groups.  Wilson refers to something called “multilevel selection.”  The process by which this occurs is a bit murky.

“Multilevel selection consists of the interaction between forces of selection that target traits of individual members and other forces of selection that target traits of the group as a whole.  The new theory is meant to replace the traditional theory based on pedigree kinship or some comparable measure of genetic relatedness.”

Wilson’s concept of human evolution being dominated by “universal and eternal” warfare was discussed in Are Humans Inherently Warlike? and found wanting.  The supposition that humanity’s characteristics were honed in group competition amid “universal and eternal” warfare may find some support in the brief moment that is recorded history, but what about the previous million years or so?  In seeking a genetic basis for subsequent evolution, one should look earlier in time to ascertain why people felt compelled to form groups in the first place.

What characterizes humans, and differentiates them from the other apes, is the development of social skills.  Chimpanzees are quite capable of conducting warfare with another band of chimps.  In fact, when human soldiers want to conduct an operation silently they use hand signals that would easily be understood by a chimp.  So why would warfare be a means of selecting the development of complex social skills?

Wilson provides his thoughts on the life of the early hunter-gatherer.

“Throughout their evolutionary past, during hundreds of thousands of years, they had been hunter-gatherers.  They lived in small bands, similar to present-day surviving bands composed of at least thirty and no more than a hundred or so individuals.  These groups were sparsely distributed.”

“Between 130,000 and 90,000 years ago, a period of aridity gripped tropical Africa far more extreme than any that had been experienced for tens of millennia previously.  The result was the forced retreat of early humanity to a much smaller range and its fall to a perilously low level in population….The size of the total Homo sapiens population on the African continent descended into the thousands and for a long while the future conqueror species risked complete extinction.”

Early humans endured periods when resources were so scarce that they faced near extinction.  Is this a situation in which one might expect starving people to go looking for someone to fight with?  Might they not have more wisely looked for someone able to help them?

Early hunter-gatherers lived hand to mouth.  They didn’t have stockpiles of food for anyone to steal.  They were also subject to extreme variations in the success of their hunting and gathering.  If a band was, on average, finding just enough food to survive, then some individuals would gather less than they needed and others would gather more.  If they had not learned to share food, each individual would eventually hit an extended period of lean foraging and starve, and the band would disappear.

Sarah Blaffer Hrdy describes a study of a present-day band of hunter-gatherers in her book Mothers and Others: The Evolutionary Origins of Mutual Understanding.  It illustrates how, even in relatively benign times, sharing within a band was necessary.

“The sporadic success and frequent failures of big-game hunters is a chronic challenge for hungry families among traditional hunter-gatherers.  One particularly detailed case study of South American foragers suggests that roughly 27 percent of the time a family would fall short of the 1,000 calories of food per person per day needed to maintain body weight.  With sharing, however, a person can take advantage of someone else’s good fortune to tide him through lean times.  Without it, perpetually hungry people would fall below the minimum number of calories they needed.  The researchers calculated that once every 17 years, caloric deficits for nonsharers would fall below 50 percent of what was needed 21 days in a row, a recipe for starvation.  By pooling their risk, the proportion of days that people suffered from such caloric shortfalls fell from 27 percent to only 3 percent.”
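
The effect of pooling is easy to reproduce with a toy Monte Carlo sketch.  The distribution of daily foraging returns below is an arbitrary assumption (the study’s actual data are not used), but the qualitative result is the same: individual days are feast or famine, while the band’s average day is rarely a famine.

import numpy as np

rng = np.random.default_rng(0)

BAND_SIZE = 30           # individuals pooling their food (illustrative)
DAYS = 100_000           # simulated foraging days
NEED = 1000              # calories per person per day, as in the study quoted

# Daily per-person foraging returns: highly variable, averaging somewhat
# above the daily need.  The lognormal shape and its parameters are an
# arbitrary illustrative choice, not the study's data.
returns = rng.lognormal(mean=np.log(1200), sigma=0.6, size=(DAYS, BAND_SIZE))

# Without sharing, each person eats only what he or she brings in.
solo_shortfall = (returns < NEED).mean()

# With sharing, the band pools the day's take and splits it evenly.
pooled_shortfall = (returns.mean(axis=1) < NEED).mean()

print(f"shortfall days without sharing: {solo_shortfall:.1%}")
print(f"shortfall days with pooling:    {pooled_shortfall:.1%}")

With these invented numbers the lone forager falls short on more than a third of days while the pooled band falls short on only a tiny fraction; the precise figures differ from the study’s, but the mechanism is the one Hrdy describes.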

What is the purpose of living in a band if not to benefit from cooperation with other members?

The sharing of food is only one example of a benefit.  Hrdy believes that one of the great advances made by humans occurred when women learned to share the responsibility for raising children.  That gave an individual mother more time to gather food and allowed her to give birth more frequently.  This and other cooperative activities could only exist if humans developed the ability to interpret, understand, and empathize with the feelings and intentions of others.

Hrdy believes this capability to share and cooperate has become hard-wired within us.

“From a tender age and without special training, modern humans identify with the plights of others and without being asked, volunteer to help and share, even with strangers.  In these respects, our line of apes is in a class by itself.”

“This ability to identify with others and vicariously experience their suffering is not simply learned: It is a part of us.”

Evolution and survival of the fittest need not be viewed as a competition that rewards the strongest; it can also be considered a means of selecting those most effective at cooperating.  Robert Trivers formulated an explanation for how repeated acts of altruism (or cooperation and sharing) could lead to natural selection of the tendency to perform those acts.  The key is that even the earliest humans understood self-interest to be a long-term matter, not one of immediate gratification.

“In a 1971 paper Robert Trivers demonstrated how reciprocal altruism can evolve between unrelated individuals, even between individuals of entirely different species…. As Trivers says, it ‘take[s] the altruism out of altruism.’ The Randian premise that self-interest is paramount is largely unchallenged, but turned on its head by recognition of a broader, more profound view of what constitutes self-interest.”
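
Trivers’s argument is usually illustrated with repeated exchanges in which partners remember how they were treated last time.  Here is a minimal sketch using standard iterated prisoner’s dilemma payoffs (textbook values, not numbers from Trivers’s paper): a reciprocator who shares with those who shared with it does far better over many encounters than an unconditional cheat, even though cheating wins any single encounter.

# Reciprocal altruism as repeated exchange: a minimal iterated prisoner's
# dilemma with standard textbook payoffs (not numbers from Trivers's paper).

PAYOFF = {               # (my move, partner's move) -> my payoff
    ("C", "C"): 3,       # both share
    ("C", "D"): 0,       # I share, partner cheats
    ("D", "C"): 5,       # I cheat a sharer
    ("D", "D"): 1,       # neither shares
}

def tit_for_tat(history):
    # Share on the first meeting, then copy the partner's previous move.
    return "C" if not history else history[-1][1]

def always_defect(history):
    return "D"

def play(strategy_a, strategy_b, rounds=200):
    history_a, history_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        a, b = strategy_a(history_a), strategy_b(history_b)
        score_a += PAYOFF[(a, b)]
        score_b += PAYOFF[(b, a)]
        history_a.append((a, b))
        history_b.append((b, a))
    return score_a, score_b

print("reciprocator vs reciprocator:", play(tit_for_tat, tit_for_tat))
print("reciprocator vs cheat:       ", play(tit_for_tat, always_defect))
print("cheat vs cheat:              ", play(always_defect, always_defect))

Two reciprocators end up with 600 points each, two cheats with 200 each, and a cheat gains only a small one-time edge over a reciprocator; long-run self-interest favors the sharers, which is the sense in which reciprocity “take[s] the altruism out of altruism.”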

One can follow Wilson and view humans as having been driven to their current state by “universal and eternal” warfare.

Or, one can view humans as having arrived at their current state via “universal and eternal” cooperation and sharing.

I know which view I prefer.

