Sunday, September 30, 2012

Venetian lessons in economics

Venice has never had substantial natural resources. Yet Venetians, as is obvious to casual visitors who cannot help but notice the many grand palaces, ornate architecture, and extensive decoration, once had great wealth. Venice earned its wealth from two sources: trade and the inventiveness of its people. Although Venice once ruled a considerable area, its expansion was primarily a consequence of Venetian wealth rather than, as tended to be true of British colonies, the source of it. In an era that predated free trade policies, Venice’s territorial possessions, once obtained, did contribute to Venetian wealth.

Today’s Venice is but a shadow of yesteryear’s glory. The city’s population has shrunk by 50%, from 120,000 to 60,000 people, in the last thirty years. The decrease from its peak is even greater. Tourists (more than ten million annually) and sales of items made from hand-blown Murano glass are now Venice’s economic mainstays. Parts of buildings appear empty, and many former commercial enterprises – sometimes half or more on a street; think small businesses such as bars and specialty shops that typically employ 1-5 people – are now shuttered.

A few beggars are on the streets, but I have not seen anyone sleeping rough. A handful of North African street vendors hawk handbags that look like luxury brand knock-offs. As a tourist who does not speak Italian, I find it hard to estimate the unemployment rate. I have not seen clusters of people standing idly during the work day, an indicator of unemployment I’ve witnessed elsewhere, including in the US and the UK. Perhaps the high cost of living in Venice prompts the unemployed to move elsewhere.

Most buildings are visibly in need of repair, a function both of insufficient funds and of the city slowly sinking into the lagoon, which cracks and erodes stone and stucco structures. Streets are paved in stone; in general the trash is routinely collected and there is little litter apart from cigarette butts. I’ve seen a couple of closed bridges over small canals, presumably in need of repairs. I’ve noticed a few public employees working on Venice’s infrastructure and about a dozen crews working on what appears to be private property, but the observable backlog of work dwarfs these scattered efforts, which is unsurprising in view of the sad state of Italy’s economy. (Incidentally, one of the best restaurants at which I’ve eaten in Venice is a favorite of laborers in orange jumpsuits.)

What is striking is the number of what appear to be truly small commercial enterprises, e.g., a fruit shop with one or two staffers or a bar with three or four employees. Customers and proprietors greet one another warmly and frequent the same shops. Some wealthy individuals may own multiple enterprises, but the appearance is consistently one of small, locally owned businesses. Without large parcels of land on which to build a big store and a parking lot for the vehicles of customers drawn from a wide area, Venice is commercially anomalous.

However, the plethora of small shops prompted some musings about commercial futures elsewhere. The English town in which I spent a week this month is known for its local shops (butcher, cheese, baker, clothing, etc.) that successfully coexist alongside national (e.g., Tesco, a large British supermarket chain) and international (e.g., Orvitz, a clothing and outdoors store) businesses. Farmers’ markets are increasingly popular; people choose to buy locally produced goods to support their neighbors, to minimize environmental costs, and to find top quality, hand produced or fabricated goods. If people do decide that quality living is possible with less stuff (cf. my post Post-industrialization), then the fate of truly small, local businesses may not be as dismal as economists and business analysts once predicted.

Having known small business people and farmers through my ministry in small towns in the 1970s, I recall that many of them cherished the quality of life and independence that self-employment made possible. Perhaps we will go back to the future. Some human endeavors (e.g., some scientific research and production of some consumer goods) happen most effectively and efficiently on a larger scale. But that is not true of all human endeavors, and perhaps the industrialization of production epitomized by Henry Ford’s launch of the Model T automobile, having over-reached by the end of the last century, is beginning to ebb.

Satisfying jobs provide the worker with a decent standard of living, honor the worker’s dignity (this can happen by allowing individual expression, initiative, or other ways), permit – perhaps even encourage – the development of meaningful relationships, and contribute directly or indirectly to the community’s general well-being. Individuals opt for various combinations of those diverse rewards depending upon personal preference and need; choices may vary over time. Work that is not broadly consistent with those principles is not conducive to abundant living or personal happiness.

Venice, capitalizing on its heritage, seems to have achieved this for a significant proportion of its diminished population. Otherwise, crime would be higher, people living in poverty would be more readily apparent, and interaction between people would be less frequent.

Friday, September 28, 2012

Venetian museums and government bureaucracy

Spending a month in Venice, Italy, has afforded me opportunity to visit the city’s great museums. The museums here differ from those I have visited in most other countries. First, there are no security screenings and the few guards consistently display an apparently relaxed attitude, sometimes openly napping while on duty. Second, the museums presume that their visitors will behave appropriately. Visitors, if they chose, could touch most of the art and other items on display. Similarly, no protective measures prevent a person from jumping off upper level outdoor areas or from upper story windows. These observations prompted some musings about government bureaucracies.

In the United States, the pervasive government attitude is zero tolerance of fraud (use for private purposes), waste (less than optimal efficient and effective use), and abuse (misuse that is neither fraud nor wasteful, such as intimidation of the public or employees) of public resources. Even the appearance of waste, fraud, or abuse of government resources is verboten. Ironically, this policy actually promotes extensive waste. New incidents of misuse frequently trigger new countermeasures to prevent the problem’s recurrence; the net cost of the preventive measures often far exceeds any potential loss to fraud, waste, and abuse.

Venice has a low crime rate and probably does not have places that terrorists deem high-value targets. The vast majority of tourists who visit the museums do so to appreciate the art and want the art to be preserved. The guards maintain a visible presence and gently instruct tourists who miss a sign or do not understand it about the rules (e.g., no flash photography or sitting on certain pieces of furniture). I’ve visited ten major museums here and not seen any damaged art, excepting a couple of pieces damaged by well-intentioned but inept professional art restorers. Even with a lack of railings, protective glass, etc., and the paucity of guards, visitors treat the art and displayed items respectfully.

If one reasonably presumes that an occasional unintentional act or intentional incident of vandalism damages a precious object, the Venetians seem to make much better use of government and institutional resources than is the norm in the United States. Given the poor condition of the Italian economy, zero tolerance for fraud, waste, and abuse would necessitate closing most museums or charging such an exorbitantly high admission fee that few people would visit.

Zero tolerance often connotes bad ethics. When life or limb literally depends upon zero tolerance for errors, then redundant systems to ensure no mistakes can be appropriate. Performing critical airplane repairs correctly and ensuring that a surgeon operates on the proper body part are examples of when zero tolerance can make sense. But many times, in societies increasingly dependent upon automation, zero tolerance for errors reflects shoddy thinking (achieve zero tolerance because we theoretically can), incorporates excessive cost into the effort, unnecessarily restricts human freedom, and unhelpfully diminishes personal responsibility.

Much government regulation falls into this category, a reflection of our lack of trust in government and one another:

The crippling effects of lack of trust become especially disturbing when we consider that the United States has been on a downward slope of trust since the 1960s, when 58 percent of Americans said that they trust others. Today that number is 34 percent. (Paul J. Zak, The Moral Molecule: The Source of Love and Prosperity, p. 176)

Life is messy. That is one of the morals of the story of Adam and Eve’s expulsion from the Garden of Eden. Zero tolerance implies intolerance for the messiness that results from human ineptitude, human selfishness, and factors beyond one’s control (aka chance). The Venetian attitude toward protecting their city’s cherished art and heritage offers a helpful corrective to people and communities frustrated by bureaucratic bloat.

Wednesday, September 26, 2012

Afghanistan update

“So long, pal” provides an update on the war in Afghanistan (The Economist, September 22, 2012). In the wake of repeated, high-profile attacks by Afghan forces on their NATO counterparts, the senior NATO commander in Afghanistan, General John Allen, USMC, has suspended most joint NATO-Afghan patrols. Now, all such patrols require approval from a general officer. Concurrently, new efforts are underway to improve training and recruitment for Afghan units. The article is worth reading and contains much good analysis.

However, the article fails to address three fundamental problems with that war.

First, military patrols are generally ineffective in establishing government control over territory. Patrols invite attack, thereby allowing a patrol to engage insurgents. But even under optimal conditions, there are too few patrols and too much ground to permit patrolling to become an effective means of exerting military control over an area. Counterinsurgency doctrine calls for ratios of 1 to 20, that is, 1 military/police for every 20 citizens. Whether that ratio is sufficient in a nation with as scattered a population and rugged terrain as Afghanistan is unknown. Even the most optimistic forecasts do not envision Afghanistan forces reaching that ratio. Patrolling failed as a tactic in Vietnam, Iraq, and elsewhere. There is no reason to believe that it will work in Afghanistan.

Second, language and cultural barriers further impede the effectiveness of patrols as well as pose major obstacles to establishing real trust and genuine cooperation between NATO personnel and Afghan forces. Without substantial linguistic skills, the only way a patrol can distinguish between friend and foe is on the basis of hostile actions. The story of our occupation of Afghanistan is a long narrative of misunderstood locals, situations wrongly perceived as hostile or as friendly because foreigners did not know the language and the culture. This is not an effective way to make friends.

Third, even if one overcame both of those problems, Afghanistan has a corrupt central government that exerts little influence in most areas of the country. Local powers govern Afghanistan. Pretending otherwise does not change the actual situation.

I want uniformed military leaders who believe that victory is possible (remember John Paul Jones, who in the face of near-certain defeat declared, “I have not yet begun to fight”). But I also want those uniformed leaders subordinate to elected civilian leaders who have less invested in the fighting and more responsibility for the larger view.

An important criterion for a just war is that the war has a reasonable chance of success, i.e., moving toward real peace. The fighting in Afghanistan fails this test. If success were possible, surely we could have achieved it in less than a decade. Furthermore, if success were possible, then our elected leaders would have no reason to conduct the war with minimal Congressional scrutiny, e.g., using deficit financing to pay for the war. NATO and the U.S. do not have vital interests at stake in this war, something which public scrutiny can highlight. We cannot stop Afghans from killing one another. We can stop the pointless killing of Afghans by foreign military personnel and the equally pointless NATO and U.S. military casualties in Afghanistan.

Monday, September 24, 2012

Victim mentality

Christopher Coker in The Warrior Ethos (London, UK: Routledge, 2007, p. 102) wrote, “Indeed, our predisposition to regard soldiers as victims encourages them to exaggerate their own vulnerability to emotional and psychological stress. Warriors, like the rest of us, are only human.”

I do not know to what extent the apparent – apparent because of insufficient and imprecise reporting in previous wars – increase in the number of U.S. armed forces personnel returning from Afghanistan or Iraq suffering from some form or degree of post-traumatic stress disorder (PTSD) is a function of better reporting or our society encouraging the military to regard themselves as victims. However, American society increasingly encourages all of its members to view themselves as potential victims with multiple vulnerabilities. The litigiousness of Americans is a similar, and also unfortunate, expression of this developing victim mentality.

Before I progress further, let me hasten to add an essential caveat. I am not advocating that people adopt an artificial façade of invulnerability that admits no possibility of psychological or emotional injury. Such a façade is entirely false and impossible to maintain in all circumstances. Such a façade also tends to form cold, unfeeling personalities that preclude an individual experiencing the fullness and happiness that are intrinsic to living abundantly.

Ironically, a naïve, widely held, and superficial version of this false façade, and therefore one that exacerbates rather than minimizes victimization, presupposes that being a twenty-first century American means that one should be immune from all potential threats and harms. This implicitly abdicates accepting personal responsibility when things go wrong, as evident in the successful lawsuit against McDonalds by a person who spilled hot coffee when driving a vehicle with a cup of McDonalds’ coffee sitting between their legs.

Yet sometimes we are genuine victims. The story is told of a Jew standing beside a rabbi in one of the concentration camps, forced to watch a fellow inmate being flogged to death. He asked the rabbi why it always seemed to be the lot of the Jew to be the victim of such persecution. The rabbi replied: “In this camp, where you can only be a victim or a perpetrator, we should be proud we are the victims.” That story emphasizes that it is better not to be a victim but that sometimes being a victim is unavoidable.

That creative tension should characterize all of us, but especially warriors and others who frequently go into harm’s way or who routinely face trauma, e.g., first responders and emergency room workers. People in all of these situations need the strength and resilience that comes from imagining one to be invulnerable. But they also need the watchful attention of a caring community to discern when that strength and resilience have failed and the person has become a victim.

Warriors returning from previous wars typically did so without explicit medical intervention. This was especially difficult for Vietnam-era vets, who one day would be facing real threats in Vietnam and the next day be home in the States. Veterans of earlier wars typically had long journeys home, permitting them ample time to tell stories to one another, storytelling that facilitated decompressing and healing.

U.S. warriors returning from Afghanistan and Iraq typically participate in a series of interviews and group sessions designed to encourage storytelling for decompression and healing and to identify those individuals with more serious problems. Practitioners (mental health professionals and chaplains, among others) must be careful not to create the impression that everybody suffers from post-combat trauma. Psychological injuries are real and widespread but only a minority suffers from them.

Contrary to popular opinion, suffering is compatible with the abundant life:

"In Christianity's second millennium, Jesus as an abused and innocent victim, hanging dead on the cross, would become the image of holiness. But for a time - for nearly a thousand years - Christianity offered a different image of sanctity: the glory of God was humanity fully alive." (Rita Nakashima Brock and Rebecca Ann Parker, Saving Paradise: How Christianity Traded Love of this World for Crucifixion and Empire, p. 202)

The rabbi in the concentration camp who counseled his fellow prisoner about being victims, like the suffering Jesus portrayed in the gospels, exemplifies making the most of one’s situation, living as abundantly as circumstances permit in spite of suffering over which one has no control. They rejected a victim mentality – until it became the path of living more abundantly. This is true happiness, human flourishing rooted in being fully alive rather than transient and fleeting pleasures.

Saturday, September 22, 2012

Was Jesus married?

A professor at Harvard Divinity School, Karen King, has translated a small (about 1.5 by 3 inches) fragment of papyrus that includes, in Coptic, the language used by Egyptian Christians, this partial sentence: “Jesus said to them, My wife…” The word used for wife is unambiguous and has only the one meaning.

Was Jesus married?

King’s article, summarized here, carefully states her conclusions:

·         The fragment appears genuine, though tests continue;

·         The fragment is the earliest known evidence of a claim that Jesus had a wife;

·         The fragment, from the fourth century, is too far removed from Jesus to have any value as historical evidence;

·         Nevertheless, the fragment does support evidence showing that early Christians debated the proper role of sex in marriage.

Previous claims that Jesus was married, of which Dan Brown’s The Da Vinci Code is the most notorious though a work of fiction lacking scholarly foundations, have ignited major ecclesial and theological controversies:

·         If Jesus was fully divine and fully human, why would he marry?

·         Would any children from such a union be partially or fully divine, i.e., in what way would a child from that union share in the ontological identity of each parent?

·         Are Jesus’ descendants (if there were any) the rightful head of the Church on earth (the Shiites make this claim regarding the descendants of Mohammed as the rightful leaders of Islam, whereas the Sunnis believe that Islamic leadership belongs to the most capable/devout)?

·         Did Jesus abandon his wife when he began his public ministry?

·         Conversely, how many first century Jewish males living in rural Palestine did not marry?

·         Was Jesus gay (this would explain him not having a wife, but not the affection that he seems to have had for Mary Magdalene, especially according to non-canonical sources)?

As the last of those questions illustrates, the questions can become increasingly far reaching. The truth is that nobody has any data from which to draw any conclusions, no matter how highly tentative, about Jesus’ marital status. Arguments based on an absence of information are worthless in this and all other instances. Professor King, an excellent scholar, is careful to make this point.

Personally, I do not find the idea of Jesus having a wife problematic. The gospels note that Jesus, while dying on the cross, made provision for his mother. There is no reason to suppose that he could not then, or previously, have made provision for his spouse and any children. Culturally, marriage was normative for first century Jewish males. If nothing else, imagining a married Jesus will generally give one a more human image of Jesus, a healthy antidote to the excessively divine image of Jesus found in much art and theology.

Theological issues linked to Jesus’ ontological status – fully human and fully divine according to orthodox Christianity – are not necessarily obstacles to Jesus having had a wife – if one interprets orthodoxy metaphorically or mythically rather than literally. Recent textual studies by Bart Ehrman (e.g., Misquoting Jesus and Jesus Interrupted) and other scholars add support to rejecting a literal interpretation.

Was Jesus married? I don’t know. I do know Jesus was fully human and a man of his time and place. Authors like Dan Brown tell entertaining stories; scholars like Karen King offer careful analysis of sparse data; historians widely agree that more evidence of Jesus having been a real person exists than does for most other historical figures. Ultimately, therefore, individuals must decide who Jesus was and what, if any, significance he might have for them or for life in the twenty-first century. In any case, his marital status contributes nothing to contemporary debates about marriage.

Wednesday, September 19, 2012

Images of God

In my recent Ethical Musings post Arguing against capital punishment, I posited a dichotomy between individuals who have an authoritarian image of God being more likely to support the death penalty and individuals who have a benevolent image of God being less likely to support it. In response, a reader emailed me this comment:

Seems to me such a dichotomy has profound implications and helps to explain raging debates on other topics within and across religions and denominations: evangelical Protestantism vis-à-vis liberal, Orthodox Judaism vis-à-vis Reform, and Sunni Islam vis-à-vis Shia. Perhaps this dichotomy is even more significant than religious affiliation itself.

My correspondent’s examples cloud his point. For example, Sunni Islam has historically adopted a broad, generous approach to diversity that suggests a benevolent God. However, the Wahhabi sect of Sunni Islam and other linked groups (e.g., the Deobandi in Pakistan and al Qaeda (an Islamist terrorist organization and not a Muslim sect)) have a narrow, authoritarian image of God that sharply contrasts with historic Sunni Islam. Ultra-orthodox Judaism offers a sharper contrast to Reform Judaism than do most expressions of Orthodox Judaism. Similar diversity exists within both liberal and evangelical Protestantism, although liberals tend more toward a benevolent image of God than do evangelicals.

As with any stereotype, exceptions exist. However, a stereotype’s power derives from the insights facilitated by its broad description rather than the accurate characterization of each particular instance. That caveat warns against judging individuals based upon stereotypes. Conversely, ignoring the analytical power of stereotypes impoverishes insight and retards progress.

The dichotomy between benevolent and authoritarian images of God (which, in fact, emphasizes two extremes between which lies a spectrum of gradations) perhaps transcends particular religious affiliation because it reflects a person’s personality or psyche more than anything else. Obviously, environment and genetics help to form that personality or psyche.

If one accepts that God is one, and that only one God exists, then substantial and radical differences in the image of God have little to do with God and much to do with religion, culture, and personality.

The world is both benevolent and cruel. For example, without the world life would be impossible. Reciprocal altruism and human social gregariousness both point to human interdependence. Our capacity to love and to be loved is one of the elements of the human spirit that distinguishes humans, at least in degree, from other species. All of this suggests benevolence. But life is finite. Species exist in competition with one another. Even within Homo sapiens, competition exists in tension with the need for interdependence. We necessarily treat some species as food sources – even vegans do this. Ergo, one can easily perceive the world as a cruel place.

One’s dominant view of the world as benevolent or cruel to some extent shapes that person’s image of God. For atheists and agnostics, my choice of terms – benevolent and cruel – will often be problematic because the terms imply a value judgment they are unwilling to make. The world simply is; no reasonable basis exists for imposing a value laden adjective. Yet my sense from conversing with atheists and agnostics over the years is that most of them approach life with an optimism or pessimism that is strikingly similar to characterizing the world as either a benevolent or cruel place.

These musings prompted two final thoughts:

1.    Nobody knows the future, so prepare for the worst and hope for the best, a policy that incorporates the wisdom of pessimism with that of optimism, i.e., bad things do happen, adequate anticipation can mitigate or even minimize negative consequences, but hope is essential for living abundantly and joyously.

2.    God is more likely benevolent than cruel. The world’s major religions all associate benevolence or bliss with the ultimate. Seeking a benevolent image of God helps to open the windows of one’s life so that God’s light may illuminate one’s life and path more fully.

Monday, September 17, 2012


Recently, I visited Iron Bridge in Shropshire, England. Located in a steep-sided valley (or gorge, as the British call it) through which the Severn River runs, the gorge is a UNESCO World Heritage site because the Industrial Revolution began there. A Quaker named Abraham Darby developed a cost-effective method, using coke, for making cast iron at the beginning of the eighteenth century. The gorge had exposed seams of iron ore and coal, and had long been a center for industrial activity. But Darby’s inventiveness, and that of those associated with his enterprise, developed the technologies and incidentally the mass production that marked the beginning of the Industrial Revolution, enriching the lives of the masses with a wealth of affordable material items.

More broadly, take a moment to catalogue, mentally and informally, just a few of the labor saving devices that make your life so much easier and better than the lives of people three hundred years ago. My list includes a wide array of kitchen appliances, clothes washers and dryers, vacuums, TVs, etc. Of course, the list extends beyond one’s home to transport, factories, etc. Many of the items on my list post-date the Industrial Revolution itself, but are nonetheless unforeseen results of what happened at Iron Bridge. Lest you have remaining doubts, visiting the late nineteenth century Victorian village houses and shops at Iron Bridge will emphasize how much easier and richer life is for the majority of Americans and Europeans in the twenty-first century than it was in previous generations. And the Victorians lived more than a century after the Industrial Revolution began!

My visit prompted some ruminating about how much stuff a person actually needs. Mentally consider each room of your house. What is it you do there? What stuff do you need to do it? The Information Age is making content (music, books, and images) available in digital format. Digital primarily requires silicon and electricity, with increasingly small devices for storage and playback. To what extent can digitalization and miniaturization reduce the human ecological footprint while simultaneously improving our quality of life?

Quality clothes, household goods, and furniture may last longer (a thought triggered by an exhibition of the cast iron goods industrial Britain produced) and therefore require fewer limited resources than the disposable items (e.g., trendy clothes, non-repairable appliances, and paper goods) with which we currently fill our houses.

In other words, I began to wonder: can we retain the quality of life improvements that the Industrial Age ushered in while reducing our ecological footprint through better, more ecologically attuned living? One vital element in answering that question affirmatively will be to identify ample, affordable, and sustainable sources of energy. Fossil fuels fail those tests. Many renewable sources of energy (e.g., wind) also currently fail those tests. Solar, fusion, or another source may, in time, meet those standards.

Most British workers during the Victorian era worked six fifteen-hour days every week. Those in service (i.e., employed by the wealthy as servants or staff) generally worked six and a half days. Very few people (mostly the idle rich) enjoyed a life of leisure.

Today, people in industrialized nations typically think in terms of a 35 to 40 hour workweek. Managers and professionals often work more; the unemployed, retired, and some of the underemployed work less. But overall, the workweek has substantially declined over the last one hundred and fifty years.

Acquiring less stuff, stuff of better quality designed to last longer, will probably further reduce the workweek. Additional technological advances that improve productivity and produce goods using fewer resources will also reduce labor requirements. This can create bifurcated cultures, divided between the employed and unemployed. Or it can, through reduced workweeks and earlier retirements, free people for pursuits that include continuing education, creation/appreciation of the arts, and self-care (e.g., exercise and prayer/meditation) – activities that make a person more fully human and for which no machine or electronic device can substitute.

The Darbys, for five generations, lived in houses that overlooked Iron Bridge. Initially, the smoke and pollution of the works signified progress; the river and surrounding countryside effectively mitigated the pollution. Then, being on site became convenient – a means to ensure proper oversight and management, with suffering from the pollution as a cost of doing business. Finally, the family let go of the enterprise, entrusting its management to others. Production and business moved to other sites in England; the family likewise relocated to, literally, greener pastures. Today, people again swim (those willing to brave the cold!) and eat fish caught in the Severn. Progress is possible.

Saturday, September 15, 2012

Disruptive events – part 2

The al Qaeda attacks on 9/11 exemplify a disruptive event experienced concurrently by many people, yet with a variety of responses. Sadly, a great number of Americans saw 9/11 as a defining moment of their lives. I discussed some of those uncreative, unproductive perceptions of 9/11 in my post on the 10th anniversary of 9/11.

The widespread American responses of fear, perhaps even panic, in the aftermath of 9/11, do illustrate the transformative social power of certain disruptive events. The disciples’ response to Jesus’ death and resurrection similarly illustrates the transformative social power of a disruptive event. In both instances, what began as an event in the lives of individuals (a relative handful of people with respect to Jesus, a few thousand on 9/11) became transformative in the lives of literally millions of people.

One essential element in a disruptive event moving from an individual (or small group) to a wide scale socially transformative event is for people to perceive that the disruptive event is germane to their life. In the case of Jesus, the disciples and their later converts believed that through the events of Jesus’ life and death individuals could enter into a unique relationship with God. In the case of 9/11, people believed that the world was no longer the same place as it was on September 10, 2001. This latter claim is patently false: the world had not changed but people’s perception of the world had changed. Perception, as happens so often, defined reality. Although fewer seem to understand the dynamics, the world does not change for people who encounter God through Jesus. God continuously embraces and is present to the whole world. What changed was that a person who had been unaware of the divine presence and love became aware of that loving presence. The person’s perception, as with 9/11, had changed; the world had remained the same.

A second essential element in a disruptive event moving from an individual to a large scale transformative moment is for the initial experience(s) to resonate deeply among a wider audience. First century people (and millions since then) have had a spiritual hunger sated through living into the Jesus story. Analogously, millions of twenty-first century Americans recognized the fragility of life, shattering their illusion of invulnerability.

Socially disruptive events may occur on a mass scale (e.g., an earthquake), to an individual (e.g., Paul on the Damascus road), or to a group whose size lies between those two extremes (e.g., 9/11). The scale of the disruptive event does not invariably have a direct relationship with its transformative potential. Natural disasters often affect masses of people but may prove transformative for few. Most disruptive events that occur in an individual’s life or to a small group have the potential to transform few other lives.

A disruptive event, whatever its scale, resonates deeply in the life of a person uninvolved in the event when that person recognizes the potential for such an event occurring in her/his life, identifies with those directly affected, and vicariously joins the ranks of those affected. Christian theologians typically reverse the terms when referring to Jesus’ death and resurrection as vicarious events. It is not that he identified with us but that we identify with him that makes his death and resurrection transformative.

Thirdly and finally, the transformative effect of a disruptive event may be constructive (life giving) or destructive (life destroying). 9/11, for the vast majority of people, was destructive: fear overwhelmed any pre-existing confidence in God’s gifts of abundant life, happiness, and trust. Conversely, Jesus’ death led people to experience the mystery of God’s gifts of abundant life, happiness, and trust more deeply and fully.

Disruptive events, by their very nature, are beyond a person’s ability to control. Disruptive events occur sporadically and unpredictably in all of our lives. How do we shape our response such that it leads to constructive rather than destructive transformation?

Transformation happens in one of three ways. First, a person may decide to change. In this instance, a disruptive event serves as the catalyst for initiating and perhaps facilitating the change. Second, external events may change the person. A dramatic example of this is a disruptive event that causes a person to lose a limb or the discovery that one suffers from a major, perhaps incurable, disease. The circumstances forever alter one’s life. Less obvious but no less real are the multiple ways in which events beyond one’s control change one’s life (e.g., one’s thinking, predispositions, or decisions), regardless of any illusions to the contrary. Humans, after all, at best enjoy limited autonomy; most of what we do and who we are is a function of genetics, environment, and experience over which we have no control. Third, transformation may result from a combination of personal choice and externally determined factors.

Improving one’s self-awareness (one of the six elements of the human spirit) can aid transformation by increasing the opportunity to respond creatively and lovingly to disruptive events. Improving self-awareness also can deepen one’s relationship with God by opening wider the windows in one’s life through which the divine light can shine more fully. No human can control God’s actions but humans, through openness and attentiveness, can increasingly become aware of who God is and what God is doing.

Imagine how life in the United States, and the world, might have been different had the U.S. and more individuals tried to respond creatively and constructively to the disruptive events of 9/11. Neither the war in Afghanistan nor the war in Iraq would have occurred, saving tens of thousands of lives and more than $1 trillion (entirely deficit financed). The Transportation Security Administration would not exist (this agency provides an illusory façade of security; genuine security depends upon people willingly taking responsibility for their lives, as happened on 9/11 aboard United Airlines Flight 93). Millions of people would not have lost more than $100 billion in the panic that followed 9/11; our economy would have continued largely uninterrupted, denying victory to the criminals who perpetrated the attacks. We would have built bridges to Muslims, whose religion teaches peace through submission to God. The events of 9/11 would have occasioned long-term spiritual renewal rather than the very short-term spike in worship attendance that occurred.

Even when humans have little control over how a disruptive event changes their lives, they can develop a remarkable degree of control over their emotions. A person can opt to reject (not to deny or suppress!) the initial, perhaps involuntary, emotional response and to substitute a different emotional response. This idea is basic to anger management classes and some forms of therapy. I’ve repeatedly helped individuals skeptical about their ability to control their emotions develop that control and change their lives. If nothing else, succumbing to terror cedes a huge psychological victory to terrorists. Conversely, refusing to be terrorized preserves one’s dignity, avoids becoming an emotional victim, and denies terrorists the most critical element of their success, i.e., instilling terror in civilians.

Wednesday, September 12, 2012

Disruptive events

Disruptive events – a major illness, a death of a loved one, a divorce, a big change in employment, etc. – are events over which a person has no control and that are a catalyst for change – good or bad – in a person’s life. Disruptive events always change us, affording an opportunity for growth or disaster.

Some disruptive events are personal (e.g., illness) and others are environmental (an earthquake) or contextual (an employer’s restructuring). Lacking control over disruptive events, we tend to associate negative outcomes with disruptive events (a tendency that a widespread preference for inertia or the status quo reinforces), worry about possible disruptive events, and feel helpless in the face of them.

Not all disruptive events are negative. The birth of a child, marriage, or a big promotion may all constitute disruptive events. Hopefully, all of those examples also promise positive outcomes. As much as we may prefer things to remain static, change is pervasive. Our physical bodies, for example, are largely new every seven years through a gradual process of cell replacement. The process goes largely unnoticed because of its constancy; only as we age do we generally realize that we have gradually become a different person over the passing years.

The possible disruptive events over which we expend the most energy worrying rarely occur. As an experiment, try listing the major disruptive events that you have most feared over the last year. How many actually happened to you? One reason for this disparity may be that we avoid some of these disruptive events by taking precautions or other steps, a process that we may not even comprehend. Another reason may be that we have a more pessimistic outlook on life than is statistically justifiable. Yet another is that most disruptive events are inherently unpredictable, and therefore many of them are not among the bad things about which we worry. Incidentally, worry adds to neither the length nor the quality of one’s life.

When a disruptive event does occur, I spend some time reflecting about what I can and cannot control. Items in the latter category I try to accept as givens, acknowledging that I can do nothing but accept them. Focusing on the items over which I can exert some control allows me to restore my sense of independence and self-respect (essential elements, at least for me, of my sense of dignity and worth). Furthermore, focusing on those items over which I may have some influence helps me to make the best of the situation, working to convert negative disruptive events into at least some form of limited opportunity for good.

Jesus’ arrest and crucifixion were disruptive events that changed his life – and our lives, even if we are not Christian, because the world is a very different place than it would be had the Romans not arrested and executed him. His disciples experienced those disruptive events, and through their belief in his resurrection (in whatever way one understands this), found themselves changed, setting in motion events that changed the Roman Empire and the course of history. I do not think the disciples would have described those disruptive events or their aftermath as fun. But out of disruptions they discovered new and abundant lives that enabled them to do amazing things.

In my own ministry to the seriously ill and dying, I have regularly witnessed people who are living through agonizing disruptive events discover new and more abundant life. Conversations with other clergy, especially hospital and hospice chaplains, are full of similar stories.

If disruptive events are unavoidable, unpredictable, and offer opportunities for transformative experiences, are you ready?

Monday, September 10, 2012

Arguing against capital punishment

The Chief Justice of the U.S. Supreme Court, John Roberts, in his majority opinion in Baze v. Rees, No. 07-5439, the 2008 Kentucky death penalty case challenging the constitutionality of execution by lethal injection, wrote:

Simply because an execution method may result in pain, either by accident or as an inescapable consequence of death, does not establish the sort of ‘objectively intolerable risk of harm’ that qualifies as cruel and unusual [under the Eighth Amendment’s prohibition against cruel and unusual punishment].

A premise underlying Roberts’ comment – that the death penalty is not a kind, gentle act – seems commonsensical to me. Unfortunately, modern culture often lacks an adequate supply of the precious commodity we call common sense. Why would anyone think that capital punishment, however administered, is not painful?

Societies impose the death penalty on convicted criminals for three reasons. First, a society may intend the death penalty to deter people from committing crime. Deterrence obviously proved ineffective with respect to the criminal justly convicted of a crime. Both death penalty proponents and opponents point to research that supposedly supports their argument that the death penalty deters, or does not deter, crime. From my ethical perspective, the research is irrelevant. My ethical problem with justifying the execution of one individual to deter other persons from committing crimes is that this reduces the one executed to a means to an end, thereby denying that person’s inherent dignity and worth as a child of God. Christians should never view a person as simply an instrument for achieving a goal, no matter how laudable the goal. The gospel of Luke’s account of the crucifixion portrays Jesus assuring one of the criminals crucified with Jesus that the two of them, that very day, will be together in Paradise (23:39-43). Jesus clearly regarded the criminals crucified with him, who both acknowledged their guilt, as persons worthy of dignity and respect in spite of their crimes. In Luke’s narrative, one criminal experiences transformation, the other does not.

Admittedly, Scripture’s witness on the issue of deterrence, like the research on deterrence, is inconsistent. Some Biblical passages recognize the value of deterrence:

  • “Stone them to death for trying to turn you away from the Lord your God, who brought you out of the land of Egypt, out of the house of slavery. Then all Israel shall hear and be afraid, and never again do any such wickedness.” - Deuteronomy 13:10-11
  • “All the people will hear and be afraid, and will not act presumptuously again.” - Deuteronomy 17:13
  • “The rest shall hear and be afraid, and a crime such as this shall never again be committed among you.” - Deuteronomy 19:20

Other passages suggest that retribution belongs to God, undercutting the rationale for deterrence:

  • “You shall not take vengeance or bear a grudge against any of your people…” - Leviticus 19:18
  • “Beloved, never avenge yourselves, but leave room for the wrath of God; for it is written, ‘Vengeance is mine, I will repay, says the Lord.’” - Romans 12:19
  • “For we know the one who said, ‘Vengeance is mine, I will repay.’ And again, ‘The Lord will judge his people.’” - Hebrews 10:30

I discuss retribution, the third rationale for the death penalty, below. Suffice it to say, the Deuteronomic passages supporting deterrence reflect a more rigid legalism and a less robust understanding of personhood than I find in Leviticus and the New Testament. These latter passages point to a developing awareness of the demands of loving as God loves. Not surprisingly, the Baylor Institute for Studies of Religion survey, American Piety in the 21st Century, published in September 2006, found that individuals who have an authoritarian image of God are more likely to support the death penalty than individuals who have a benevolent image of God.

Second, society may impose the death penalty intending to prevent a person convicted of a serious crime from further harming anyone else. As a Christian, I have two ethical problems with this rationale. Capital punishment is a final solution that allows no second chance. What if new evidence becomes available that the person executed was in fact innocent? Worse yet, what if the executed person is innocent but nobody ever finds the exculpatory evidence? At least in the first instance, society can release and compensate the convicted person discovered to be innocent. No evidentiary standard, no matter how high it is set, can guarantee that absolutely everyone given the death penalty is in fact guilty.

Even more morally troubling to me, the death penalty makes a large number of people – legislators, police, judges, lawyers, jurors, prison officials – complicit in the death of each person executed. William J. Wiseman, Jr., was a member of the Oklahoma State House of Representatives from 1974 to 1980. He admits that for six years his highest priority, like that of every legislator he has ever known, was retaining his seat. Everything else was in a different category of regard and concern. Philadelphia Quakers had educated Wiseman, and he opposed the death penalty. He believed that at best it was unjustified and at worst immoral.

When a bill came before the legislature to rewrite Oklahoma’s death penalty law, Wiseman found himself in a difficult position. Ninety percent of his district, as measured by a poll that he had commissioned, supported the death penalty. He was afraid that if he voted against the death penalty he would not be re-elected. Wiseman attempted to rationalize supporting the death penalty by seeking a more humane means of execution. The state medical examiner, who sought out Wiseman after learning of his quest for a more humane method of execution, worked with him to draft what became the nation’s first legislation authorizing capital punishment by lethal injection. Over thirty states have copied that groundbreaking legislation.

Today, William Wiseman lives with the knowledge, the guilt, that he is morally responsible for the execution of many criminals. He sacrificed his principles for political expediency. (William J. Wiseman, “Inventing lethal injection,” The Christian Century, 20-27 June 2001, pp. 6-7) I do not believe that I have the moral right to ask others to kill another person to prevent that person from committing additional crimes when at least one viable alternative exists, e.g., life in prison without parole. This belief mirrors Christian Just War Theory, which requires any potential war to satisfy a number of criteria, one of which is that war is truly the last resort, before waging war with the attendant use of lethal force is morally justifiable.

Third, society may impose the death penalty as retribution against the criminal for the crime committed. The gospels report in several places that Jesus taught his disciples, “Love your neighbor as yourself” (Matthew 5:43; 19:19; 22:39; Mark 12:31). Jesus’ teaching echoes the Torah (Leviticus 19:18) and the New Testament repeats it several times (Romans 13:9; Galatians 5:14; James 2:8). Pretending that Jesus thought that anyone involved in imposing the death sentence on him or in executing him acted out of love for him mocks the brutally cruel reality of his crucifixion. Similarly, no amount of thought or imagining allows me to construe legally executing a convicted criminal as loving that person.

Some death penalty proponents argue that executing the guilty individual somehow expiates, atones for, makes amends for, or compensates the victim or the victim’s loved ones. Executing the guilty, from this perspective, becomes an act of justice, if not love, for the victim or the victim’s loved ones. This entails, as with the first rationale for the death penalty, reducing the executed to a means to an end. In other words, the way to set the first wrong – the crime(s) that led to the imposition of the death penalty – right is a second wrong – the dehumanization of the criminal. Two wrongs never make a right.

Capital punishment is obviously painful. Its principal pain stems not from the method of execution, no matter how agonizing. Prematurely extinguishing a human life causes the real anguish of capital punishment. The executed criminal experiences that pain most intensely. The rest of us are diminished by the loss of a brother or sister and because we ourselves become a little less human every time our society executes one of its members. The time has come to declare loudly, emphatically, and decisively through our political process that capital punishment is inimical to who we believe God has called us to become. Capital punishment should end, regardless of constitutional issues, because capital punishment is morally wrong.

Saturday, September 8, 2012

False messiahs

Moon Sun-myung, the self-proclaimed messiah whom Jesus had supposedly asked to complete Jesus’ unfinished mission on earth, died on September 3, 2012, at the age of 92. Moon and his followers (aka “Moonies”) built the Unification Church during the heyday of Christianity’s explosive twentieth century growth in Korea, along with a globe-straddling business empire that includes seafood distribution, media, arms manufacturing, and real estate holdings. His Church claims 5-7 million members worldwide, but some ex-members and critics believe that 100,000 is a better estimate. Moon spent almost three years in a North Korean hard labor camp before U.N. troops liberated the facility in 1950, and later served thirteen months of an eighteen-month sentence in a U.S. prison for tax evasion. Although known for conducting mass blessings and marriage ceremonies in the 1970s and 1980s, Moon had seen his popularity wane in later years.

A conviction for income tax evasion seems incongruous with claims to be the Messiah. Jesus reminded people to pay to Caesar what was Caesar’s. Deceptive accounting bears more similarity to lying and theft (stealing from the government) than to evidence of being a messiah.

The word messiah denotes a leader or savior, especially of the Jewish nation. Jesus leads or points toward a path to life abundant; Moon appears to have led people along a path that benefited Moon and his family; they will inherit much of his business empire, presently intertwined with Unification Church holdings.

The sad saga of Moon and the thousands whom he duped and exploited highlights an issue with which many clergy struggle: building a community (or organization) that tries to center itself on God rather than the cleric. Personality cults tend to be narcissistic, exploitative, and short-lived. For example, the congregation that Robert Schuller built at the Crystal Cathedral fell apart after he retired and subsequently filed for bankruptcy.

One of my college professors, William Geoghegan, delighted in the phrase “the routinization of charisma.” He contended that charisma (grace, an encounter with the divine) lay at the core of all religious experience. To preserve and to spread that charisma, the founder (e.g., Jesus) or followers (e.g., the disciples) inevitably formed an organization. Doing so forced the charisma into structures and theological concepts that unintentionally and unavoidably destroyed the charisma they intended to preserve.

I have known relatively few people who chose to follow and affiliate with a false messiah like Moon. I have known many people affiliated with organizations in which routinization has stifled the charisma that gave the group its original impetus and power.

In the post-modern twenty-first century many ecclesial structures have become, in essence, false messiahs, i.e., they have lost touch with their original charisma. No wonder that so many today find the church a turn-off, claiming to be spiritual but not religious.

On the other hand, charisma without some form of community or organization is bereft of much of its transformative power and has a very short shelf life. Instead of aiming to create an institution for the ages (permanent buildings, endowments, etc.), spiritual people may magnify their social influence and transformative potential by emphasizing community (relationships) and structures focused on the present.

Wednesday, September 5, 2012

Milestones

Milestones mark the progress of a human life. Sometimes the milestone connotes the end or beginning of a chapter; other times, the milestone indicates a chapter division. Births, weddings, divorces, deaths, and major life changes are among the events and changes that often signify milestones.

Recently, I turned sixty. Birthdays that mark the end of a decade of life often have particular significance. Yet mine was far from traumatic. Because I retired almost seven years ago, completing six decades of life did not signal the arrival or the approach of retirement. But it did prompt some pleasant reflections about the nature of life and the meaning of happiness.

When I served on exchange with the Royal Navy in London for two years, people often said that Americans lived to work whereas they worked to live. I found that comment transformative. What do you want out of life? What do you want to contribute?

Economist Robert Skidelsky of the University of Warwick and philosopher Edward Skidelsky of the University of Exeter, in How Much is Enough?, contend that many people pursue the wrong goods, i.e., these people strive to attain things rather than to achieve a good life. The definition of the good life varies greatly from person to person. But, as I have repeatedly argued in this blog, more is not necessarily better. Jesus memorably illustrated this insight with his story of the affluent farmer who kept building new barns to store his ever-growing wealth, but who tragically died before enjoying that wealth.

The American economy is increasingly bifurcated into the haves and have-nots. The middle class, long America’s strength, shrinks with each passing year. The Skidelskys (father, the economist, and son, the philosopher) maintain that if the affluent could limit their desire to accumulate wealth and things, they would benefit, as would the have-nots, because companies would then need to hire more people to perform the work that the highly driven, highly compensated now do. The Skidelskys correctly recognize that the problem is not a lack of talent but a lack of economic opportunity.

Additionally, more expensive is not necessarily better. Money has a diminishing utility. Shoes are essential. Good shoes are functional and aesthetically pleasing. But good shoes do not have to cost thousands of dollars per pair. A pair priced at two thousand dollars (and I’ve seen lots of these in stores, especially in Paris) is unlikely to provide ten times as much satisfaction to the owner as a pair of two hundred dollar shoes. The same analysis applies to clothing, cars, houses, and most other purchases. The hidden cost of extravagantly expensive purchases is the ignored opportunity cost of time that could have been used in other ways.

With the passing years, I have noticed that my body is less capable and requires more recovery time. Thankfully, I do not suffer physical ailments but nonetheless experience the inexorable toll that time takes. The human body is an example of planned obsolescence. I am healthier and more fit than my parents were at this age but do recognize the onset of decline. Professional athletes generally retire before age forty because their bodies are less capable and require longer recovery times.

Planned obsolescence points to the inevitability of death and the importance of making the best of my remaining time, whether it is short or long. Steps to extend life (good diet, nutrition, exercise, sleep, preventive care, etc.) seem worthwhile, but are most effective when consistently practiced from an early age. I care for my body to live and to live well, rather than living to exercise, diet, etc. The body is a gift from God and, in Christian terms, the temple of the Holy Spirit.

Finally, the milestone of turning sixty prompted reflections about what I most cherish: relationships with those I love and who love me; my sense of self that emerges from my body, my self-awareness, and my linguistic capacity; my limited autonomy; and my creativity and aesthetic sense. In sum, these relationships express my spirit and connect me with others, self, and the world. Together, they point to my relationship with the One who is life itself.

What was your last milestone? What reflections, insights, or changes did it prompt?

Monday, September 3, 2012

Thoughts on grieving

Last week I attended a memorial service for a friend and former colleague. Sitting in the pew, participating in the liturgy, and reflecting on both the deceased and my grief, I wondered whether funerals and memorial services (and other customs that surround death) helpfully minister to the living. These grief rituals, with the possible exception of prayers for the deceased, can only benefit the living, not the deceased.

Christianity has long taught that all people – the dead and the living – are connected. This claim is inherently unverifiable. But if one posits that God's love is deeper, broader, more far-reaching than any imaginable divide, then God's love reasonably embraces both the living and the dead. God's love may itself connect the living and the dead, or perhaps God, motivated by that unfathomable love, has created some other connection between the living and the dead.

In any case, prayers for the dead by the living certainly do not hurt the dead and may provide some benefit. At a minimum, praying for the dead affords God an opportunity to comfort the person who offers the prayer. Furthermore, praying for the dead presupposes no particular understanding of life after death and can even allow God to move in the life of a person who disbelieves in life after death.

More broadly, do our grief rituals actually benefit the grieving? Grief rituals and associated customs include: funeral, memorial, and interment/burial services; wakes or visitations; burial, cremation, casket, etc.; flowers, memorial donations, funds, etc.; and obituaries. Grief rituals have three possible benefits: comfort for the grieving, bringing closure to a chapter of one’s life, and movement toward beginning a new chapter.

Having conducted a great many funerals and memorial services, and attended a number of others as one of the mourners, I can best answer that some of our grief rituals sometimes comfort some people, help some to bring closure to a chapter of their life, and enable some people to move toward a new beginning. In other words, our funerals and memorial services, with their interlocking beliefs and social and religious customs, receive a decidedly mixed report.

Here are five suggestions for improving grief rituals and customs.

First, the deceased can best honor the grieving by encouraging them to do what seems most likely to bring comfort, closure, and a new beginning. A person preplanning the grief rituals she/he desires upon death may satisfy that person’s need to exercise control but too often minimizes the value of grief rituals for the bereaved. In many circumstances, preplanning that fully involves those closest to a person affords an opportunity to discuss the reality of death, give thanks for shared lives, and select grief rituals that seem likely to assist the bereaved. Death is a natural part of life, yet often is the metaphorical elephant in the room that nobody really wants to discuss.

Second, every individual and every family is unique. Choose grief rituals that will aid you and those who grieve; ignore rituals and customs that feel wrong or likely to impose more emotional pain and spiritual stress than help and health. Freely adapt social customs to suit needs and preferences. For example, I remain astounded at the numbers of people who wish to conclude grief rituals with military honors. In most cases, playing taps completely undermines any salutary effect of celebrating the deceased’s life and religious affirmations. Winston Churchill famously remarked that he wanted reveille, not taps, played at his funeral.

Third, grief rituals may be as private or public as the bereaved find helpful. No requirement exists to make a public display of grief.

Fourth, money spent on grief rituals is often an extravagant waste. All coffins eventually return to the earth; bodies, even the best preserved, eventually return to the earth. Whatever life may follow death does not depend upon an expensive burial or embalming. The Egyptian pharaohs’ outrageously expensive mausoleums (i.e., the pyramids), elaborate entombments (i.e., with great wealth, servants, and everything else they thought helpful in another life), and careful embalming benefitted grave robbers and historians rather than the deceased. The modern American way of death, apart from being much cheaper and not requiring anyone else’s death, is often little better.

Fifth, the best religious liturgies and rites allow the bereaved ample flexibility for adaptation. Take full advantage of that flexibility, with the assistance of caring clergy, to create grief rituals best suited for the bereaved.

Thinking and talking about death, unless it becomes an obsession, provide a unique opportunity for personal growth along with motivation for valuing each living moment as a precious gift. Death is inevitable. So make the most of it!

Saturday, September 1, 2012

Franklin Graham, Jesus, and HIV/AIDS

Franklin Graham, Billy Graham’s son and the leader of Samaritan’s Purse (a large Christian charity), authored an opinion piece, “Jesus is a model on how the church should respond to HIV/AIDS,” in the Washington Post, July 23, 2012.

I found his opinions outrageous.

He began with this statement:

In many ways today, HIV/AIDS has the same stigmas as leprosy did in Bible times. Leprosy was considered a death sentence. Victims were considered unclean and shunned by their families and communities. Yet, Jesus reached out to them, touched them, loved them, and healed them. This is the perfect representation of how the church should respond to people living with HIV/AIDS.

Had he stopped at that point, I would not have found his opinions outrageous. Jesus does want the Church (which includes people with HIV/AIDS, something Graham does not seem to acknowledge) to reach out to people with HIV/AIDS, touch them, love them, and contribute to their healing. The analogy with leprosy is apt, but, like all analogies, it is imperfect. Medical advances now mean that, where treatment is available, HIV/AIDS is not necessarily a death sentence, even though a cure still eludes researchers.

Graham did not stop there, however. First, his organization does not provide condoms. People will have sex, including married heterosexual couples in which one partner has HIV/AIDS. Distributing condoms and promoting their use slows the rate of HIV/AIDS infection. Education is part of the answer, but only part.

Second, Graham’s essay does not address the issue of rape, a major cause of the spread of HIV/AIDS in sub-Saharan Africa. Speaking to Christians is insufficient. The Church must carry its message into the world, standing against rape, ministering to those with HIV/AIDS, and working to rectify the rampant injustices that rob so many sub-Saharan Africans of hope, dignity, and respect for the worth of others.

Third, Graham writes, “We have to take responsibility for our lives and the decisions we make. That starts with the facts. And the fact is any type of sexual relationship outside a committed marriage between one woman and one man puts you at risk for contracting the virus.” That’s a lie. Homosexual marriage no more puts one at risk than does heterosexual marriage.

Fourth, Graham’s comments appear to deemphasize the importance of the Christian community inclusively welcoming all of God's people, the healthy and the sick. The passing of the peace and Holy Communion afford the body of Christ dramatic opportunities to demonstrate God's radical welcome for all.

Anytime people or a society attempt to declare persons unclean and to shun them, the Church must stand with the unclean and the shunned. No one whom God has created is unclean, a lesson Peter learned in his vision in Acts 10.