
Principle Versus Principal: The Ethical Dimension Of Underwater Mortgages


Aug 6th, 2012 • Posted in: Commentary

by Ethics Newsline editor Carl Hausman

I teach a variety of college courses centering or touching on ethics and face the continual challenge of communicating the notion that the study of ethics is not just a matter of theory or a series of meditations on abstract issues.

Also, I need to somehow show that ethical decision-making isn’t as simple as consulting a rule book where the right and wrong answers are listed in the appropriate columns. (And, of course, justify my job in the process because such a book, if it existed, would obviate the need for ethics professors.)

IGE founder Dr. Rushworth Kidder recognized these challenges from the get-go and, early in the 1990s, developed categories of “right versus right” decision-making — ways to frame a question in realistic, everyday terms — with the realization that there may be compelling arguments on both sides of the issue. Dr. Kidder further codified four paradigms for weighing such dilemmas: justice versus mercy, individual versus community, long term versus short term, and truth versus loyalty.

I was reminded of these paradigms as I read the top stories in last week’s financial press and came across a dilemma that might be classified as “principle versus principal.”

Here’s the story: A U.S. government agency called the Federal Housing Finance Agency has declined to allow permanent reductions in principal on “underwater” mortgages (i.e., those on which the borrower owes more than the home is worth) that are backed by Fannie Mae and Freddie Mac, the giant quasi-governmental enterprises that prop up home loans.

Here’s the translation: The president and many in Congress want the federal government to provide bailout money for financially troubled homeowners who owe more than their houses are worth.

Treasury Secretary Tim Geithner backs a bailout, arguing that the White House plan would “provide much needed help to a significant number of troubled homeowners, help repair the nation’s housing market and result in a net benefit to taxpayers,” reports NBC News.

But Edward DeMarco, the Federal Housing Finance Agency head, won’t budge, insisting that there’s no evidence that the plan would stop enough foreclosures to be a plus on the balance sheet and arguing further that the program would encourage people who have taken on more mortgage than they can handle to default in order to reap the benefits of principal reduction.

The last part of the argument not only has an ethical base but is grounded in ethics terminology, invoking the phrase “moral hazard” — effectively rewarding reckless behavior by offering a backstop to those who get in too deep.

Moreover, the question of default itself has become part of a new ethics controversy because many underwater homeowners are engaging in what’s known as “strategic default,” failing to pay the mortgage even though they can do so, betting that the damage to their credit ratings will be less expensive in the long run than will the continued burden of a mortgage on a property now worth much less than the purchase price.

These ethical questions are part of a larger scenario: the unraveling of the global economy, partly due to the collapse in the housing market and the subsequent implosion of securities and other financial vehicles pegged to housing. And, again, it’s a scene that might be viewed as a morality play of sorts as we face a variety of right-versus-right scenarios. Here are just a few examples:

  • Justice versus Mercy: Do we extend a hand to homeowners in financial quicksand even though a justice-based view holds that they knew what they were getting into and gave their word — the most important commodity in a financial system that is glued together by trust?
  • Individual versus Community: In the case of strategic default, is it right to walk away from a debt because the calculus shows it will benefit you individually, even though widespread default is clearly corrosive to the climate of trust that undergirds the entire credit industry? Along the same lines, is it fair to shift individual financial burdens from those who can’t or won’t pay to the broader community of people who can?
  • Long Term versus Short Term: Some would argue that avoiding bailouts and backstops amounts to administering painful treatment to the economy out of concern for its long-term survival. While it is tempting to help those in need now, the argument goes, bailouts only delay the inevitable, allowing the problem to fester and grow until it’s incurable.
  • Truth versus Loyalty: While not a perfect corollary, note that the entire concept of bailouts of financial institutions is as much a political issue as a fiscal one. In the early days of the crisis, as documented in journalist Andrew Ross Sorkin’s excellent book Too Big to Fail: The Inside Story of How Wall Street and Washington Fought to Save the Financial System — and Themselves, Treasury officials found themselves weighing the choice of whether to pursue a no-bailout philosophy baked into their political stances or confront what they (in some cases) presumed to be the truth: Letting giant investment banks and related firms fail would produce an economic crisis that would trigger pandemic fiscal catastrophe.

So here we have an ethical case study that is neither theory nor abstract. The decision will in all likelihood affect anyone who owns a home, wants to own a home, pays taxes, or wants to avail themselves of consumer credit.

And while it’s clearly a fiscal issue based on complex projections and assumptions, it’s also an ethical issue because, for one thing, trust in the other party’s good intent is the commodity that keeps the financial system alive and the lubricant that allows its gears to turn.

So, what do you think? Given the current mortgage situation (explained in more detail in the news articles cited below) do you believe the government should offer a bailout to underwater mortgage holders in danger of default? Or should we stand on principle and refuse to reduce principal for those who knowingly incurred the financial obligations that got them in over their heads?

©2012 Institute for Global Ethics

 

Ethics Newsline® editor Carl Hausman is a journalism professor who has written three books on ethics and specializes in explanatory journalism.

For more information, see: American Banker, Aug. 3 — Bloomberg, Aug. 2 — NBC News, Aug. 1 — San Francisco Chronicle, July 31 — Forbes, July 31.

Questions or comments? Write to newsline@globalethics.org.

The School Bus and the Banality of Evil

Reprinted from Ethics Newsline

by Ethics Newsline editor Carl Hausman

By now, you are no doubt familiar with the story of the 68-year-old bus monitor in Greece, New York, who was taunted viciously by four seventh-graders in an incident captured on a student’s cell-phone. But in case you missed it, here’s an overview:

Karen Klein has spent decades as a bus driver and monitor in the town of Greece, a large (population about 100,000) suburb of Rochester. About three weeks ago, she was verbally set upon by four young men, who taunted her with vile ridicule of her weight, graphic threats involving weapons and breaking into her house, and even a sneering reference to the suicide of her son a decade ago.

Klein was reduced to tears but did not respond with anger to the taunting, which persisted for about 10 minutes.

The incident was captured on a student’s cell-phone and posted to YouTube, where it went viral. Something about the event touched nerves worldwide, and an internet-based fundraising effort intended to raise $50,000 for a vacation for Ms. Klein soon raised more than 10 times that amount.

Meanwhile, the ramifications of the event got even uglier, with death threats pouring in against the students and their families. Police appealed for calm, noting that death threats are a form of bullying, too. Late last week, as noted below in this week’s Ethics Newsline, the school district suspended the students for a year. Criminal charges were not brought, in part because of Ms. Klein’s reluctance to press them.

The story raised appropriate speculation about the repulsive actions of the young men who, even though they legally are children, obviously should know better. The story also caused some in the news business to scratch their heads over why, in a world full of repulsive actions mercilessly recited in the drumbeat of every day’s headlines, this incident captured international attention, headlines in major newspapers, and coverage by all major U.S. television networks.

Well, here’s what captured my attention: Unlike the vast majority of stories that flicker across my terminal each day, this one didn’t carry the dateline of an unfamiliar city or exotic war zone. It didn’t feature frightening caricatures of thundering menace, such as uniformed soldiers or gang members bristling with muscle and tattoos. Instead, it happened in the same town where I went to middle school. For all I know, it could have been on the identical bus route I took. And the perpetrators were children about the same age as one of my sons. For all I know, they could be the children of my classmates from a generation ago.

To co-opt a phrase that was applied famously in an entirely different context, the Greece bus incident drives home the “banality of evil.”

German political theorist Hannah Arendt used that terminology to back her theory that atrocities, including those committed by the Nazis, are committed not just by fanatics or psychotics, but by regular people who simply become acclimatized to the deplorable.

Various research projects have bolstered the idea that the complex dynamics of human interaction can send ethics and morals off the rails. The notorious Milgram experiments, conducted at Yale shortly after the trial of Adolf Eichmann (the subject of Arendt’s book Eichmann in Jerusalem: A Report on the Banality of Evil), demonstrated that ordinary volunteers were willing to administer what they believed were painful electric shocks to a subject if an authority figure assured them that he would assume responsibility for their actions.

About a decade later, the Stanford Prison Experiment demonstrated that students recruited to serve as “guards” in a mock prison could succumb to peer and supervisor pressure and begin brutalizing their captives.

We know that interactive social pressures can produce the most extraordinary changes in people. Even the bullied Karen Klein said as much when she told CNN that she does not believe her harassers were bad kids, deep down, “but when they get together, things happen.”

But the looming question is why things sometimes don’t happen. Other students on the bus chose not to participate (although none apparently took action to stop the harassment), and one felt moved to post the video.

Not all Germans were Nazis, and not all participants in the Milgram experiment delivered the shocks per their supervisor’s instructions.

One subject who refused to carry out instructions in the Milgram experiment was Joseph Dimow, an editor and columnist who attributed his lack of cooperation to his upbringing and education, which he said made him sympathetic to the oppressed and willing to challenge assumptions.

Is that one of the keys to unlocking the puzzle of how apparently ordinary children, described as “not bad” even by their victim, could lapse into such horrifying behavior?

Had they somehow missed the combination of upbringing and education that should have cued them to throw the off-switch as their behavior devolved?

What sort of training in ethics, critical thinking, or some other area could have prevented this incident? I don’t have the answer, but I’m hoping you do. Please weigh in using the comments box below.

©2012 Institute for Global Ethics

On the Internet, Nobody Knows You’re a Dog

by Ethics Newsline editor Carl Hausman

One of the most popular cartoons in the storied history of the New Yorker — depicting two canines sitting before a computer and praising the benefits of online anonymity — became a symbol of the liberating effects of donning an online cloak when it was published in the mid-1990s.

True: Anonymity offers a constellation of benefits both societal and practical.

For starters, it’s the latest in a long line of social lubricants. Masquerade balls, for example, became popular in the late Middle Ages and early Renaissance as a means of lowering social barriers. Elaborate court etiquette didn’t apply, dancers could mix and flirt without violating court protocol, and even the most shy and reticent could find relief from their inhibitions.

Chat rooms and online dating sites are pretty much a high-tech reincarnation of that premise, easing initial contact and minimizing the sting of personal rejection.

Second, anonymity has a profoundly egalitarian heritage. Mechanisms of communication have been among the first targets of tyranny for as long as media have existed. The Gutenberg press, for example, sparked not only a revolution in the sharing of ideas and the standardization of language, but also a new obsession with identifying — and occasionally torturing and murdering — the instigators of dissent. In Europe, and later in colonial America, strict licensing was imposed on the new printing press, accompanied by draconian and frequently gruesome retribution against those who used it as an instrument of protest.

Rebellious American Revolutionary intellectuals such as Thomas Paine published anonymous works, including the incendiary pamphlet Common Sense, with profound effect.

Lesser-known (at least in the United States) examples of the liberating power of anonymity persisted throughout many intellectual rebellions. In the Soviet Union of the 1950s and ’60s, the practice of Samizdat, which translates to “self-publishing,” allowed circulation of the works of dissident authors such as Boris Pasternak and Aleksandr Solzhenitsyn — writers who raised consciousness about the existence of the gulag and repression of religion.

So seriously did the Soviet Union take the threat of anonymous dissent that it required the licensing of typewriters, with the KGB retaining data on the individual typefaces of each machine so that anyone retyping works of protest could be tracked down and punished.

China, no fan of political dissent, currently requires internet users to register with local police departments, and the government frequently compels internet service providers to turn over the physical locations (generally easily traceable) of computer users who post messages of protest.

Without question, these modern incarnations of anonymous mass communication have propelled liberation in many parts of the world. Recent manifestations include nascent democracy in Myanmar/Burma, greater international awareness of human rights in China, and the domino effect of Arab Spring revolutions.

At the same time, anonymity has an ethical dark side.

It frees the commenter from responsibility for the comment both in terms of truth and motive. Anonymity on the internet produces the same freedom to harass as the telephone once offered the crank-caller in the days before caller ID. But now the scale of the potential damage is magnified far beyond the one-on-one trauma of heavy breathing or requests to check if one’s refrigerator is running.

Anonymous harassment, as well as harassment committed under a false identity, has become a modern technological scourge — bullying on digital steroids.

And anonymous postings on news organizations’ websites — once lauded as conduits for tips and expression that could revive the grand tradition of the Revolutionary pamphleteers — have become both a nuisance and a nightmare for the news industry.

Publications increasingly are banning the practice altogether, not only in response to the worthless, vile, and inane dreck that’s disgorged, but also because anonymous comments are used as tools for propaganda, spam, self-promotion, and vilification of political or ideological opponents. Anonymous comments also expose publications to legal jeopardy: lawsuits and subpoenas from those who feel they have been defamed and demand the identity of the poster.

Some news sites now require users to register with a traceable name, even though they may post anonymously. Others have developed tiered systems in which those who attach names to their comments receive more prominent placement. Still others have dumped the anonymous comment box altogether.

Still, isn’t there an advantage — even an obligation — for the online world to offer an outlet for comments that might damage the reputation of a business (or government) but expose wrongdoing? For ideas that buck conventional wisdom, the collective systems of belief that far too often are based on prejudice, intolerance, and the desire to retain power? For leaked information that a government may wall off as a threat to national security when it is simply inconvenient or embarrassing to the powers that be?

Where should society set the ethical boundaries of anonymity?

Actually, I don’t know. But I’m intending to exploit one of the real benefits of online comment boxes and ask you to do my work for me.

What do you think are the ethical limits of anonymity? Please post. And — ahem — do note our comment-box policies.

©2012 Institute for Global Ethics

Sensibility and Shoulder Surfing

by Ethics Newsline editor Carl Hausman

A recent story by the Associated Press set off a wave of indignation about the practice of prospective employers demanding social networking log-in information or simply asking job applicants to open their Facebook page while the interviewer “shoulder surfs” and looks for red flags.

That indignation worked its way up to the top levels of U.S. government when, last week, two congressmen introduced a bill that would ban such practices. In recent months, lawmakers in five states have introduced similar measures.

That’s how law often works: reactive and piecemeal, a finger stuck in the dike before the next leak appears. While this instance produced a bill in extraordinarily short order, it’s likely that the technology soon will mutate in another direction, requiring yet another law.

In many cases, these laws lag behind technological change for years or decades. In some U.S. jurisdictions, for example, there’s still no applicable law to punish someone who snoops with a video camera if the video contains no sound, because existing laws apply only to the wiretapping of audio conversations. Until recently, some online bullying cases were largely un-prosecutable because no laws had been written to prohibit actions that 15 or 20 years ago were inconceivable.

And that’s pretty much what’s happened with job-interview shoulder surfing. While no one knows the true extent of the practice (the AP story was essentially anecdotal), there isn’t any law against it yet, and the decision about the propriety of the practice falls into the realm of ethics.

Is it unethical? I say yes, but first let me make a case for the practice. Many years ago, I worked for a detective agency — not working on stakeouts in a fedora and trench coat, but sleuthing around the emerging internet and other public records repositories for due diligence research on high-level job applicants, prospective CEOs, and businesspeople seeking major investments. It was a good part-time job for me and many of my chronically underemployed (or simply underpaid) journalist friends, who suddenly found our skills in sifting data and following threads of information valued by a new and lucrative type of employer.

What we turned up was often astonishing: phony degrees, past securities fraud violations, assault charges, and so on. It wasn’t always easy to track this stuff down, and I began to feel some empathy with the person who was going to write the big check and gamble on the new person.

Additionally, there are some circumstances that would seem to warrant an intrusive search. A case in Maryland, for example, involved a corrections officer who was asked to turn over his Facebook password when he was applying to be recertified for his job. The state’s side of the story involves the claim that some applicants for corrections positions have allegiance to gangs. In the Maryland case, the state further claimed that out of about 2,600 applicants for corrections positions, eight had been disqualified in whole or in part because they were observed displaying gang symbols on photos on their Facebook pages.

Still, for the vast majority of job applicants, existing procedures, including standard criminal records checks, credit bureau reports, and so forth, certainly seem intrusive enough. The idea of an employer demanding log-in information seems abusive and a sidestep of existing law and procedure governing employment.

An interviewer questioning a job candidate cannot ask certain personal questions about marital status, family, and health. But by compelling a display of a Facebook page, it’s pretty easy to determine some of that: Relationship status often is displayed on the main profile, and Facebook pages certainly will be a likely venue for photos of applicants’ children if they have any.

Does an applicant “like” a local diabetes support group? Could that be a hidden determining factor in rejecting an applicant?

An acquaintance of mine once told me of a trick that one company used to identify which job applicants had children: The interviewer would walk candidates to their cars. What seemed like a mannerly gesture was, in reality, a scan for car seats in the back of the minivan.

Is an expedition into someone’s social media so different? We can’t always ascertain motives, but in a hypercompetitive search for employment, could the private details from within a social media user’s personal boundaries be a deciding, if tacit, factor in a negative decision, even if those details technically are supposed to be off limits?

I suppose anything meant for the general public — for example, any Facebook page that is completely unrestricted — is fair game. But once someone has set up boundaries and demarcations in the form of privacy settings, the calculus changes: compelling an applicant to turn over a password is no longer like reading a letter to the editor in a newspaper; it is like demanding someone’s private correspondence.

Finally, there’s the issue of balance of power. While prospective employers can’t force anyone to open their social media pages, how many of the increasingly desperate unemployed would have the nerve to decline such a request — or the scrambled judgment to assume that their refusal won’t make any difference?

That’s my view. What’s yours? Do you think it’s ethical for a prospective employer to demand to see an applicant’s password-protected social media pages? Use the comment boxes below — which, by the way, are open for anyone, including your current or prospective employer, to read.

©2012 Institute for Global Ethics

What Makes Us Unethical?

by Ethics Newsline editor Carl Hausman

The “resignation letter heard round the world” continued to echo last week, as many in the business community and elsewhere shook their heads over the scathing piece in the New York Times in which former Goldman Sachs executive Greg Smith pilloried what he called the “toxic” culture at the firm, blaming a lack of morality for his exit.

Details of the incident and the aftermath are included in this week’s Newsline run-down, but something that might benefit from a little expansion and exploration is the concept of a “toxic ethical culture.”

What is it, specifically, in the culture of an organization that makes ostensibly good people go bad?

There are those who probe such questions. Caveat: As a college professor, I read more research studies than I care to and know that studies, especially isolated polls and experiments, rarely “prove” anything and frequently are contradicted by subsequent research.

Having said that, inquiry into the roots of unethical behavior, along with some plain old eyeball research, does identify some conditions that can cause people to run off the ethics rails.

From what I can see, ethical cultures deteriorate when:

  • We Rationalize. Doctors, one survey contends, are more likely to say it’s all right to accept a gift if the question is phrased in a way that mentions the sacrifice they’ve endured in their career — as opposed to a straightforward query about whether it is acceptable to take a gift. Another survey purported to show that people who believe they have acted with admirable ethics in one area may feel enabled to act unethically in another; the study indicated that those who applaud themselves for buying “green” products were more likely to rationalize bad behavior in other areas in a sort of “moral balancing.”
  • We pay attention only to the “shalt nots,” not the “shalts.” It’s intoxicatingly easy to think that avoiding unethical actions is the same thing as acting ethically. But many deceptions and prevarications come about by not doing something. Some critics argue, for example, that scientists tend not to publish data that does not suit their purposes: a study showing that a particular drug did not perform well in a clinical trial may simply never be written up, and no law requires that every research project be submitted for publication. A fisherman with a six-inch-mesh net might conclude that all fish are bigger than six inches because he’ll never see the ones that got away. The same might be said of public knowledge about many aspects of science and technology if we are shown only what researchers want us to see. The analogy applies to human behavior as well. Some researchers, for example, claim that anti-bullying programs are largely ineffective at best because they are simply a list of proscriptions, not affirmative training in how we shall treat each other.
  • We confuse ethics with rules. Generally, something is either against the rules or it isn’t. But when we reduce ethical behavior to whatever isn’t specifically prohibited, the meaning is perverted. As Rushworth Kidder often noted, among the better definitions of ethics are “obedience to the unenforceable” and “what you do when no one is looking.” Note that some of the first dominoes in the global economic crisis fell because of actions that were reckless and irresponsible but not specifically illegal, probably because no one had thought to write a rule against such doltish behavior. A side effect of the “ethics equal rules” syndrome is that the mere existence of a “code” is taken as proof of an organization’s ethical behavior. Enron, it might be noted, is said to have had an ethics code totaling 64 pages.
  • We feel superior or privileged. This is a fairly obvious one: Some people clearly believe that standards of behavior apply only to the little people. And while it’s easy to read too much into this, two interesting recent studies purport to show that rich people tend to be ruder drivers and that people who identify themselves as upper-class tend to lie and cheat because they see selfish and greedy behavior as socially acceptable.
  • Authority figures sanction our actions. This tendency was demonstrated graphically by the studies conducted by Yale University psychologist Stanley Milgram in the 1960s. Milgram found that ordinary people could be prompted to deliver what they were led to believe were agonizing electrical shocks to an experimental subject when ordered to do so by a stern and authoritative professor. In 2010, a controversial French television show basically created the same scenario, finding that people were disturbingly compliant when hounded by an authority figure and led to believe they were giving shocks to participants on a television game show.

I’m sure there are many other contributors to a toxic ethical culture; this list is clearly not exhaustive. I’d like to continue this collection but would be grateful if you’d help me do the compiling! Please use the comment area below to explore the question, What are other reasons that cause ethical cultures to go sour?

©2012 Institute for Global Ethics


The Shut-Up Switch

by Ethics Newsline editor Carl Hausman

The hottest item on Google’s list of trending topics late last week was a report from a Philadelphia television station about a man who admitted he used a cell-phone jamming gizmo to block conversations of other riders on his bus.

He told reporters that his improvised (and, it should be noted, illegal) device puts a lid on loud talkers and rude behavior. He also admits taking the law into his own hands, but declares, “Quite frankly, I’m proud of it.”

The Philadelphia bus story was one of several recent items highlighting the fact that technology can be used to stifle as well as stimulate communication. Among the others:

  • In Japan, researchers have developed an instrument that essentially stuns someone into silence by beaming the speaker’s words back at him a split-second later — a delay that short-circuits the brain’s auditory pathways and causes the person to stutter to a halt. The “speech jammer,” according to some reports, is seen as an ideal method to calm a noisy classroom or a roiling demonstration.
  • In Washington, the Federal Communications Commission announced that it is seeking public input on the proper circumstances under which the government can cut off wireless service to protect the public. The inquiry comes after San Francisco transit authorities blocked cell and internet access in some stations last August in order to quell expected organized protests.
  • In London, police have considered asking Parliament for laws that would shut down Twitter feeds in the event that authorities discover that the microblogging service is being used to form crowds and foment a riot.
  • In China, there’s been a crackdown on sales of software designed to circumvent the government’s content filters on the internet — filters that routinely excise material related to democracy and protest. Some speculate that the move is motivated by the government’s fears of an Arab Spring-type political movement fueled by social media.

These developments represent a new twist to a very old ethical problem: the balance between the rights of the individual to expression and the presumed good of the majority to enjoy safety and security.

The Gutenberg press, for example, stirred up all sorts of draconian licensing laws when it became apparent that the machine could interfere with public order by inciting dissent. Several centuries later, pamphleteers found themselves in the government’s crosshairs in the early days of World War I, when someone advocating a munitions strike was charged with endangering troops abroad.

It was that case, in fact, that prompted U.S. Supreme Court justice Oliver Wendell Holmes to reason that the right to expression stopped at the point where words became weapons, such as falsely shouting “fire” in a crowded theater. The intent of such words was to harm, not to inform or express, reasoned Holmes, thus placing them out of the reach of First Amendment protections.

You could argue, and I might agree, that the issue of when it is right to ban communication is more salient than ever because the technology that we expect to enable communication can more effectively inhibit it as well.

For example, pamphleteers throughout history have been highly successful agitators because it is so difficult to root out every single printing press. Radio has for many years been an efficient medium for propaganda and counter-propaganda because portable sets were so common and radio frequencies so diverse that it was impossible to jam them all.

But electronic networks (and neural pathways, apparently) are quite vulnerable to certain types of throttling. Select portions of the United States’ internet backbone could, in fact, be shut down with a single switch.

Sound farfetched? I’m no technical expert, but presumably the U.S. government is, and there is currently a proposal regaining momentum in Congress to provide the president with a “kill switch” for various sectors of the internet.

The Lieberman/Collins/Rockefeller/Feinstein cybersecurity bill appears to be on the fast track for a vote by the full Senate. The measure’s stated purpose is to protect the U.S. banking systems from cyber-attack and to prevent cyber-terrorists from taking control of critical structures such as dams and power plants. Various civil libertarians, however, worry that such kill switch measures could be used to inhibit free speech and other liberties.

How far should the government ethically and legally be able to go in order to prevent insurrection and danger to the common good? Should it be permissible to cut cell-phone communications to short-circuit a riot? Should the president have a kill switch for the internet in the event of a cyber-attack?

I don’t have the answer, but you might. Please post your comments.

Reflections on the Passing of Ethicist and Newsman Rush Kidder

by Ethics Newsline editor Carl Hausman

I began my association with Rushworth Kidder about 20 years ago. As an ex-pat from New York City, I was delighted to start work in idyllic Camden, Maine, where Rush had set up shop in a warren of cluttered offices furnished with dented metal desks.

The small staff — I think it was four or five people then — would sit with Rush for hours and bat around ideas like cat toys. Through this process, he developed many creative approaches and innovations that became part of the foundation of the Institute’s work.

Interestingly, the assumptions that Rush posited in the early days have proved not only durable but prescient. Astonishingly, they are even more salient today than they were two decades ago. They also have undoubtedly gained a greater measure of public acceptance.

Rush would have been the first to caution that a statistical linkage does not necessarily translate to causation, but I’m sure the Institute played some role in raising awareness of the ideas that have become more or less mainstream today.

Among them:

  • Ethics and rules are not synonymous. In the early days, we battled the assumption that dealing with ethical issues was simply a matter of drawing up a detailed code or set of regulations that would enforce good behavior. That view fell out of favor for many reasons, but right atop the list is the global financial crisis that metastasized after a glut of greed propelled by transactions that — while reckless — were often within the boundaries of the law.
  • Technology leverages the damage made possible by unethical behavior. Rush’s earliest example of this principle coalesced when he became one of the first Western journalists to cover the 1986 Chernobyl disaster. The tragedy was, Rush discovered, not only a technical meltdown but also a moral one: Irresponsible workers had overridden alarms and intentionally jammed safety devices in order to complete an ill-advised set of tests as quickly as possible. The point he made at the time was that a handful of amoral people in charge of a powerful technological device could create disaster unparalleled in simpler times. The message didn’t always hit home among the unimaginative because nuclear reactors were not household items. But today, computers and internet access are, and the pages of Newsline are replete with examples of how digital technology magnifies the threats posed by someone who intends to bully, defame, or inflame.
  • There are global constants in ethics. In the early 1990s, we often would encounter critics who were generally well-meaning but baffled by the contention that there could be universal ethical norms. “It’s all relative,” some said. But research and a growing body of empirical evidence — observable in each day’s news — show otherwise. For two decades, Institute publications, first in print and then online, have focused on how corruption corrodes the welds of confidence necessary to bond commerce and community — in Illinois, India, and everywhere in between. Stories we’ve carried also have documented universal reverence for equity, human rights, privacy, respect for the environment, and fundamental dignity. The names and the datelines are different and those attributes are clearly valued in different proportions by whatever government or cultural institution happens to be in power, but it’s clear that globalization applies also to ethics.
  • Ethical reasoning can be learned. Ethics seminars were not unheard of before the Institute began offering them, but Rush changed the modus operandi. Instead of lists of prohibitions, instead of lectures intended to somehow browbeat “reform” into the listener — strategies that generally have a dismal rate of success — Rush showed how ethical behavior can arise from an organic series of decisions balancing attributes such as justice versus mercy or the rights of the individual versus the good of society. He developed a system that teaches not a series of dictates and recipes, but skills necessary to adapt, improvise, and nimbly navigate the ethical obstacle course of everyday life.

These are just a few of the many facets of Rush Kidder’s legacy, of course, and in the space of this column there’s not room for much more. But to paraphrase an old newsroom saying, at least it was brief and we got our facts straight. Rush would have liked it that way.

©2012 Institute for Global Ethics


Privacy, Profit, and Purpose

by Ethics Newsline editor Carl Hausman

As Facebook readies itself to become a publicly traded corporation, it faces the prospect of unrelenting pressure to turn a quarterly profit. At the same time, it confronts close scrutiny from privacy groups and politicians over how it uses the massive troves of data that it collects from its 845 million users, as the New York Times reports this week.

Google, which already has gone public, is facing a backlash over its plans to combine data from its various services in order to wring more money out of user information. Essentially, Google wants to begin merging information from its search engine, YouTube views, and keywords identified in email to create more specific and profitable profiles. Attorneys general from a number of states have taken issue with this plan, and many privacy watchdog groups are crying foul.

From an ethical point of view, these types of privacy dilemmas are interesting because the issues created by advancing technology usually aren’t centered so much on the invasion of privacy per se but rather on the “repurposing” of information.

Most of us are willing, for example, to post that we “like” a particular product or service. But it has raised eyebrows — and some hackles — when that “like” information is not only used to steer appropriate ads our way but also repurposed as advertising itself, with our “like” nod of approval and our photograph inserted into an ad that is circulated to our virtual “friends.”

Repurposed information always made us a little uneasy, even before the widespread availability of computer databases. When I volunteered at a museum in the early 1980s, I was surprised to learn that the museum sold its member list to companies that were advertising products that, for whatever reason, were deemed to be attractive to museum members. (I learned this from an angry member who had deduced the source of some unwanted sales calls.) The museum also sold its list to other museums, apparently because a member of one is more likely to be sold on a membership to another.

The member’s complaint, I thought, was valid. Paraphrased: I joined a museum. I didn’t sign up to have my name and address sold to salespeople. I do, though, remember a verbatim quote from his harangue: “I want to deal with people I can trust.”

As my full-time job was, and has always been, journalism, I repurposed that complaint into articles, books, and presentations that became something of a cottage industry about information ethics. It’s a durable field because the advent of the personal computer and of readily available programs to mix and match data reliably generates controversy.

In the 1990s, when computers and database software went mainstream, I began to see how repurposed information could be put to unexpectedly sinister uses. For example, an enterprising database programmer discovered that he easily could collect public-domain data on anyone who had ever sued a landlord. And for a fee — paid by a landlord — he would search his database and see if a prospective tenant was on it. While some landlords might have appreciated the repurposed information, it should be noted that there was no mention of the disposition of the suits or if they were justified.

A decade or so later, the techniques became a little more cringe inducing. Databases were established that identified people who had filed malpractice suits; these databases were intended to be used by doctors to screen prospective patients. While this may have had minimal impact on people living in urban areas, it had the potential to create considerable hardship for, say, women who lived in rural areas who needed an obstetrician.

Both practices eventually were declared illegal, at least in the states where the incidents achieved their first notoriety, but it took many years for the law to catch up to the technology.

The lag between abuse and judicial remedy can be long and even indefinite. We still don’t have definitive case law on conundrums such as:

  • How your E-ZPass data can be used to monitor your movements
  • The extent to which your driver’s license can be used as an instrument of social control (denying renewal of a driver’s license to someone who owes child support or library fines, for example)
  • If and how your movements, as tracked by a GPS in your smartphone, can be used in advertising to attract you to the coffee shop a block away

Today, the level of unease about privacy has reached a level unequalled in the three decades or so that I’ve been covering the issue. Regulation could be accelerated and it could be heavy-handed, making it a double-edged weapon. As much as I cherish my privacy, I also like the power of my Gmail account, the stream of news tailored to my personal surfing habits, the facility to compensate for having a terrible sense of direction by seeing my destination on Google Street View, and the ability to reconnect with 240 high school Facebook “friends,” which is probably about twice the number of physical friends I actually had back then.

It may not even require legislation to tamp down the positive aspects of new media. Consumers may soon vote with their feet, fleeing a cyberspace Wild West where they fear ambush from beyond the next hill.

Why could regulation and public disapproval stanch our electronic lifelines? There are two reasons. One is obvious: People are worried about repurposed and recombined data stripping their privacy and being fused into a fissionable mass of deeply personal information that turns their private lives into public commodities.

But the other and perhaps larger reason is a question of ethics: Companies trading in personal information, particularly Facebook, have subscribed for too long to the old maxim that it’s easier to apologize than to ask permission. Virtually every week we see stories (many of them covered in Newsline) about major online firms that have been forced (usually under the threat of legislation, legal action, or consumer outrage) to back away from some subterranean overreaching centered on user information.

So while I hope I don’t have my electronic tentacles cut by regulation and while I hope that I don’t have to seriously consider cutting back or ending altogether my activities on various Google services or in social media, either is a possibility.

I just want to deal with people I can trust.

Seek and You Shall Be Found

Feb 21st, 2012 • Posted in: Commentary

by Ethics Newsline editor Carl Hausman

Here’s a scenario: Candidate A, a Republican, is under attack by Candidate B, a Democrat. Candidate B is claiming that Candidate A will cut funding for mammograms. The attack is aimed at diverting support from Republican women, viewed as a vulnerable segment of Candidate A’s likely voter pool. Candidate A counterattacks by distributing a video ad that describes his mother’s ordeal with breast cancer and accuses his opponent of using scare tactics.

The wrinkle: It’s an internet pop-up video ad, and it appears only on the screens of women identified as Republicans who have searched for information about breast cancer.

If this scenario makes you uneasy about the mixture of politics and internet data collection as well as concerned about what might happen when such a tactic becomes possible, I must note that your concerns are so three years ago. What I described is an actual ad used by then-candidate Chris Christie, a Republican, to rebut campaign claims by then-incumbent New Jersey Gov. Jon Corzine, a Democrat.

As reporter Tanzina Vega recounted in an excellent article in the February 20 New York Times, “microtargeted” ads are part of a revolution that sees digital advertising currently accounting for about 15 percent of total campaign spending. The technique is being utilized by the current crop of GOP presidential candidates, including Mitt Romney, whose campaign produced one ad for recipients whom statistical analysts determined were not yet aligned with a candidate (an image ad, designed to stress Romney’s likeability and family orientation) and another aimed at likely supporters (crafted with a much more strident message and urging them to be certain to go to the polls).

There are two major technologies behind microtargeting, both highly developed but generally not blinking too brightly on the public’s radar.

First is search and keyword analytics, the process of tracking a surfer’s searches and vocabulary and serving up ads based on what the search engine’s algorithm determines are the user’s exploitable interests.

Try it yourself: Use Google’s search engine to explore the word “bankruptcy.” You’ll get search results, all right, but also a collection of (clearly identified) paid ads for law firms and other agencies trying to sell you services related to bankruptcy resolution.

You’ll notice the same phenomenon in the ads that appear next to your email in many web-based email programs. Key words are identified and likely ads summoned based on an analysis of your messages’ content. (The technology isn’t perfect: When I was a member of a local municipal board and made an unpopular decision, I received a heated email from a constituent who accused me of being a “wolf in sheep’s clothing,” a message charmingly accompanied by ads for wool-washing detergents.)

Second is the science of predictive analytics. Number-crunchers are becoming expert at prognosticating future habits from past actions and can uncannily utilize aggregated data, such as purchase habits, to assess your buying predilections. Do purchases on your store card indicate that you’ve bought a pregnancy test and vitamin supplements? Be prepared for an onslaught of ads for diapers and baby food, because you are an ideal customer — reachable and persuadable before you actually start shopping for the product.
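The keyword half of this machinery is simple enough to sketch in a few lines of Python. To be clear, this is a toy illustration only: the trigger words, ad names, and matching rule below are all invented for this column, and no real vendor’s system works this naively.

```python
# Toy sketch of keyword-based ad matching: scan a message for trigger
# words and return the ads tied to each hit. The inventory below is
# invented for illustration.

AD_INVENTORY = {
    "bankruptcy": ["Acme Debt Resolution", "Smith & Jones, Attorneys"],
    "pregnancy test": ["BrightStart Diapers", "TinyBites Baby Food"],
    "sheep": ["SoftWash Wool Detergent"],  # cf. the email anecdote above
}

def match_ads(message: str) -> list[str]:
    """Return the ads whose trigger keyword appears in the message."""
    text = message.lower()
    hits = []
    for keyword, ads in AD_INVENTORY.items():
        if keyword in text:  # naive substring match; no context or intent
            hits.extend(ads)
    return hits

# The "wolf in sheep's clothing" email from the anecdote would trip the
# "sheep" trigger and summon the wool-detergent ads.
print(match_ads("You are a wolf in sheep's clothing!"))
```

Real systems layer statistical models, ad auctions, and composite user profiles on top of this crude matching, which is precisely where the ethical questions about repurposing begin.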

Discussion of the ethical implications of data collection is nothing new; Newsline generally features a story each week on online privacy. But the emerging trend of microtargeted political advertising raises three unique ethical questions:

  1. Is it right for a candidate to fragment a political message into so many pieces that it essentially can tell anyone what they want to see and hear (at least according to the algorithms)? We’re all aware of the scenarios popular in political comedies in which the candidate promises one thing at the first whistle-stop but makes precisely the opposite promise 50 miles down the tracks. On occasion, an enterprising reporter along for the whole ride would catch the double-talker in the act, but it’s unclear if the ephemeral nature of pop-up ads lends them to reliable scrutiny for consistency. Politics fashioned in an echo chamber is assuredly not a promising addition to enlightened political discourse.
  2. Is there moral justification for the intrusive mining of data and coupling the process to persuading voters? It’s one thing to hawk small-breed dog food to someone who buys a Pekingese at the pet store and discloses the information via a store “rewards” card. It’s perhaps quite another to use voter registration information coupled with profile data based on web pages viewed, publications read, and charitable contributions made. One can make a reasoned argument that public policy decisions have an eminently more profound implication than consumer purchases and ought not to be treated as purely commercial commodities.
  3. Does targeted identification of valuable or malleable voters exclude some from the process? Vega’s piece points out that some critics contend that microtargeted advertising based on composite data could exclude those deemed to be members of a less desirable or less persuadable demographic, effectively “redlining” certain groups out of a portion of the political process.

As is often the case with technological innovation, the technology gallops ahead while the law lags behind, and even if the law seemingly catches up, the technological gazelle usually has darted in an entirely different direction. Ethics, responsible judgment, and informed scrutiny, therefore, are probably the only viable tools for heading off what could be a cyber-unbalancing of an electoral process that already is badly out of kilter.

©2012 Institute for Global Ethics


Anthony Weiner and the Consequences of Consequentialism

by Ethics Newsline editor Carl Hausman

There are very few things left uncovered (literally) in the sad unraveling of Rep. Anthony Weiner — except possibly the mechanics of our ethical justification of our interest in the story. (“Our” meaning we, the public, and our surrogates, the media.)

A case like Weiner’s is clearly catnip to the media and it probably should be. The apparent derailment of a public official seems to fit all of the existing definitions of news that have existed since the founding of the institution in colonial America — and an important function of the news business in the eyes of the framers of the Constitution was scrutiny of the government and the people who comprise it.

But tucked into an assumption like the one I just made is the consequentialist flag that we wave when we say, “Normally, we wouldn’t be interested in this sort of thing, except….”

Consequentialist arguments are often semantically built as above, resting on the foundation that an exception to the normal way of doing things will produce an outcome that justifies the exception. They are powerful because at the end of the debate they often trump the counter-view by exposing an inherent inconsistency in the framework of the opposing proposition. The non-consequentialist, Kantian-type argument — that we should stick to the rules because we can’t predict consequences — is really a consequentialist argument once you peel away the top layers.

After all, by predicting that you can’t predict a consequence, I am predicting a consequence.

In any event, consequentialism in the coverage of politicians’ sexual escapades has always been the primary justification for extraordinary excursions into what would normally be private territory. In modern history, the consequentialist argument went on steroids in the 1988 U.S. presidential election, when the Washington Post’s Paul Taylor famously popped the “have you committed adultery?” question on candidate Gary Hart.

Hart not only denied being adulterous but challenged reporters to follow him around and check his claim, which of course they did, and the rest is history.

The most salient result was not Hart’s withdrawal from a race he might have won, but the rewriting of the unofficial rulebook to make the adultery question fair game.

Indeed, the consequentialism invoked was a convincing argument: We showed that Gary Hart possesses extraordinarily bad judgment: He lied and challenged us to catch him in the lie and then behaved recklessly. Is this the type of person we want leading the free world? Ergo, the ‘have you committed adultery?’ question is a public service.

The media-savvy Bill Clinton, no stranger to scandal, devised a way to circumvent the Hart dilemma by telling the truth, sort of. In an interview with “60 Minutes,” he admitted past “problems” in his marriage but provided no further specifics. In effect, he inoculated himself by giving himself a small dose of the scandal virus, waited for the malaise to pass, and won two terms (during which he suffered a relapse).

In the Weiner case, there were other factors at play in the construction of the consequentialist argument for coverage of all of the details. His techno-sexual derangement probably wasn’t enough to unseat him, at least via the details that initially emerged; he was, after all, not the leader of the free world and, as his supporters (initially) claimed, he had a good record on issues related to the welfare of his district, making his digital dalliances irrelevant. Rather, it was the fact that he clearly and artlessly lied about it that propped up the consequentialist proposition that someone who lies so brazenly can’t be trusted, and if he lies about one thing he’ll lie about everything, therefore our scrutiny was justified….

And here’s another reason why consequentialist ethical arguments are so powerful: Retrospective arguments are easy to make because you get to shoot the arrow into the wall first and paint the bull’s eye around it later. But such simple cause-and-effect reasoning circumvents some of the nuances of the issue, including whether all aspects of a public person’s life are legitimately public business, and whether a public official is obliged to copy a page from St. Augustine and fill 13 books with his confessions.

Is lying an automatic disqualification for public life? That’s a complex ethical question obscured by the extremes of the Weiner case. Some would argue, for example, that it’s acceptable to lie under certain circumstances when confronted with a gratuitous threat — an intrusion that is unjustified to begin with. Others would contend that a “no comment” or a Clinton-esque prevarication (“that depends on what the meaning of ‘is’ is”) is just as bad as a lie because it serves to divert attention and inquiry from a matter of legitimate public concern.

My point is assuredly not to defend Weiner. Nor am I saying that the whole sorry mess doesn’t justify media coverage; it clearly does, and I am just as clearly covering it right now.

What I do want to posit is the notion that it wouldn’t hurt to take a close look at the consequentialist arguments we employ when negating a public official’s desire for certain boundaries in his or her private life.

The main argument deployed by Kantian non-consequentialists is that you can invent almost any rationale to justify an action, as was the case in some tangents of the Weiner story. Disclosure of his wife’s pregnancy became a matter of public interest because it was marginally part of the story. A young woman who was apparently an innocent bystander — she was, she says, sent one of Weiner’s digital images by mistake — became the focus of full-tilt media scrutiny because she was “part of the story.”

There is almost literally no end to the “part of the story” thread if you get to make up the rationale as you go along or, even more conveniently, after the fact. That’s not always a bad thing; personally, I think Gary Hart’s judgment is suspect and that Anthony Weiner is unfit for Congress and possibly mentally unwell. Both are unconvincing victims because they brought much of the scrutiny on themselves — Hart by challenging the media and Weiner by using an obviously insecure medium to transmit messages that would be inappropriate in any context (although it does raise the interesting question of whether boorish behavior online is any different in substance from similar communication by telephone or in person at a bar).

At the same time, I know many serious-minded and capable men and women who have avoided public service — even county and local elective and appointed offices — out of the lurking fear that they could not survive living in a glass house when anyone who wants to throw a stone can confect a convenient excuse for doing so.

I realize this isn’t the first time the question has been asked, but where do we draw the line between private lives and public interest? What are the main consequentialist factors in an action that disqualifies someone from elected office? Is there a calculus we can use to justify an intrusion we normally wouldn’t make into what we normally would consider a private aspect of life?

I’d like to hear your thoughts. What’s your formula for finishing the statement, “Normally we wouldn’t publish details of someone’s sexual indiscretions, except….”