Defending Wretched Excess

Originally Published Oct 1st, 2012 in Ethics Newsline • Posted in: Commentary

by Ethics Newsline editor Carl Hausman

Last week, I addressed a group of international journalists who were touring the United States as part of an educational outreach program. The central question on the minds of a couple of them seemed to be why the United States tolerates — and seemingly condones — media-based dissemination of vicious opinion.

The topic, of course, was the violence related to the Innocence of Muslims film, a slapped-together propaganda piece ridiculing the Prophet Muhammad. Why, our guests wondered, didn’t Google, the corporate parent of YouTube, simply take the piece down? And when Google refused, as it did, why didn’t the government force YouTube to pull the video?

While one journalist seemed in full accord with protection of speech and press in general, she asked, in so many words, why the United States couldn’t be reasonable: defend free speech when it is meaningful, but let the hate-mongers fend for themselves.

There’s no simple answer to that. We do draw the line at many types of expression, including inciting riot, hate speech, and child pornography. But, in general, it is safe to say that the United States historically defends excess, even wretched excess, because the framers of the Constitution and those who have interpreted it have fretted consistently about the slippery slope of suppression.

It’s hard to overstate the importance of free expression in the American ethic. The Declaration of Independence could not have been written, at least not in the form in which it emerged, without the rise of uncensored mass media.

The printing press survived many skirmishes with the kings and queens of the era who saw it (correctly) as a threat to the established order and imposed harsh licensing laws. Potential penalties for publishing unauthorized material included cutting off the right hand of the offending printer.

The ruling class, it turned out, was exactly right in its fear of the democratizing effects of the printing press. The technology enabled the circulation of philosophical treatises read and admired by the founders, especially Jefferson, who absorbed tracts affirming the existence of basic human rights.

Moreover, the output of the press caused a circular erosion of the power of totalitarian governments because the availability of printed documents spurred greater literacy, which in turn spread the ability to access subversive ideas.

Much of the outrage of American colonists was directed at censorship, and publishers such as Ben Franklin risked the wrath of the crown by publishing without official approval. (And the crown did not mess around when it came to revolution: The penalty for treason was death by public disembowelment.)

After the Revolutionary War, it became obvious that even a people who hated government had to have one, and the founders crafted a Constitution that severely limited the power of elected and appointed officials. Even this careful balancing act didn’t have the full confidence of the citizens who had to ratify the document, so a set of amendments was added to protect against abuses such as unlawful search and seizure, restriction of religious practice, and — perhaps most famously — suppression of speech and of the press.

The First Amendment as written seems to offer blanket protection for speech and press and, as such, it proved impractical and more or less unenforceable for more than a century after its ratification. Under the Adams administration, for example, laws were passed that made it a crime to criticize the government. Lincoln shut down newspapers opposed to his Civil War policies and jailed editors.

It wasn’t until World War I — the United States’ entry into what for some was a mystifying foreign war, with criticism of that war spread by an increasingly sophisticated mass media — that the Supreme Court was called on to make sense of a Constitution that said Congress (and, by extension, any lower arm of government) “shall make no law … abridging the freedom of speech, or of the press,” while at the same time confronting the reality that unbridled speech could pose dangers to security and safety.

In cases involving anti-war protests, Supreme Court justice Oliver Wendell Holmes devised a calculus that elegantly reconciled the conflict between Constitutional guarantees and wartime realities — the “clear and present danger” doctrine, the theory that falsely shouting “Fire!” in a crowded theater is not protected expression because it’s not expression at all; rather, the words are like weapons — blunt instruments designed to cause harm — uttered in a situation in which there is no time to discuss their propriety. Expression and advocacy thus were covered neatly under the First Amendment umbrella; fighting words, betrayals of allied troop movements, and other dangerous forms of expression with no redeeming merit were left out in the rain.

In the 1960s, protections for free expression were broadened further. In the 1964 Times v. Sullivan case, the court ruled that public officials (and, under later rulings, public figures) could not successfully sue for libel unless they could prove not only that the defamatory statement was false but that it was made with “actual malice,” that is, with knowledge of its falsity or with reckless disregard for the truth. And in 1969, the High Court further narrowed the “clear and present danger” test to the so-called Brandenburg test, under which even ugly advocacy remains protected unless it is directed to inciting imminent lawless action and is likely to produce it.

The Brandenburg case, like the Innocence of Muslims film, involved ugly speech — in that case, the ranting of a KKK leader. But it affirmed the notion that the First Amendment can’t be applied in only the easy cases.

By the Brandenburg test, still the benchmark today, the YouTube film seems to fall under the First Amendment umbrella, according to most analyses, because it does not seek directly to incite violence and the imminence of riots halfway around the world could not reasonably have been foreseen.

So, in keeping with that long trend of free-speech protection, I argued to the visiting journalists that giving the benefit of the doubt to dubious advocacy makes us swallow hard, but that, in the long run, the benefits outweigh the drawbacks.

Writing for a unanimous Supreme Court in a 1940 case involving vituperative speech against the Catholic religion, justice Owen Roberts held that even though the expression was offensive, the right to be boorish is a net benefit in the long run.

“In spite of the probability of excesses and abuses,” read the ruling, “these liberties are, in the long view, essential to enlightened opinion and right conduct on the part of citizens of a democracy.”

To put it another way, and to paraphrase First Amendment scholar Charles Haynes, laws that restrict freedom of speech may — today — protect your sensibilities. But tomorrow, they may be used to censor your beliefs.

©2012 Institute for Global Ethics

Underwater Ethics and Education

Originally posted in Ethics Newsline, Sep 4th, 2012 • Posted in: Commentary

by Carl Hausman

From the “just when you thought it couldn’t get any worse” file: Student loan debt, which totals about $1 trillion in the United States alone, is now larger than all U.S. credit card debt.

And like all debt, student loan obligations have the potential not only to be a personal debt bomb, destroying individual finances, but also to spread financial collateral damage throughout the entire economy by stifling credit and spending.

The figures are jaw-dropping:

  • More than three million households owe at least $50,000 in student debt, according to an analysis cited by the Chicago Tribune, which quotes Illinois attorney general Lisa Madigan characterizing the debt crisis as “a threat to our country’s financial stability … [with students] on track to become part of a generation burdened by debilitating debt, limited career prospects and therefore long-term financial insecurity.”
  • Ninety-six percent of students in private, for-profit colleges take out some sort of student loan, says a Congressional report. Fifty-six percent of students at private nonprofit colleges take out student loans, and 48 percent of enrollees at public nonprofit colleges do likewise.
  • About two-thirds of recent graduates have more than $25,000 in student-loan debt.
  • About 9 percent of those who have taken out student loans have defaulted, notes Forbes.
  • The burden is not confined to the young, who presumably have decades to sort out their financial woes: The U.S. Treasury Department says that more than 100,000 Social Security retirement checks are being garnished for student loan default; many retirees went into debt to finance children or grandchildren, but some were seeking new career skills for themselves. More than two million defaulters are over the age of 60, according to one analysis.

And, as is the case with the mortgage bomb, controversy over student-loan debt revolves in many ways around questions of ethics. Among the dilemmas:

Do debtors shoulder the entire blame for their actions, or is their responsibility mitigated by the techniques used to entice them to borrow money for what turned out to be less-than-marketable training? A recent U.S. Senate report slammed the private, for-profit college industry, saying that educators had overpromised and underdelivered, luring often-unsophisticated students into entering programs for which they had to borrow heavily. Many who testified before the Senate claimed that their lives are being ruined by staggering debt attached to what turned out to be worthless training. Others said that they dropped out of educational programs because of inadequate mentoring by educators. But, others counter, these are adults who made their own choices. Should colleges be held accountable for a consumer’s decision? For a student’s lack of ambition? Is an unwise choice of career path the fault of anyone except the person who made the choice? Is dropping out the fault of the student or of the educator?

Along the same lines, should colleges be expected to predict who will do well in the post-college job market, who will be able to repay loans most efficiently, and who will be least likely to default? The U.S. federal government — the major player in the student loan industry — imposes sanctions on colleges with a high default rate. California Watch, a nonprofit investigative journalism center, reports that 16 California community colleges have stopped participating in the federal student loan program because of fears that they will lose federal aid if their default rates climb. Whose responsibility is it to predict the viability of a particular career path? And should that even be a consideration? Should those undertaking studies in fields not seen as particularly lucrative at the moment (e.g., the humanities, music, journalism) effectively be cut off from student loans?

Should student loan obligations be forgiven or eased? Federally backed student loans are extraordinarily difficult to escape. As the New York Times reports, even conventional bankruptcy won’t free a debtor, since Congress specifically exempted student-loan debt from bankruptcy protection in the 1970s after lawmakers became outraged over reports of new doctors and lawyers filing for bankruptcy and cleaning the slate for a lifetime of pure profit. The exemption later was broadened to cover loans from private lenders. Today, those seeking relief from student loans must go through a separate court process in which they must prove not only insolvency but a “certainty of hopelessness” for the rest of their careers. As a society, we recognize the necessity of honoring one’s promise to pay, but we also factor in the realization that extraordinary circumstances can demand relief from debt that is impossible to repay.

A sweeping “bailout” of student loans isn’t on the radar yet, and some knowledgeable observers in government contend that this category of “debt bomb” won’t produce the same sort of meltdown as mortgage debt. Of course, knowledgeable observers in government also downplayed the risk of mortgage-backed securities, so it’s reasonable not to take anyone’s word as the last word on the matter.

My view is that adults, even young ones, generally must stand by an obligation. Trust — not just money — is what propels an economy. (When you deposit money in a bank, you trust that the bank eventually will be able to return it to you, even though you know — or should know — that the bank does not keep all depositors’ money in a vault and that it would be unable to return all funds if all depositors suddenly decided to withdraw their savings at once.)

Having said that, I support protection and relief from deceptive transactions. There is evidence, as presented in recent Senate hearings, that some for-profit institutions goaded students into financing their education with public and private loans but withheld the fact that their programs did not conform to credentialing or accreditation standards, leading potential students to believe they would be employable when that assuredly was not the case.

Finally, I can’t rule out the utility of an eventual bailout if it serves the greater good. While the verdict isn’t in yet on whether government bailouts of major investment banks were an unqualified success, the action did thaw credit markets — allowing business of all types to proceed — and returned the investment houses and firms that insured them to profitability.

What’s your view? Do you support — ethically and practically — the concept of some sort of bailout for those who are underwater with their student loans? Let us know in the comment boxes below.

©2012 Institute for Global Ethics

Principle Versus Principal: The Ethical Dimension Of Underwater Mortgages


Aug 6th, 2012 • Posted in: Commentary

by Ethics Newsline editor Carl Hausman

I teach a variety of college courses centering or touching on ethics and face the continual challenge of communicating the notion that the study of ethics is not just a matter of theory or a series of meditations on abstract issues.

Also, I need to somehow show that ethical decision-making isn’t as simple as consulting a rule book where the right and wrong answers are listed in the appropriate columns. (And, of course, justify my job in the process because such a book, if it existed, would obviate the need for ethics professors.)

IGE founder Dr. Rushworth Kidder recognized these challenges from the get-go and, early in the 1990s, developed categories of “right versus right” decision-making — ways to frame a question in realistic, everyday terms — with the realization that there may be compelling arguments on both sides of the issue. Dr. Kidder further codified four paradigms for weighing such dilemmas: justice versus mercy, individual versus community, long term versus short term, and truth versus loyalty.

I was reminded of these paradigms as I read the top stories in last week’s financial press and came across a dilemma that might be classified as “principle versus principal.”

Here’s the story: A U.S. government agency called the Federal Housing Finance Agency has declined to allow permanent reductions in principal on “underwater” mortgages (i.e., those on which the borrower owes more than the home is worth) that are backed by Fannie Mae and Freddie Mac, the giant quasi-governmental enterprises that prop up home loans.

Here’s the translation: The president and many in Congress want the federal government to provide bailout money for financially troubled homeowners who owe more than their houses are worth.

Treasury secretary Tim Geithner backs a bailout, arguing that the White House plan would “provide much needed help to a significant number of troubled homeowners, help repair the nation’s housing market and result in a net benefit to taxpayers,” reports NBC News.

But Edward DeMarco, the Federal Housing Finance Agency head, won’t budge, insisting that there’s no evidence that the plan would stop enough foreclosures to be a plus on the balance sheet and arguing further that the program would encourage people who have taken on more mortgage than they can handle to default in order to reap the benefits of principal reduction.

The last part of the argument not only has an ethical base but is grounded in ethics terminology, invoking the phrase “moral hazard” — effectively rewarding reckless behavior by offering a backstop to those who get in too deep.

Moreover, the question of default itself has become part of a new ethics controversy because many underwater homeowners are engaging in what’s known as “strategic default,” failing to pay the mortgage even though they can do so, betting that the damage to their credit ratings will be less expensive in the long run than will the continued burden of a mortgage on a property now worth much less than the purchase price.
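
To see the bet in the simplest terms, here is a toy version of that walk-away arithmetic in Python. Every number is an invented assumption for illustration; a real decision would also involve taxes, deficiency judgments, moving costs, and much else.

```python
# Toy strategic-default comparison. All figures are invented assumptions
# for illustration; they are not drawn from the column or from real data.
mortgage_balance = 400_000   # what the homeowner still owes
home_value = 250_000         # what the house would fetch today
credit_damage_cost = 60_000  # guessed long-run cost of a wrecked credit rating

# How far "underwater" the homeowner is: debt in excess of the asset's worth.
equity_shortfall = mortgage_balance - home_value

if equity_shortfall > credit_damage_cost:
    print(f"Walking away 'saves' an estimated ${equity_shortfall - credit_damage_cost:,}")
else:
    print("Keeping the mortgage costs less than the credit damage")
```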

These ethical questions are part of a larger scenario: the unraveling of the global economy, partly due to the collapse in the housing market and the subsequent implosion of securities and other financial vehicles pegged to housing. And, again, it’s a scene that might be viewed as a morality play of sorts as we face a variety of right-versus-right scenarios. Here are just a few examples:

  • Justice versus Mercy: Do we extend a hand to homeowners in financial quicksand even though a justice-based view holds that they knew what they were getting into and gave their word — the most important commodity in a financial system that is glued together by trust?
  • Individual versus Community: In the case of strategic default, is it right to walk away from a debt because the calculus shows it will benefit you individually, even though widespread default is clearly corrosive to the climate of trust that undergirds the entire credit industry? Along the same lines, is it fair to shift individual financial burdens from those who can’t or won’t pay to the broader community of people who can?
  • Long Term versus Short Term: Some would argue that avoiding bailouts and backstops is a matter of accepting painful treatment for the economy out of concern for its long-term survival. While it is tempting to help those in need now, the argument goes, bailouts only delay the inevitable, allowing the problem to fester and grow until it’s incurable.
  • Truth versus Loyalty: While not a perfect corollary, note that the entire concept of bailouts of financial institutions is as much a political issue as a fiscal one. In the early days of the crisis, as documented in journalist Andrew Ross Sorkin’s excellent book Too Big to Fail: The Inside Story of How Wall Street and Washington Fought to Save the Financial System — and Themselves, Treasury officials found themselves weighing the choice of whether to pursue a no-bailout philosophy baked into their political stances or confront what they (in some cases) presumed to be the truth: Letting giant investment banks and related firms fail would produce an economic crisis that would trigger pandemic fiscal catastrophe.

So here we have an ethical case study that is neither theory nor abstract. The decision will in all likelihood affect anyone who owns a home, wants to own a home, pays taxes, or wants to avail themselves of consumer credit.

And while it’s clearly a fiscal issue based on complex projections and assumptions, it’s also an ethical issue because, for one thing, trust in the other party’s good intent is the commodity that keeps the financial system alive and the lubricant that allows its gears to turn.

So, what do you think? Given the current mortgage situation (explained in more detail in the news articles cited below) do you believe the government should offer a bailout to underwater mortgage holders in danger of default? Or should we stand on principle and refuse to reduce principal for those who knowingly incurred the financial obligations that got them in over their heads?

©2012 Institute for Global Ethics

 

Ethics Newsline® editor Carl Hausman is a journalism professor who has written three books on ethics and specializes in explanatory journalism.

For more information, see: American Banker, Aug. 3 — Bloomberg, Aug. 2 — NBC News, Aug. 1 — San Francisco Chronicle, July 31 — Forbes, July 31.

Questions or comments? Write to newsline@globalethics.org.

The School Bus and the Banality of Evil

Reprinted from Ethics Newsline

by Ethics Newsline editor Carl Hausman

By now, you are no doubt familiar with the story of the 68-year-old bus monitor in Greece, New York, who was taunted viciously by four seventh-graders in an incident captured on a student’s cell phone. But in case you missed it, here’s an overview:

Karen Klein has spent decades as a bus driver and monitor in the town of Greece, a large (population about 100,000) suburb of Rochester. About three weeks ago, she was verbally set upon by four young men, who taunted her with vile ridicule of her weight, graphic threats involving weapons and breaking into her house, and even a sneering reference to the suicide of her son a decade ago.

Klein was reduced to tears but did not respond with anger to the taunting, which persisted for about 10 minutes.

The incident was captured on a student’s cell phone and posted to YouTube, where it went viral. Something about the event touched nerves worldwide, and an internet-based fundraising effort intended to raise $50,000 for a vacation for Ms. Klein soon raised more than 10 times that amount.

Meanwhile, the ramifications of the event got even uglier, with death threats pouring in against the students and their families. Police appealed for calm, noting that death threats are a form of bullying, too. Late last week, as noted below in this week’s Ethics Newsline, the school district suspended the students for a year. Criminal charges were not brought, in part because of Ms. Klein’s reluctance to press them.

The story raised appropriate speculation about the repulsive actions of the young men who, even though they legally are children, obviously should know better. The story also caused some in the news business to scratch their heads over why, in a world full of repulsive actions mercilessly recited in the drumbeat of every day’s headlines, this incident captured international attention, headlines in major newspapers, and coverage by all major U.S. television networks.

Well, here’s what captured my attention: Unlike the vast majority of stories that flicker across my terminal each day, this one didn’t carry the dateline of an unfamiliar city or exotic war zone. It didn’t feature frightening caricatures of thundering menace, such as uniformed soldiers or gang members bristling with muscle and tattoos. Instead, it happened in the same town where I went to middle school. For all I know, it could have been on the identical bus route I took. And the perpetrators were children about the same age as one of my sons. For all I know, they could be the children of my classmates from a generation ago.

To co-opt a phrase that was applied famously in an entirely different context, the Greece bus incident drives home the “banality of evil.”

German political theorist Hannah Arendt used that terminology to back her theory that atrocities, including those committed by the Nazis, are committed not just by fanatics or psychotics, but by regular people who simply become acclimatized to the deplorable.

Various research projects have bolstered the idea that the complex dynamics of human interaction can send ethics and morals off the rails. The notorious Milgram experiments, which were conducted at Yale shortly after the trial of Adolf Eichmann (the subject of Arendt’s book Eichmann in Jerusalem: A Report on the Banality of Evil), demonstrated that ordinary volunteers were willing to deliver what they believed were torturous electric shocks to subjects if an authority figure assured them that he would assume responsibility for their actions.

About a decade later, the Stanford prison experiment demonstrated that students recruited to serve as “guards” in a mock prison could succumb to peer and supervisor pressure and begin brutalizing their mock prisoners.

We know that interactive social pressures can produce the most extraordinary changes in people. Even the bullied Karen Klein said as much when she told CNN that she does not believe her harassers were bad kids, deep down, “but when they get together, things happen.”

But the looming question is why things sometimes don’t happen. Other students on the bus chose not to participate (although none apparently took action to stop the harassment), and one felt moved to post the video.

Not all Germans were Nazis, and not all participants in the Milgram experiment delivered the shocks per their supervisor’s instructions.

One subject who refused to carry out instructions in the Milgram experiment was Joseph Dimow, an editor and columnist who attributed his lack of cooperation to his upbringing and education, which he said made him sympathetic to the oppressed and willing to challenge assumptions.

Is that one of the keys to unlocking the puzzle of how apparently ordinary children, described as “not bad” even by their victim, could lapse into such horrifying behavior?

Had they somehow missed the combination of upbringing and education that should have cued them to throw the off-switch as their behavior devolved?

What sort of training in ethics, critical thinking, or some other area could have prevented this incident? I don’t have the answer, but I’m hoping you do. Please weigh in using the comments box below.

©2012 Institute for Global Ethics

On the Internet, Nobody Knows You’re a Dog

by Ethics Newsline editor Carl Hausman

One of the most popular cartoons in the storied history of the New Yorker — depicting two canines sitting before a computer and praising the benefits of online anonymity — became a symbol of the liberating effects of donning an online cloak when it was published in 1993.

True: Anonymity offers a constellation of benefits both societal and practical.

For starters, it’s the latest in a long line of social lubricants. Masquerade balls, for example, became popular in the late Middle Ages and early Renaissance as a means of lowering social barriers. Elaborate court etiquette didn’t apply, dancers could mix and flirt without violating court protocol, and even the most shy and reticent could find relief from their inhibitions.

Chat rooms and online dating sites are pretty much a high-tech reincarnation of that premise, easing initial contact and minimizing the sting of personal rejection.

Second, anonymity has a profoundly egalitarian heritage. Mechanisms of communication have been among the first targets of tyranny for as long as media have existed. The Gutenberg press, for example, sparked not only a revolution in the sharing of ideas and the standardization of language, but also a new obsession with identifying — and occasionally torturing and murdering — the instigators of dissent. In Europe, and later in colonial America, strict licensing of the new printing press was imposed, accompanied by draconian and frequently gruesome retribution against those who used it as an instrument of protest.

Rebellious American Revolutionary intellectuals such as Thomas Paine published anonymous works, including the incendiary pamphlet Common Sense, with profound effect.

Lesser-known (at least in the United States) examples of the liberating power of anonymity persisted throughout many intellectual rebellions. In the Soviet Union of the 1950s and ’60s, the practice of Samizdat, which translates to “self-publishing,” allowed circulation of the works of dissident authors such as Boris Pasternak and Aleksandr Solzhenitsyn — writers who raised consciousness about the existence of the gulag and repression of religion.

So seriously did the Soviet Union take the threat of anonymous dissent that it required the licensing of typewriters, with the KGB retaining data on the individual typefaces of each machine so that anyone retyping works of protest could be tracked down and punished.

China, no fan of political dissent, currently requires internet users to register with local police departments, and the government frequently compels internet service providers to turn over the physical locations (generally easily traceable) of computer users who post messages of protest.

Without question, these modern incarnations of anonymous mass communication have propelled liberation in many parts of the world. Recent manifestations include nascent democracy in Myanmar/Burma, greater international awareness of human rights in China, and the domino effect of Arab Spring revolutions.

At the same time, anonymity has an ethical dark side.

It frees the commenter from responsibility for the comment in terms of both truth and motive. Anonymity on the internet produces the same freedom to harass as the telephone once offered the crank caller in the days before caller ID. But now the scale of the potential damage is magnified far beyond the one-on-one trauma of heavy breathing or requests to check whether one’s refrigerator is running.

Anonymous harassment, along with harassment committed under a false identity, has become a modern technological scourge — bullying on digital steroids.

And anonymous postings on news organizations’ websites — once lauded as conduits for tips and expression that could revive the grand tradition of the Revolutionary pamphleteers — have become both a nuisance and a nightmare for the news industry.

Publications increasingly are banning the practice altogether, not only in response to the worthless, vile, and inane dreck that’s disgorged, but also because anonymous comments are used as tools for propaganda, spam, self-promotion, and vilification of political or ideological opponents. Anonymous comments also put publications in legal jeopardy, inviting lawsuits and subpoenas from those who feel they have been defamed and demand the identity of the poster.

Some news sites now require users to register with a traceable name, even though they may post anonymously. Others have developed tiered systems where those who attach names to their comments receive more prominent placement. Others have dumped the anonymous comment box altogether.
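
As a rough illustration of how such a tiered system might order comments, here is a minimal sketch in Python; the fields, tiers, and ranking rule are my own assumptions, not any publication’s actual scheme.

```python
from dataclasses import dataclass
from datetime import datetime

# Hypothetical comment record; the field names are illustrative assumptions.
@dataclass
class Comment:
    text: str
    posted: datetime
    verified_name: bool = False  # commenter attached a traceable, real name
    registered: bool = False     # registered account posting anonymously

def placement_tier(c: Comment) -> int:
    """Lower tier number = more prominent placement on the page."""
    if c.verified_name:
        return 0  # named commenters surface first
    if c.registered:
        return 1  # registered-but-anonymous land in the middle
    return 2      # fully anonymous comments sink to the bottom

def order_comments(comments: list[Comment]) -> list[Comment]:
    # Sort by tier, then newest first within each tier.
    return sorted(comments, key=lambda c: (placement_tier(c), -c.posted.timestamp()))
```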

Still, isn’t there an advantage — even an obligation — for the online world to offer an outlet for comments that might damage the reputation of a business (or government) but expose wrongdoing? For ideas that buck conventional wisdom, the collective systems of belief that far too often are based on prejudice, intolerance, and the desire to retain power? For leaked information that a government may wall off as a threat to national security when it is simply inconvenient or embarrassing to the powers that be?

Where should society set the ethical boundaries of anonymity?

Actually, I don’t know. But I’m intending to exploit one of the real benefits of online comment boxes and ask you to do my work for me.

What do you think are the ethical limits of anonymity? Please post. And — ahem — do note our comment-box policies.

©2012 Institute for Global Ethics

Sensibility and Shoulder Surfing

by Ethics Newsline editor Carl Hausman

A recent story by the Associated Press set off a wave of indignation about the practice of prospective employers demanding social networking log-in information or simply asking job applicants to open their Facebook page while the interviewer “shoulder surfs” and looks for red flags.

That indignation worked its way up to the top levels of U.S. government when, last week, two congressmen introduced a bill that would ban such practices. In recent months, lawmakers in five states have introduced similar measures.

That’s how law often works: reactive and piecemeal, a finger stuck in the dike before the next leak appears. While this instance produced a bill in extraordinarily short order, it’s likely that the technology soon will mutate in another direction, requiring yet another law.

In many cases, these laws lag behind technological change for years or decades. In some U.S. jurisdictions, for example, there’s still no applicable law to punish someone who snoops with a video camera if the video contains no sound, because existing laws apply only to the wiretapping of audio conversations. Until recently, some online bullying cases were largely unprosecutable because no laws had been written to prohibit actions that 15 or 20 years ago were inconceivable.

And that’s pretty much what’s happened with job-interview shoulder surfing. While no one knows the true extent of the practice (the AP story was essentially anecdotal), there isn’t any law against it yet, and the decision about the propriety of the practice falls into the realm of ethics.

Is it unethical? I say yes, but first let me make a case for the practice. Many years ago, I worked for a detective agency — not working on stakeouts in a fedora and trench coat, but sleuthing around the emerging internet and other public records repositories for due diligence research on high-level job applicants, prospective CEOs, and businesspeople seeking major investments. It was a good part-time job for me and many of my chronically underemployed (or simply underpaid) journalist friends, who suddenly found our skills in sifting data and following threads of information valued by a new and lucrative type of employer.

What we turned up was often astonishing: phony degrees, past securities fraud violations, assault charges, and so on. It wasn’t always easy to track this stuff down, and I began to feel some empathy with the person who was going to write the big check and gamble on the new person.

Additionally, there are some circumstances that would seem to warrant an intrusive search. A case in Maryland, for example, involved a corrections officer who was asked to turn over his Facebook password when he was applying to be recertified for his job. The state’s side of the story involves the claim that some applicants for corrections positions have allegiance to gangs. In the Maryland case, the state further claimed that out of about 2,600 applicants for corrections positions, eight had been disqualified in whole or in part because they were observed displaying gang symbols on photos on their Facebook pages.

Still, for the vast majority of job applicants, existing procedures, including standard criminal records checks and credit bureau reports, certainly seem intrusive enough. The idea of an employer demanding log-in information seems abusive and a sidestep of the existing law and procedure governing employment.

An interviewer questioning a job candidate cannot ask certain personal questions about marital status, family, and health. But a compelled display of a Facebook page makes it pretty easy to determine some of that: Relationship status often is displayed on the main profile, and Facebook pages certainly are a likely venue for photos of applicants’ children, if they have any.

Does an applicant “like” a local diabetes support group? Could that be a hidden determining factor in rejecting an applicant?

An acquaintance of mine once told me of a trick that one company used to identify which job applicants had children: The interviewer would walk candidates to their cars. What seemed like a mannerly gesture was, in reality, a scan for car seats in the back of the minivan.

Is an expedition into someone’s social media so different? We can’t always ascertain motives, but in a hypercompetitive search for employment, could the private details from within a social media user’s personal boundaries be a deciding, if tacit, factor in a negative decision, even if those details technically are supposed to be off limits?

I suppose anything meant for the general public — for example, any Facebook page that is completely unrestricted — is fair game. But once someone has set up boundaries and demarcations in the form of privacy settings, compelling an applicant to turn over a password is no longer like reading a letter to the editor in a newspaper; it is like demanding someone’s private correspondence.

Finally, there’s the issue of balance of power. While prospective employers can’t force anyone to open their social media pages, how many of the increasingly desperate unemployed would have the nerve to decline such a request — or would be rash enough to assume that declining won’t count against them?

That’s my view. What’s yours? Do you think it’s ethical for a prospective employer to demand to see an applicant’s password-protected social media pages? Use the comment boxes below — which, by the way, are open for anyone, including your current or prospective employer, to read.

©2012 Institute for Global Ethics

What Makes Us Unethical?

by Ethics Newsline editor Carl Hausman

The “resignation letter heard round the world” continued to echo last week, as many in the business community and elsewhere shook their heads over the scathing piece in the New York Times in which former Goldman Sachs executive Greg Smith pilloried what he called the “toxic” culture at the firm, blaming a lack of morality for his exit.

Details of the incident and the aftermath are included in this week’s Newsline run-down, but something that might benefit from a little expansion and exploration is the concept of a “toxic ethical culture.”

What is it, specifically, in the culture of an organization that makes ostensibly good people go bad?

There are those who probe such questions. Caveat: As a college professor, I read more research studies than I care to and know that studies, especially isolated polls and experiments, rarely “prove” anything and frequently are contradicted by subsequent research.

Having said that, inquiry into the roots of unethical behavior, along with some plain old eyeball research, does identify some conditions that can cause people to run off the ethics rails.

From what I can see, ethical cultures deteriorate when:

  • We Rationalize. Doctors, one survey contends, are more likely to say it’s all right to accept a gift if the question is phrased in a way that mentions the sacrifice they’ve endured in their career — as opposed to a straightforward query about whether it is acceptable to take a gift. Another survey purported to show that people who believe they have acted with admirable ethics in one area may feel enabled to act unethically in another; the study indicated that those who applaud themselves for buying “green” products were more likely to rationalize bad behavior in other areas in a sort of “moral balancing.”
  • We pay attention only to the “shalt nots,” not the “shalts.” It’s intoxicatingly easy to think that avoiding unethical actions is the same thing as acting ethically. But there are many deceptions and prevarications that come about by not doing something. Some critics argue, for example, that scientists tend not to publish data that do not suit their purposes: A study showing a particular drug performed poorly in a clinical trial may simply never be written up, and there’s no law requiring that every research project be submitted for publication. A fisherman with a six-inch-mesh net might conclude that all fish are bigger than six inches because he’ll never see the ones that got away (a distortion illustrated in the short simulation after this list). The same might be said of public knowledge about many aspects of science and technology if we are shown only what researchers want us to see. The same analogy can be applied to human behavior. Some researchers, for example, claim that anti-bullying programs are largely ineffective at best because they are simply a list of proscriptions, not affirmative training in how we shall treat each other.
  • We confuse ethics with rules. Generally, something is either against the rules or it isn’t. When we redefine ethical behavior as anything that isn’t specifically prohibited, the meaning is perverted. As Rushworth Kidder often noted, among the better definitions of ethics are “obedience to the unenforceable” and “what you do when no one is looking.” Note that some of the first dominoes in the global economic crisis fell because of actions that were reckless and irresponsible but not specifically illegal, probably because no one had thought to write a rule against such doltish behavior. A side effect of the “ethics equal rules” syndrome is that the mere existence of a “code” is taken as proof of the ethical behavior of the organization. Enron, it might be noted, is said to have had an ethics code totaling 64 pages.
  • We feel superior or privileged. This is a fairly obvious one: Some people clearly believe that standards of behavior apply only to the little people. And while it’s easy to read too much into this, two interesting recent studies purport to show that rich people tend to be ruder drivers and that people who identify themselves as upper-class tend to lie and cheat because they see selfish and greedy behavior as socially acceptable.
  • Authority figures sanction our actions. This tendency was demonstrated graphically by the studies conducted by Yale University psychologist Stanley Milgram in the 1960s. Milgram found that ordinary people could be prompted to deliver what they were led to believe were agonizing electrical shocks to an experimental subject when ordered to do so by a stern and authoritative professor. In 2010, a controversial French television show basically created the same scenario, finding that people were disturbingly compliant when hounded by an authority figure and led to believe they were giving shocks to participants on a television game show.
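
The simulation promised above: a minimal sketch, under my own assumptions (a drug with zero true effect, noisy trials, and a publish-only-if-favorable filter), of how the six-inch-mesh net distorts what we see.

```python
import random

random.seed(1)  # reproducible toy run

TRUE_EFFECT = 0.0     # the drug actually does nothing
NOISE = 1.0           # trial-to-trial measurement noise
PUBLISH_CUTOFF = 1.0  # only results at least this favorable get written up

all_trials, published = [], []
for _ in range(10_000):
    observed = random.gauss(TRUE_EFFECT, NOISE)  # one simulated trial result
    all_trials.append(observed)
    if observed > PUBLISH_CUTOFF:                # the six-inch mesh
        published.append(observed)

def mean(xs):
    return sum(xs) / len(xs)

print(f"mean effect, all trials:       {mean(all_trials):+.3f}")  # about 0
print(f"mean effect, published trials: {mean(published):+.3f}")   # about +1.5
```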

I’m sure there are many other contributors to a toxic ethical culture; this list is clearly not exhaustive. I’d like to continue this collection but would be grateful if you’d help me do the compiling! Please use the comment area below to explore the question, What are other reasons that cause ethical cultures to go sour?

©2012 Institute for Global Ethics


The Shut-Up Switch

by Ethics Newsline editor Carl Hausman

The hottest item on Google’s list of trending topics late last week was a report from a Philadelphia television station about a man who admitted he used a cell-phone jamming gizmo to block conversations of other riders on his bus.

He told reporters that his improvised (and, it should be noted, illegal) device puts a lid on loud talkers and rude behavior. He also admitted taking the law into his own hands, but declared, “Quite frankly, I’m proud of it.”

The Philadelphia bus story was one of several recent items highlighting the fact that technology can be used to stifle as well as stimulate communication. Among the others:

  • In Japan, researchers have developed an instrument that essentially stuns someone into silence by beaming the speaker’s words back at him a split-second later — a delay that short-circuits the brain’s auditory pathways and causes the person to stutter to a halt. The “speech jammer,” according to some reports, is seen as an ideal method to calm a noisy classroom or a roiling demonstration. (A rough sketch of the delay mechanism follows this list.)
  • In Washington, the Federal Communications Commission announced that it is seeking public input on the proper circumstances under which the government can cut off wireless service to protect the public. The inquiry comes after San Francisco transit authorities blocked cell and internet access in some stations last August in order to quell expected organized protests.
  • In London, police have considered asking Parliament for laws that would shut down Twitter feeds in the event that authorities discover that the microblogging service is being used to form crowds and foment a riot.
  • In China, there’s been a crackdown on sales of software that is designed to circumvent the government’s content filters on the internet — filters that routinely excise material related to democracy and protest. Some speculate that the move is motivated by the government’s fears of an Arab Spring-type political movement fueled by social media.
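
For the curious, the sketch promised in the first item: a minimal delay line in Python showing the core of the delayed-feedback idea. The 200-millisecond delay and sample rate are assumed ballpark values, not the researchers’ actual parameters.

```python
import numpy as np

SAMPLE_RATE = 16_000  # audio samples per second (an assumed, typical rate)
DELAY_SECONDS = 0.2   # roughly the delay said to trip up a speaker

def jammer_output(mic_signal: np.ndarray) -> np.ndarray:
    """Return what the device would beam back: the speaker's own words,
    shifted later in time by DELAY_SECONDS, with silence up front."""
    delay = int(SAMPLE_RATE * DELAY_SECONDS)  # assumes signal longer than delay
    out = np.zeros_like(mic_signal)
    out[delay:] = mic_signal[:-delay]
    return out

# Example: one second of a 220 Hz tone standing in for speech.
t = np.arange(SAMPLE_RATE) / SAMPLE_RATE
speech = np.sin(2 * np.pi * 220 * t)
echo = jammer_output(speech)  # what the speaker hears, 200 ms late
```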

These developments represent a new twist to a very old ethical problem: the balance between the rights of the individual to expression and the presumed good of the majority to enjoy safety and security.

The Gutenberg press, for example, stirred up all sorts of draconian licensing laws when it became apparent that the machine could interfere with public order by inciting dissent. Several centuries later, pamphleteers found themselves in the government’s crosshairs in the early days of World War I, when someone advocating a munitions strike was charged with endangering troops abroad.

It was that case, in fact, that prompted U.S. Supreme Court justice Oliver Wendell Holmes to reason that the right to expression stopped at the point where words became weapons, such as falsely shouting “fire” in a crowded theater. The intent of such words was to harm, not to inform or express, reasoned Holmes, thus placing them out of the reach of First Amendment protections.

You could argue, and I might agree, that the issue of when it is right to ban communication is more salient than ever because the technology that we expect to enable communication can more effectively inhibit it as well.

For example, pamphleteers throughout history have been highly successful agitators because it is so difficult to root out every single printing press. Radio has for many years been an efficient medium for propaganda and counter-propaganda because portable sets were so common and radio frequencies so diverse that it was impossible to jam them all.

But electronic networks (and neural pathways, apparently) are quite vulnerable to certain types of throttling. Select portions of the United States’ internet backbone could, in fact, be shut down with a single switch.

Sound farfetched? I’m no technical expert, but presumably the U.S. government is, and there is currently a proposal regaining momentum in Congress to provide the president with a “kill switch” for various sectors of the internet.

The Lieberman/Collins/Rockefeller/Feinstein cybersecurity bill appears to be on the fast track for a vote by the full Senate. The measure’s stated purpose is to protect the U.S. banking systems from cyber-attack and to prevent cyber-terrorists from taking control of critical structures such as dams and power plants. Various civil libertarians, however, worry that such kill switch measures could be used to inhibit free speech and other liberties.

How far should the government ethically and legally be able to go in order to prevent insurrection and danger to the common good? Should it be permissible to cut cell-phone communications to short-circuit a riot? Should the president have a kill switch for the internet in the event of a cyber-attack?

I don’t have the answer, but you might. Please post your comments.

Reflections on the Passing of Ethicist and Newsman Rush Kidder

by Ethics Newsline editor Carl Hausman

I began my association with Rushworth Kidder about 20 years ago. As an ex-pat from New York City, I was delighted to start work in idyllic Camden, Maine, where Rush had set up shop in a warren of cluttered offices furnished with dented metal desks.

The small staff — I think it was four or five people then — would sit with Rush for hours and bat around ideas like cat toys. Through this process, he developed many creative approaches and innovations that became part of the foundation of the Institute’s work.

Interestingly, the assumptions that Rush posited in the early days have proved not only durable but prescient. They are astonishingly more salient today than they were two decades ago. They also have undoubtedly gained a greater measure of public acceptance.

Rush would have been the first to caution that a statistical linkage does not necessarily translate to causation, but I’m sure the Institute played some role in raising awareness of the ideas that have become more or less mainstream today.

Among them:

  • Ethics and rules are not synonymous. In the early days, we battled the assumption that dealing with ethical issues was simply a matter of drawing up a detailed code or set of regulations that would enforce good behavior. That view fell out of favor for many reasons, but right atop the list is the global financial crisis that metastasized after a glut of greed propelled by transactions that — while reckless — were often within the boundaries of the law.
  • Technology leverages the damage made possible by unethical behavior. Rush’s earliest example of this principle coalesced when he became one of the first Western journalists to cover the 1986 Chernobyl disaster. The tragedy was, Rush discovered, not only a technical meltdown but also a moral one: Irresponsible workers had overridden alarms and intentionally jammed safety devices in order to complete an ill-advised set of tests as quickly as possible. The point he made at the time was that a handful of amoral people in charge of a powerful technological device could create disaster unparalleled in simpler times. The message didn’t always hit home among the unimaginative because nuclear reactors were not household items. But today, computers and internet access are, and the pages of Newsline are replete with examples of how digital technology magnifies the threats posed by someone who intends to bully, defame, or inflame.
  • There are global constants in ethics. In the early 1990s, we often would encounter critics who were generally well-meaning but baffled by the contention that there could be universal ethical norms. “It’s all relative,” some said. But research and a growing body of empirical evidence — observable in each day’s news — show otherwise. For two decades, Institute publications, first in print and then online, have focused on how corruption corrodes the welds of confidence necessary to bond commerce and community — in Illinois, India, and everywhere in between. Stories we’ve carried also have documented universal reverence for equity, human rights, privacy, respect for the environment, and fundamental dignity. The names and the datelines are different and those attributes are clearly valued in different proportions by whatever government or cultural institution happens to be in power, but it’s clear that globalization applies also to ethics.
  • Ethical reasoning can be learned. Ethics seminars were not unheard of before the Institute began offering them, but Rush changed the modus operandi. Instead of lists of prohibitions, instead of lectures intended to somehow browbeat “reform” into the listener — strategies that generally have a dismal rate of success — Rush showed how ethical behavior can arise from an organic series of decisions balancing attributes such as justice versus mercy or the rights of the individual versus the good of society. He developed a system that teaches not a series of dictates and recipes, but skills necessary to adapt, improvise, and nimbly navigate the ethical obstacle course of everyday life.

These are just a few of the many facets of Rush Kidder’s legacy, of course, and in the space of this column there’s not room for much more. But to paraphrase an old newsroom saying, at least it was brief and we got our facts straight. Rush would have liked it that way.

©2012 Institute for Global Ethics


Privacy, Profit, and Purpose

by Ethics Newsline editor Carl Hausman

As Facebook readies itself to become a publicly traded corporation, it faces the prospect of unrelenting pressure to turn a quarterly profit. At the same time, it confronts close scrutiny from privacy groups and politicians over how it uses the massive troves of data that it collects from its 845 million users, as the New York Times reports this week.

Google, which already has gone public, is facing a backlash over its plans to combine data from its various services in order to wring more money out of user information. Essentially, Google wants to begin merging information from its search engine, YouTube views, and keywords identified in email to create more specific and profitable profiles. Attorneys general from a number of states have taken issue with this plan, and many privacy watchdog groups are crying foul.
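
To make the mechanics concrete, here is a schematic guess at what merging per-service data might look like; the records, field names, and user ID are invented for illustration and bear no relation to Google’s actual systems.

```python
# Invented per-service data stores, each keyed by the same (hypothetical) user ID.
search_history = {"user42": ["knee pain", "debt consolidation"]}
youtube_views = {"user42": ["DIY home repair"]}
email_keywords = {"user42": ["mortgage", "refinance"]}

def build_profile(user_id: str) -> dict:
    """Fuse separately collected signals into one ad-targeting profile."""
    return {
        "user": user_id,
        "interests": (
            search_history.get(user_id, [])
            + youtube_views.get(user_id, [])
            + email_keywords.get(user_id, [])
        ),
    }

# Each list is fairly innocuous alone; combined, the profile starts to
# sketch an intimate portrait, which is exactly the objection.
print(build_profile("user42"))
```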

From an ethical point of view, these types of privacy dilemmas are interesting because the issues created by advancing technology usually aren’t centered so much on the invasion of privacy per se as on the “repurposing” of information.

Most of us are willing, for example, to post that we “like” a particular product or service. But it has raised eyebrows — and some hackles — when that “like” information is not only used to steer appropriate ads our way but used in advertising, with our “like” nod of approval and our photograph inserted into an ad that is circulated to our virtual “friends.”

Repurposed information always made us a little uneasy, even before the widespread availability of computer databases. When I volunteered at a museum in the early 1980s, I was surprised to learn that the museum sold its member list to companies that were advertising products that, for whatever reason, were deemed to be attractive to museum members. (I learned this from an angry member who had deduced the source of some unwanted sales calls.) The museum also sold its list to other museums, apparently because a member of one is more likely to be sold on a membership to another.

The member’s complaint, I thought, was valid. Paraphrased: I joined a museum. I didn’t sign up to have my name and address sold to salespeople. I do, though, remember a verbatim quote from his harangue: “I want to deal with people I can trust.”

As my full-time job was, and has always been, journalism, I repurposed that complaint into articles, books, and presentations that became something of a cottage industry about information ethics. It’s a durable field because the advent of the personal computer, along with readily available programs to mix and match data, reliably generates controversy.

In the 1990s, when computers and database software went mainstream, I began to see how repurposed information could be put to unexpectedly sinister uses. For example, an enterprising database programmer discovered that he easily could collect public-domain data on anyone who had ever sued a landlord. And for a fee — paid by a landlord — he would search his database to see if a prospective tenant was on it. While some landlords might have appreciated the repurposed information, it should be noted that there was no mention of the disposition of the suits or whether they were justified.
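
A bare-bones sketch of the kind of lookup described above, under my own assumptions about the data: a flat list of plaintiff names pulled from public court records, with no disposition attached. All names are invented.

```python
# Hypothetical records: public-domain suit filings with no outcome recorded.
lawsuit_records = [
    {"plaintiff": "Jane Doe", "court": "Housing Court", "year": 1992},
    {"plaintiff": "John Roe", "court": "Housing Court", "year": 1994},
]

def screen_applicant(name: str) -> list[dict]:
    """Return every filing that matches the applicant's name.

    Note what is absent: whether the suit was justified, how it was
    decided, or even whether this is the same person -- the very gaps
    the column points out.
    """
    return [r for r in lawsuit_records if r["plaintiff"].lower() == name.lower()]

print(screen_applicant("Jane Doe"))  # flags the applicant, context-free
```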

A decade or so later, the techniques became a little more cringe-inducing. Databases were established that identified people who had filed malpractice suits; these databases were intended to be used by doctors to screen prospective patients. While this may have had minimal impact on people living in urban areas, it had the potential to create considerable hardship for, say, women in rural areas who needed an obstetrician.

Both practices eventually were declared illegal, at least in the states where the incidents achieved their first notoriety, but it took many years for the law to catch up to the technology.

The lag between abuse and judicial remedy can be long and even indefinite. We still don’t have definitive case law on conundrums such as:

  • How your EZPass data can be used to monitor your movements
  • The extent to which your driver’s license can be used as an instrument of social control (denying renewal of a driver’s license to someone who owes child support or library fines, for example)
  • Whether and how your movements, as tracked by the GPS in your smartphone, can be used in advertising to attract you to the coffee shop a block away

Today, unease about privacy has reached a level unequaled in the three decades or so that I’ve been covering the issue. Regulation could be accelerated, and it could be heavy-handed, making it a double-edged weapon. As much as I cherish my privacy, I also like the power of my Gmail account, the stream of news tailored to my personal surfing habits, the ability to compensate for a terrible sense of direction by seeing my destination on Google Street View, and the ability to reconnect with 240 high school Facebook “friends,” which is probably about twice the number of physical friends I actually had back then.

It may not even require legislation to tamp down the positive aspects of new media. Consumers may soon vote with their feet, fleeing a cyberspace Wild West where they fear ambush from beyond the next hill.

Why could regulation and public disapproval stanch our electronic lifelines? There are two reasons. One is obvious: People are worried about repurposed and recombined data stripping their privacy and being fused into a fissionable mass of deeply personal information that turns their private lives into public commodities.

But the other, and perhaps larger, reason is a question of ethics: Companies trading in personal information, particularly Facebook, have subscribed for too long to the old maxim that it’s easier to apologize than to ask permission. Virtually every week we see stories (many of them covered in Newsline) about major online firms that have been forced, usually under the threat of legislation, legal action, or consumer outrage, to back away from some subterranean overreaching centered on user information.

So while I hope I don’t have my electronic tentacles cut by regulation and while I hope that I don’t have to seriously consider cutting back or ending altogether my activities on various Google services or in social media, either is a possibility.

I just want to deal with people I can trust.