Blog

Lessons from Lyin’ Brian: It’s Time to Reinvent TV News

By Buying into the Archaic Show-Business Anchor Format, We’re Only Fooling Ourselves

 

Let me make it clear I’m not defending Brian Williams. I’m just pointing out that his downfall is as much our fault as his.

Williams essentially got in lockstep with the program imposed by NBC, which relentlessly trumpeted the notion that he was a larger-than-life figure who had been there and done that, was omnipotent and omnipresent, and thus somehow infused credibility into the words he read from a TelePrompTer.

There’s no question that pneumatically inflating your persona by lying about your derring-do on the job is an automatic disqualification for a network anchorman, regardless of the intensity of the pressure to live up to the image confected by your bosses’ promotion machine.

But think about the underlying issue and our tacit complicity: When we sit down to watch today’s incarnation of TV news, aren’t we engaged in a certain suspension of disbelief right from the moment we’re galvanized by the sweeping theme music? Do we really believe that graying hair and a resonant voice impart credibility? Are we actually buying the conceit that the video snippet harvested by inserting and quickly extracting the combination anchor/star into a war zone somehow creates authenticity?

Suspension of disbelief, of course, is what show business is all about. And the current model for TV news evolved from a time when the medium had to adopt entertainment values in order to seamlessly co-exist with the rest of the broadcast day.

The golden-throated “announcer” descended from masters of ceremonies on stage and in early radio.

TV-news formats were created to quantify the amount and nature of captivating video to match the visual appeal of entertainment programming. The same formats mandated selection of stories featuring danger and conflict because they aped the engrossing and familiar structure of most drama.

Highly sophisticated and rigorously researched formats resulted in virtually identical pairings of bantering middle-aged men and young blondes stamped out as anchor pairs in city after city. Anchors were tested for likeability by playing tapes of their performances to audiences who were instructed to twist dials if the anchors made them feel warm and fuzzy.

Still, though, we were given a pretty good product. We didn’t get a nightly civics lesson, but we did get some important news packaged by talented people who generally did their best to balance the integrity of the product with the unflinching demand to attract the eyeballs of a mass, undifferentiated audience.

But that mass audience is disappearing. It is fragmenting among exponentially multiplying destinations for those eyeballs.

Network television ratings have officially fallen off the cliff, and in response networks have ratcheted up by doing more of what worked in the past: trying to sell the cult of celebrity.

Today, though, the best guess among media prognosticators is that the strategy for success is to concede that you will reach a smaller audience but seek a devoted and profitable demographic and offer a leaner, more efficient product.

With that assumption in mind, I have an immodest proposal: Let’s regroup and reinvent the product. Instead of investing extravagant amounts of cash into promoting the packaged, promoted, and scientifically certified warm-and-fuzzy anchor – in hopes of saving an enterprise that’s on life-support anyway — why not hire an extra dozen competent if less-fuzzy reporters?

Instead of the D-Day scale mobilization necessary to parachute the star into the hot spot du jour, why not plow the money back into re-establishing the overseas bureaus that have been eliminated in corporate cost-cutting massacres?

If we absolutely insist on an attractive person mouthing the news, why not follow the lead of many British broadcasters and hire talented but essentially interchangeable personnel we unabashedly identify as news readers or news presenters – shifting the focus to the product instead of the personality?

I realize that everyone who fancies themselves a news aficionado advocates at least some of the above, but now is the time such an approach could actually work. It may have to.

Wouldn’t a smaller, but still substantial, audience seek out the type of product that network news organizations, with their vast non-star talent pool and distinguished institutional memory, can provide?

After all, news audiences can be catnip to advertisers. Although they tend to skew older than the coveted youthful demographics, news viewers tend to be attentive and thus attractive targets. (News audiences are usually watching the program rather than having it on as background chatter.) News audiences are also typically more affluent than viewers of other types of programming.

What do we have to lose given the inevitable fragmentation of mass audiences? Would it be such a risk for networks to exploit the true value of TV news – a window on the world that brings us reality and context — rather than to package the news as a constellation of mass entertainment revolving around a few evening stars?

The deal would have to work both ways, of course. We, as a collective audience, would have to give up the comfort of personality-driven journalism and the familiar warmth of news presented in a show-business frame.

It won’t be easy, because we like the mythology that Brian Williams tried too hard to create. Former Librarian of Congress Daniel Boorstin nailed it all the way back in 1962 in a prescient book titled The Image: A Guide to Pseudo-Events in America.

We hardly dare face our bewilderment about the gap between reality and the ginned-up media experience, he argued, “because our ambiguous experience is so pleasantly iridescent, and the solace of belief in contrived reality is so thoroughly real. We have become eager accessories to the great hoaxes of the age. These are the hoaxes we play on ourselves.”

#
Carl Hausman, Ph.D., is Professor of Journalism at Rowan University in Glassboro, N.J. He is the author of several books about journalism and media, including Lies We Live By: Defeating Doubletalk and Deception in Advertising, Politics, and the Media.

Ethical Problems in the Digital Era: Problems We Didn’t Even Know We Had

A couple of years ago I was asked to give a talk to a group of professionals who work in various online media about “emerging issues in online ethics.” I was skeptical about whether the topic was too abstract to be of interest to communication practitioners, but I was wrong – I received a lot of feedback from news, advertising, and public relations professionals telling me that these issues were indeed surfacing in their worlds.

Here are seven emerging ethical problems, leveraged by digital technology, that may soon blink on your radar:

1 Digital records can live forever, requiring proactive ethical decisions about their lifespan. As an example, think about the huge difference in impact of the digitization of a newspaper’s archives, which makes for a profoundly different ethical issue than the mere existence of some old paper copies in the storage room. There are hundreds of new dilemmas that could spring from this premise: For example, if Prisoner X is cleared of his crime by a DNA test after spending five years in jail, do we go back and correct all of the stories in the archive? Or make annotations to the old stories? Or remove the stories altogether? Even videos become immortal once they are cast around the ‘Net. A bad television commercial will be reincarnated on YouTube and other video-posting services, as will bad Karaoke recorded with a cell-phone.

2 The speed of digital communication exponentially increases the chance for error. Speed — and the expectation of speed in the age of the eternally hot connection — not only causes error but magnifies it. Add to that the pressure-cooker effect of digital communication, which affects almost everyone in every line of work. Think about how the acceleration of your working life due to the demands of your email and your Blackberry affects your performance and your state of mind.

3 Data is dangerous when unsecured, creating an ethical obligation to play it safe. Several high-profile incidents have demonstrated that plain old slipshod handling can precipitate information-age debacles. Companies have left sensitive data unsecured, unencrypted, in a cab, or on a lost laptop. And this problem doesn’t relate only to keepers of vast databases: Think about the possible consequences of hitting the “reply to all” button on an email when you meant to reply only to the sender.

4 We’re all engaged in cross-cultural ethics. The cliché about digital communications shrinking the world became a sudden reality in recent months when the fact that major U.S. Internet firms censor their content to gain entrée to the vast Chinese market showed up on Congress’ radar screen. By whose rules do we play when engaged in international communication?

5 We are what we link to. Online news publications, in particular, face ethical dilemmas related to linking. You may choose not to show a violent incident, or something that is patently offensive, but can you compromise by linking to it? It’s a problem that has little ethical precedent and needs some hard thought and discussion.

6 Search results are presumed to be honest and impartial, but are they? This is a complex question. Some sites’ search engines elevate certain results because they are paid to do so by advertisers seeking prominent placement. Most of the major independent search engines don’t have “pay to place” search results, but that doesn’t guarantee that their results can’t be cooked from the other end by webmasters using various tricks to score higher in the rankings. There’s a slippery slope between healthy self-promotion and dishonest skewing of search results.

7 Technology enables rampant plagiarism. The Internet is the world’s greatest copy machine, and the problem of ownership of ideas is incredibly complex. If you are preparing a print ad, how far can you go in copying a design element? Is cutting and pasting a report from a mosaic of many sources still plagiarism?

Do you see an emerging problem that I’ve missed?

The Flash, Apparently Official


Nov. 18, 2014

by Carl Hausman

On November 22, 1963, Walter Cronkite faced what was surely one of the most difficult and explosively pressurized decisions in the history of journalism.

Shots had been fired at U.S. President John F. Kennedy, and Cronkite had taken the lead on the story, reading a bulletin over a CBS News graphic at 1:40 P.M. EST (12:40 P.M. in Dallas, which is on Central time), beating NBC by about a minute. ABC took to the airwaves with the story about a minute later.

The bulletin that Cronkite read said that shots had been fired at the president’s motorcade and that the “first reports say that President Kennedy has been seriously wounded by this shooting.”

From that point on, CBS essentially “owned” the story. In addition to beating rivals to the air, the network had a logistical advantage: it had several experienced reporters and a good deal of equipment deployed near the motorcade and at the Dallas Trade Mart, Kennedy’s destination at the conclusion of the motorcade.

The morning papers had been printed already, and the process of printing and distributing an extra edition would take hours. Radio had large audiences in 1963, but television had supplanted it as the cool hearth where the family gathered, and people were drawn immediately to their TVs — in homes, offices, and even in crowds around stores that sold televisions.

Television still was not taken entirely seriously as a news medium, though. People tended not to believe much until it was set in type. But on this day, that was not an option.

The eyes of the nation were therefore on Cronkite. His ethical and pragmatic decisions over the next couple of minutes would be crucial for his career, the credibility of television news, and for the nation.

Was Kennedy dead? Cronkite surely wanted to be first with the story, but he also wanted to be right, so he refrained from making a pronouncement. He didn’t say that the president was dead until 2:27 P.M., and then did so in a cautious and tentative way — relaying that Dan Rather had confirmed that the president was dead, but that there had been no “official confirmation.”

Eleven minutes later, official confirmation came in a “flash” (in wire-service parlance, the most urgent level of news report, outranking “bulletin”) from the Associated Press. Cronkite intoned the most riveting and impactful report in broadcast history:

From Dallas, Texas, a flash, apparently official, President Kennedy died at 1 P.M. Central Standard Time, two o’clock Eastern Standard Time, some 38 minutes ago.

Why did Cronkite hold off? Other news organizations had already reported that the president was dead, albeit with varying degrees of certainty, citing second- or third-hand accounts from witnesses, including priests called in to administer last rites and bystanders who had overheard remarks made in the corridors of Parkland Hospital.

Part of the answer is obvious: Erroneously reporting that the president was dead would be a horrifying mistake. Still, CBS is in the news business, and the very nature of the business (as well as the linguistic derivation of the word) specifies that readers, listeners, and viewers be provided with the newest information. Viewers wanted to know and needed to know if the president had been killed.

The question became: How long could Cronkite sit on the news, and what sort of official confirmation would tip the scales to the point where he would confirm that the death was, at least, “apparently official”?

The tipping point was the flash from the Associated Press — then, as now, the news organization legendary for its accuracy, the news cooperative from which most of the business gets much of its news. Cronkite also weighed the fact that trusted, experienced colleagues on the ground had expressed their belief that the president had, indeed, died.

But there is a not-so-obvious reason why Cronkite held off. He was an experienced newsman (age 47 at the time of the assassination) and knew that initial chatter, traffic on the police scanner, and even the initial observations of reporters and officials are often confused and inaccurate.

I, like seemingly every other journalism educator and author of a certain age, wrote a book that included discussion of some incidents related to the assassination and the nature of truth-telling. I included a reflection by the New York Times’s Tom Wicker, who noted that some initial “facts” were wrong.

There was an official announcement soon after the president’s body had been taken to the hospital, made by a White House assistant press secretary, that the president was alive. Apparently some vestige of electricity in the president’s body had fooled monitoring equipment into detecting what looked like a heartbeat.

Bombarded with conflicting information and with none of the means by which a reporter of that era would normally check things out, Wicker followed his instincts and confirmed, later in the day, that the president was indeed dead.

Reporters who covered the assassination or who were peripherally involved with the relaying of reports from the scene have told me that Dallas on the day was crackling with rumors and speculation, including reports that Lyndon Johnson had been killed and claims that the shooting had been linked to an attack by the Soviet Union.

But wild speculation generally did not receive widespread circulation, and while there were many errors in fact and interpretation on November 22, 1963, and in the days that followed, the coverage of the death of President Kennedy is regarded as the signature event that helped television to evolve from a medium that was not taken very seriously into the primary source of news for the U.S. public.

So, how would a similar scenario play out today?

Don’t worry; this is not going to be a sermon canonizing the good old days and vilifying the new media. Any serious student of the news business knows that there have been errors and transgressions throughout the history of the business and even “Uncle Walter” was no saint. In his early years as a reporter in Houston, for example, he had broken into a home to obtain a photo of a crime victim. (It turned out that he had burgled the wrong house, incidentally.)

Instead, I would simply like to point out how the ethical issues surrounding breaking news have been magnified by technology and by the increasing “reporting” by people who are not news professionals (and who have little to lose if they relay incorrect information). Moreover, the modern lack of a publication schedule or a newscast that airs only at 6:30 P.M. deprives even the most conscientious journalists of the time to confer, weigh, and evaluate information.

Over the past decade, there have been instances in which the rush to be first has resulted in news organizations getting it wrong. In 2006, for example, misinterpretations of a string of cell-phone messages and announcements, and even a misunderstanding of what the pealing of distant church bells meant, led many in the mainstream media to mistakenly report that a dozen men had survived a mine disaster in Sago, West Virginia, when they had not.

In 2012, two major cable networks erroneously reported that the U.S. Supreme Court had struck down the individual mandate in the Affordable Care Act.

Social media force-fed erroneous reports about suspects in the 2013 Boston Marathon bombing through the internet and TV. One young man, falsely reported to be a suspect, actually turned himself in to police to clear his name and ostensibly to save himself from misguided vigilante justice.

IGE founder Dr. Rushworth Kidder once told me that he was largely motivated to study and reform ethics after witnessing the way that technology, in his words, “leveraged” the impact of ethical malfeasance. He often referred to the 1986 Chernobyl disaster, in which technicians had defied protocol, ethics, and common sense by disabling safety shutoffs so they could continue with an experiment.

What would have been a relatively minor issue in a power-generating plant from a previous generation thus became a disaster.

Do social media and real-time news have the potential to create what we might call an Information Chernobyl? In the modern connected world, how do we set ethical standards for reporting information that has not been verified officially, especially in times of disaster or insurrection?

It’s tempting to say that “nothing should be reported until it has absolutely been proven true” but that is, of course, an impossibility. A news medium that sits on a story until every detail has been verified by official sources is not in the news business at all. It would perhaps more accurately be described as being in the business of writing history books, or even in the business of stenography, because official sources tend to release news in a way that benefits official sources.

What’s your view? If you had to write a code of ethics for news organizations, what would be in your section about truth-telling and verification in fast-moving, breaking events?

 

Electronic Eyes and Ethics

By Carl Hausman

Imagine some everyday events in this possible future world:

On your way to work you stop to visit your elderly father, who has developed dementia and has wandered off in the past. You activate a tracking device that will monitor his movements and alert you if he leaves a predetermined area.

Next on your itinerary is the bus stop, where you drop off your daughter. Her student ID card is scanned to ensure that she will be tracked getting on and off the bus and entering the school building.

Shopping is next on the schedule. You visit your favorite clothing store, where cameras mounted in the ceiling analyze your facial features, call up records of past purchases based on the identification, and activate a video placard displaying an ad for a product a computer program thinks you are likely to buy.

As the day winds down, you stop in at your favorite restaurant. You strike up a conversation with an attractive person at the bar — and, using the camera in a computer built into a pair of eyeglasses, you run a facial-recognition program to establish that person’s identity and a database search on all available information about him or her; you view the results on a screen mounted an inch from your eye.

While the last two scenarios above are at the edge of feasibility — perhaps months away — the first two are present reality. In the U.K., many localities offer handheld or wearable tracking devices for the elderly. Recently a local police force there offered to pay for the devices, saying they would save thousands in funds needed for police searches.

And in Texas, a San Antonio school district has already implemented tracking devices to make sure students are on board their buses and within school grounds during the school day.
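How might the “leaves a predetermined area” alert described above actually be generated? Here is a minimal sketch, assuming a GPS-equipped tracker and a simple distance check against a fixed home point; the coordinates, radius, and function names are hypothetical and not drawn from any real product.

from math import radians, sin, cos, asin, sqrt

HOME = (40.7128, -74.0060)   # hypothetical "safe" center (latitude, longitude)
RADIUS_METERS = 500          # alert once the wearer strays farther than this

def distance_meters(a, b):
    # Great-circle (haversine) distance between two (lat, lon) points.
    lat1, lon1, lat2, lon2 = map(radians, (*a, *b))
    h = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371000 * asin(sqrt(h))  # Earth's mean radius in meters

def check_position(current):
    # Return an alert string if the latest GPS fix is outside the geofence.
    d = distance_meters(HOME, current)
    if d > RADIUS_METERS:
        return "ALERT: wearer is %.0f meters from home (limit %d)" % (d, RADIUS_METERS)
    return None

# Example: a fix a few blocks away triggers the alert.
print(check_position((40.7170, -74.0100)))

A production system would of course add consent, authentication, and handling for lost GPS signal, which is where the ethical questions discussed here begin.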

As for the other scenarios: Facial recognition technology is not widely used in stores, but it is in beta. A Denver company currently markets biometric technology that determines the sex of a customer, and has sold the devices to stores and casinos. Casinos in western Canada, according to a report from the Denver Post, are now using the systems to properly set the ambience of various parts of the casino floor for men and women.

Google Glass is also in the development phase, but the wearable computer is equipped with a camera and it’s not such a stretch to imagine that the device could eventually be adapted for facial-recognition technology.

The ability to physically track people has increased exponentially in a decade. The technology has advanced like a gazelle, but the law, which typically has great difficulty catching up to technology, has lumbered like an elephant.

As we have regularly reported in these pages, laws are inconsistent or non-existent for applications of such technologies as drone surveillance from aircraft, or availability of travel records recorded by tracking devices inside cars (for collecting tolls or for monitoring driving habits).

As such, much of the way we sort this emerging problem out will be based on ethics, and the laws that do emerge will likely be based on the ethical consensus we reach and, of course, on reactive outrage at those who flagrantly abuse the technology.

The ethical approaches are complex because, as Dr. Rushworth Kidder pointed out in many of his writings, true ethical dilemmas are often choices between alternatives that each present a “good” or a “right” aspect.

One “right” is fairly obvious in monitoring dementia patients: Their safety is undoubtedly enhanced. But there are alternative objections that take into account patient dignity and quality of care, also prima-facie “rights.” Some critics want limits on such devices, saying they should not be used without proper informed consent (prompting some agencies to offer the devices as early in the progression of the dementia as possible) or simply to save money. Others worry that the devices could result in a lack of human contact, and that computer chips could be improperly substituted for caregivers.

Similar concerns have been voiced over the ethics of physically tracking students through computer chips implanted in their ID cards. Some question whether enhanced safety is worth, for example, conditioning youngsters to being electronically surveilled.

Is it ethical to use identification technologies to sell products? We all value, to differing extents, our privacy. On the other hand, we are often willing to trade that privacy for convenience, including when shopping. While it’s a little unsettling to me to have my reading habits monitored by Amazon and my viewing routine scrutinized by Netflix, I do enjoy the end product: a steady flow of content suited to my tastes. It’s not such a stretch to think I might sanction a computer recognizing me if it speeded and enhanced my experience in a clothing store.

And what of Google Glass? It doesn’t take much reflection to appreciate how primitive life seemed without a smartphone: No text messages, no always-available camera, no pocket-size navigation system. But privacy concerns remain, and while the ubiquitous nature of cameras has, perhaps, not proved to be the problem we once anticipated it would be, the possibility of intrusive and secretive photography will certainly be augmented when a camera doesn’t have to be hand-held and pointed.

One of our top stories in this week’s news section highlights a lucid assessment of the ethical right-versus-right dilemma involving surveillance. Boston police commissioner Edward Davis, noting that surveillance cameras had been invaluable in capturing suspects in the Boston Marathon bombings, warned against abuse that would result in a “police state” environment: “We do not, and cannot, live in a protective enclosure because of the actions of extremists who seek to disrupt our way of life.”

What’s your view? Where, in your opinion, do we cross an ethical line when we employ surveillance and tracking devices? Enter your thoughts in the box below.

 

A Graphic Reminder

By Carl Hausman

Here’s an ethics quiz. You’re on deadline and you have to make a decision instantly.

You are editor of a newspaper, and have a photo taken seconds after a bomb exploded near the finish line of the Boston Marathon. From a journalistic point of view, it tells the story brilliantly: In the foreground, a stunned runner attempts to get up, bewildered. In the background, sprawled on blood-smeared pavement, is a man whose leg has been blown apart.

There’s no question it’s great journalism, but is running it great ethics? What do you do?

a. Run it as it stands.

b. Use it but stick it on an inside page, perhaps with a warning on the page-one story that the photo is graphic and disturbing.

c. Alter it by using PhotoShop to disguise the gaping wound.

d. Don’t use it at all.

Each avenue has an ethical fork. Running it without alteration serves an ethical good by providing unvarnished truth to the public, giving your readers an unfiltered view of what’s happening. The counter-argument is that many among your readership will be horrified by such images. Your paper is seen by all sorts of people, of all sensibilities, including children. Moreover, the counter-argument goes, we need no graphic reminders that having a bomb explode near one’s legs produces terrible injuries. We don’t routinely run close-ups of all sorts of horrible events, so showing grisly photos is needless and exploitative.

Using it on an inside page ameliorates, to an extent, the problem detailed above. There are various ways of accomplishing this in different media. Some broadcasters covering the bombing, for example, warned that upcoming footage would be disturbing. The Atlantic ran on its website a photo of a man in a wheelchair, clearly showing his leg blown away with scraps of bone embedded in the wound. Viewers were required to click through a warning page before viewing the photo.

But some critics would contend — particularly in the case of linear broadcasts viewed live — that such an approach is more eyewash than public service, as it is unlikely that people are really going to shield their eyes or jerk their children’s heads away from the screen.

Modern image-editing software can easily obscure the more graphic sections of a photo depicting carnage. On the plus side, the overall reality — the larger picture — can be shown while the image is rendered with some sensitivity. Such was the argument used by the New York Daily News when it altered the leg of the victim to look essentially normal. But some ethical principles — including codes of ethics for news photographers — hold that it is wrong to manipulate a photo to change its content.

There is sound logic to that non-consequentialist approach.  If photos are routinely altered, the viewer loses perspective on reality, and the potential for deceit is obvious.  However, the opposite argument can be quite convincing when you acknowledge that any form of photography involves some sort of manipulation of the image — even something as basic as how close you stand to the subject or how much you zoom in.

Would ethics be better served by passing on the photo? An ethical good is clearly served by not confronting readers and viewers with a gruesome image. As to the argument that the photo reflects reality, let me restate that there are many gruesome realities we do not routinely show — mangled bodies after auto accidents, burn victims after fires, close-ups of soldiers dead on the battlefield. But the counter-argument is also compelling, holding that events such as the Boston Marathon bombings are clearly of dire public importance, and that the public needs to be informed in a straightforward manner.

The take-away from this decision is that there is no “neutral.”  Both showing and not showing produce some sort of result.

Franklin Roosevelt confronted such a bifurcated problem in World War II. In the early years of the war, he took the standpoint that it would be wrong to show pictures of American war dead and ordered them censored.

But in later years, concerned that American support for the war effort might be flagging from exhaustion, he ordered that shots of dead soldiers and Marines be released to the press, hoping to rekindle outrage that stoked the massive materiel machine at home.

Remember, at the beginning of this commentary I warned you this was a quiz.  What would you do, and why?  If you want to see the photo I’ve alluded to, both in original and altered form, see the New York Times website:

http://www.nytimes.com/2013/04/18/business/media/news-media-weigh-use-of-photos-of-carnage.html?_r=1&

Please enter your choice and defense of your action in the comment box below.

The Mirror with a Memory

By Carl Hausman

For some people, the results of a Google search have become a potentially life-altering event. Past indiscretions may live forever — or at least make the immediate future uncomfortable. A Mashable technology writer reports that he’s currently attempting to help his son, a junior in high school, whose name was picked up by a local paper after a minor police citation. His interaction with the court system now appears near the top of Google search results, “immortalized” for college admissions officers — about 35 percent of whom, according to a 2012 survey from the Kaplan test preparation firm, say they discovered negative information online that damaged students’ chances of getting into college.

Mistaken identity can lead to confusion or worse. In some cases, the uncertainty is benign. A search for my name can lead you to: a teenage journalist, unrelated to me, who covers kids’ issues for a New Jersey website; a philosopher, also unrelated, who writes books that I clearly could not write or understand (see the title, Pragmatism Considers Phenomenology); and a piano player of some renown. Such confusion is probably a net benefit for me, leading the curious to think my life is far more interesting than it is because I am adept at skateboarding, analyses of phenomenology, and jazz riffs.

But if mug shots of a bank robber with an identical name surfaced near the top of the results, I likely would not be so complacent.

Perhaps most ominously, though, an internet-savvy tormentor with a grudge can shape public perception of you on a worldwide stage. A recent book by James Lasdun, for example, recounts the novelist-professor’s struggle with an apparently emotionally disturbed woman (at least in his version) who attacked him as a racist in anonymous Amazon reviews, altered his Wikipedia page, and dropped various accusations of plagiarism and sexual indiscretions — and, in the words of Bloomberg reporter Sarah Hepola, created “a foggy and paranoid list of grievances that’s ingeniously hard to combat.”

At the core of internet reputation dilemmas are legal issues that may be impossible to resolve because of First Amendment protections, as well as ethical issues that probably won’t provide a definitive solution either but can at least guide the debate over how we handle a medium in which information can live forever and be accessed anywhere.

Among them:

  • What duty does a reputable internet user have to “unpublish”? Should a newspaper delve into its online archives to eradicate information about criminal charges if the case against the defendant turns out to have been based on faulty information? If a court decides to expunge a defendant’s record, should a news organization follow suit and remove all stories about the original conviction?
  • Along the same lines, is there a statute of limitations on indiscretions? Fifteen years ago, if a young reporter was found to have plagiarized parts of an article, the typical punishment might have been a suspension and a citation about the plagiarism in the paper along with an expression of editorial regret. But 15 years ago, the archives of a paper were on dusty shelves in a basement; today, such a notation might appear near the top of search engine results for years, becoming something of an electronic — and indelible — scarlet letter. Should there be a time period after which certain information is removed?
  • Do young people deserve special protections against social media exposure? As absurd as it might appear, some of the most visible media coverage of the upcoming spring break season is advice on how to avoid bad behavior that can produce a long-living stigma. As author and parent advocate Sue Scheff warns in the Huffington Post: “It’s necessary for us to not only talk to our kids about the dangers that they face online, but also to remind them of the importance of their virtual manners. What our children need to understand — especially teens that will be applying to colleges and their first time jobs — is that their online image is just as important as their parent’s or any other adult’s in business today.” Do social-media sites thus have a duty to protect young people from themselves? Should such sites take a more active role in scrubbing potentially compromising photos not only from the user’s site but also from places where they are reposted?
  • Should reputable — or even disreputable, for that matter — sites crack down on anonymity? While anonymous postings can afford laudable protection for outing of the truth in the face of possible retribution, they also allow for the reverse: retribution that has no linkage to the truth. Without accountability, anonymous forums can produce virally libelous material that spreads and mutates beyond control.

Perhaps technology might produce at least a partial answer to some of the problems related to online reputation. Several firms are offering applications that allow photos and emails to self-destruct on a timeline determined by the sender — anything from days to seconds. The goal, of course, is to prevent material from being reposted and forwarded until it spawns viral infamy. (If only real life came with a self-destruct command for past indiscretion!)
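As a rough illustration of how such a self-destruct application might work, here is a minimal sketch of a message that carries its own expiration time; the class and field names are hypothetical, and real services enforce deletion on their servers rather than trusting the recipient’s software.

from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Optional

@dataclass
class ExpiringMessage:
    body: str            # the content the sender wants to share
    sent_at: datetime    # when the message was created
    ttl: timedelta       # how long the sender wants it to remain readable

    def read(self, now: Optional[datetime] = None) -> str:
        # Refuse to display the body once the sender's deadline has passed.
        now = now or datetime.utcnow()
        if now > self.sent_at + self.ttl:
            return "[message expired]"
        return self.body

msg = ExpiringMessage("See you at 8", datetime.utcnow(), timedelta(seconds=30))
print(msg.read())                                           # readable now
print(msg.read(datetime.utcnow() + timedelta(minutes=5)))   # expired

The obvious weakness, and the reason the dilemma remains ethical rather than purely technical, is that nothing stops a recipient from copying or screenshotting the material before the deadline arrives.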

What’s your view? Are there any ethical ideas or approaches that you use to tame a medium that has become a “mirror with a memory,” an appellation originally applied more than a hundred years ago to the nascent art of photography?

 

 

A “Real” Dilemma

By Carl Hausman

Was pop singer Beyoncé lip-syncing the National Anthem during the inauguration of President Barack Obama in January 2013?

Basically, the facts of the case are these: Following a flawless presentation of the National Anthem, the internet became abuzz with speculation that Beyoncé actually had mouthed the words to a pre-recorded vocal track.

Accounts differed, and comments from those close to the story changed as the coverage developed. Beyoncé herself has, up to the time this column was written, remained mum.

While some involved in the presentation originally said it was lip-synced in toto, some of those stories were later changed to indicate that only parts of the presentation — segments of the instrumental backing — were pre-recorded.

Meanwhile, the story grew legs, as we say, and as of late last week, acoustic engineers were analyzing recordings of the event (with mixed verdicts as to whether it was live) and musicians with weighty resumes were weighing in about the propriety of substituting lip-syncing for the real thing.

Some news commentators, including this one, note that it’s ironic that this particular controversy emanates from an event that was essentially confected for ceremonial purposes (Obama was actually sworn in the day before) and featured an inaugural address that, if it followed the trend of most presidential addresses, was undoubtedly ghosted at least in part by professional speechwriters.

Happily for ethics columnists (including this one) it’s a classic “line-drawing dilemma,” a perfect vehicle for debate.

On one hand, we certainly tolerate some level of technological intervention in a musical performance, although sometimes grudgingly. Purists are sometimes upset by the use of microphones in live stage plays, for example, arguing that tradition dictates that the performer’s unaugmented voice should fill the hall. (This, of course, raises the question of whether designing a theater for vibrant acoustics is somehow “cheating.”)

Many musical groups use pre-recorded tracks to provide instrumental reinforcement to the background. In one of my previous failed careers, music, I would sometimes use a pre-recorded track from an instrumentalist who wasn’t actually in attendance or use an automated rhythm beat from an electronic piano.

This didn’t seem like cheating to me and probably doesn’t to you, either, especially in light of the fact that no one paid money to see my group play (nor in a just world should they have). But what if you paid several hundred dollars to see a Broadway musical and later learned the entire event was pre-recorded?

How would you react if you paid to see a wildly popular group and learned — when a background record actually started skipping while they were on stage — that the singers were merely mouthing the words? And if you learned upon further investigation that the vocals on their platinum-selling albums actually had been performed by other singers?

As you may remember, that scenario is true: A duo named Milli Vanilli, one of the most popular acts of the late 1980s and early 1990s, was outed when their lip-syncing track began to skip and they scurried off-stage. Later, their Grammy Awards were rescinded when other singers admitted they’d fudged the vocals on the albums, and the group was hit with dozens of lawsuits from angry purchasers who claimed they had been defrauded.

In addition to a line-drawing puzzler, the Beyoncé controversy falls neatly into what Rushworth Kidder characterized as a “right versus right” dilemma — one in which actions on both sides of the issue represent an honorable motive.

It is clearly “right” to want to provide the best possible performance for the audience and to lend dignity to an important event. Outdoor performances are notoriously tricky, with cold air producing bizarre acoustical effects and the chill taking a toll on the performers’ vocal cords and the tuning of the instruments.

But a critic of the alleged lip-syncing could reasonably argue that if the entire song were mouthed the audience was being deceived, and that an event such as a presidential inauguration is an artifact of history and thus deserves the patina of authenticity.

Authenticity is a not-unimportant issue in media, particularly in news and depictions of events purported to be “real.” When videotape was first made practical for use in television broadcasting, networks imposed strict rules about announcing that portions of the program were pre-recorded because of fear that viewers would be deceived about time and place.

While that approach seems quaint by modern standards, it proved to have some merit. Various controversies have surfaced about the “reality” of news, including protests over the use of “cutaway” shots by TV reporters who re-ask questions after the interview so that a one-camera recording can look like it was made with two cameras and a “reverse” shot of the reporter can be used to facilitate editing.

What’s wrong with this approach? For starters, some reporters were re-asking the questions (sometimes after the interviewee had left) in a hostile and aggressive manner, making it appear that they had diligently proved their investigative-reporter mettle by prying the information out of a subject who in fact had offered it quite willingly.

News organizations also came in for some heat when it was revealed that it was common practice to use videotape “as-live,” as the practice is called, meaning that the live anchor would pose a question to an interviewee and a pre-recorded tape would be rolled in response, making it appear that a pre-recorded piece was part of a live conversation.

(Why would a news organization do this? In the news business, “live” coverage is a coveted commodity, which is why we commonly are treated to the spectacle of astonishingly expensive satellite news vans relaying us breathless “live” reports that it’s snowing.)

While “as-live” coverage could be benign, it’s not hard to imagine a temporal shift in late-breaking news that could inadvertently create a serious distortion, so recreation of interviews began to fall out of favor.

So did the practice of local entertainment reporters staging what appeared to be person-to-person interviews with movie stars that actually were confected out of pre-recorded tapes circulated by the studios for the express purpose of constructing convincing but phony interviews, with holes left for insertion of a local reporter’s question.

And, of course, much “reality” television today isn’t real at all. Some shows that look real — right down to shaky hand-held camera shots — are recreated in whole or in part, “based on” what we are told in the fine print during the closing credits are “actual events.”

The ethical implications of the manufactured event are not always trivial. Misconception and misapprehension can have profound effects, and while the alleged lip-syncing of a song is not particularly important, it is an interesting pin-point on the continuum of fabrication of public life and public knowledge.

Former Librarian of Congress Daniel Boorstin crystallized this concept in a 1961 book that is probably more relevant today than when it was published. In The Image: A Guide to Pseudo-Events in America, Boorstin speculated that not only are synthetic events easier to produce, we also seem to like them better.

The average citizen, Boorstin wrote, “lives in a world where fantasy is more real than reality, where the image has more dignity than its original. We barely dare face our bewilderment, because our ambiguous experience is so pleasantly iridescent, and the solace of belief in contrived reality is so thoroughly real. We have become eager accessories to the great hoaxes of the age. These are the hoaxes we play on ourselves.”

An interesting thought in an era when it’s not uncommon to see concert-goers transfixed by the image they are recording on their cell phone during a live concert even though the real performers are standing mere yards away — lip-syncing.

What’s your view? Is a seemingly minor contrivance such as lip-syncing unethical?

 

The Top 10 Ethics Stories of 2012

The New Year brings opportunities for reflection, planning, and trotting out the tried-and-true columnist’s formula of picking the year’s top stories related to the scope of the publication.

Here are my choices, ranked not necessarily by the importance of the story, per se, but rather by how the story influenced the public’s perception of ethics and how the event was viewed by the mainstream media as an issue centering on moral judgment.

Here we go with rankings from least to most important:

10. The Libor scandal. This story involved lending institutions rigging a key interest rate, the London Interbank Offered Rate (Libor), in order to artificially boost profits. It had particular impact in the United Kingdom, where it is reported that many customers — fed up with what they view as an institutional culture of cheating — are moving their funds to banks that promise better stewardship.

9. The New York Post’s photo of a doomed man about to be hit by a subway train. The disturbing cover prompted ethics debates in newsrooms, blogs, front pages, and television talk shows worldwide, centering on whether it is a journalist’s job to help or to observe, and on how common it is for groups of people to assume that someone else will help in a crisis.

8. The “moral hazard” of rebuilding vulnerable areas after the devastation of Hurricane Sandy. “Moral hazard” — the term applied to rewarding risky behavior by providing a bailout — has commonly been applied to lending institutions. In the aftermath of Hurricane Sandy, though, many political leaders began discussing, in ethical terms, the dilemma of committing government funds to rebuilding areas likely to be damaged again.

7. Corruption in China. 2012 saw a series of stories centering on politicians who fell from favor because of their association with graft, as well as the Chinese populace’s impatience with an ingrained culture of corruption in the body politic. Of particular significance was the central party’s grudging willingness to at least partly acknowledge public outrage over the issue and oust minor (and some major) officials caught with their hands in the till.

6. Negative campaigning during the 2012 presidential race. Polling data showed weighty discontent with negative campaigning and indicated that many thought that the quality and usefulness of political advertising continued to deteriorate. Such stories were perhaps indicative of a larger ethical issue — the increasing polarization and intransigence of political parties.

5. Inflammatory media and the right of free speech. The Innocence of Muslims, a virulently anti-Muslim propaganda film, prompted both massive demonstrations and physical violence as well as a moral debate over the limits of tolerance. Reported in stark ethical terms was the dilemma of whether nations and media that defend free speech should draw the line at material that is likely to incite outrage and violence, and if so, where such a line should be drawn.

4. The resignation heard round the world. Former Goldman Sachs executive Greg Smith quit in very public fashion, publishing his resignation letter in the New York Times and accusing his brokerage-house employer of manipulating clients, who he said were derided as “muppets” (British slang for a stupid person) in internal emails. While Goldman initially brushed off the development, it later said it took such matters seriously and began an investigation. While the daily news menu through all of 2012 was replete with items about financial shenanigans, the Smith letter seemed to clarify and condense public anger about not only financial misdeeds, but also about a toxic culture that Smith claimed fueled investor abuse.

3. The Petraeus affair. While sex scandals are commonplace media fodder, this one ignited an intense debate centering on the ethical implications of power and privilege. It generated an ethical earthquake throughout the U.S. armed forces, which have instituted several training programs for officers about the moral responsibilities that come with power.

2. Tie: The News Corp. scandal and the aftermath of gun violence in the United States. Both stories had powerful angles in the weeks of analysis that followed breaking developments. In the case of the News of the World’s hacking into the voicemail of a murder victim, the incident took on astonishing prominence as a government commission attempted to balance the need for a free press with restraints on unethical behavior. Ethics issues also were at the forefront in world-press analyses of shooting sprees in a Colorado theater and a Connecticut school. Beyond the obvious questions dealing with gun control, the shootings prompted ethical disputes and dialogue over how the mentally ill should be handled by the justice system and the ethics of reporting on such a sensitive subject.

1. Penn State. It’s hard to think of a case in my two decades of writing and reporting about ethics that invoked such a stark level of moral introspection. The events surrounding the child sexual abuse carried out by an assistant coach showed the corrosive effects of a culture of corruption and privilege, in which members of an important, powerful, and profitable sports program shielded the perpetrator. In some cases, the shielding may not have been direct: Rather, it involved passing the buck up the chain of command and assuming that someone else would address the problem — a retreat into the blindness of bureaucracy. No recent case, in my opinion, more starkly highlighted the need for moral courage on the part of individuals.

What’s your opinion? Are there any stories you think should have been included in 2012’s Top Ten? Please let me know in the comment boxes below.

There’s obviously a lot of room for opinion here, so let me answer back in advance with a response variously attributed to the correspondence of Edward R. Murrow, Wolcott Gibbs, and H. L. Mencken, all of whom are reputed to have answered all disputes with a form postcard saying, “Dear Sir or Madam: You may be right.”

©2013 Institute for Global Ethics

 

On the Record About Going Off the Record

by Ethics Newsline editor Carl Hausman

This week’s edition of Ethics Newsline contains a reference to a Washington Post story quoting anonymous sources who say that Paula Broadwell, whose affair with former CIA chief David Petraeus led to the legendary general’s downfall, had accessed classified documents with Petraeus’s permission — if true, a sharp reversal of what both parties have been maintaining publicly.

Last week’s edition of Newsline contained another story concerning the Petraeus scandal: a denunciation by an Emerson College journalism professor of the Boston Globe’s use of anonymous quotes by sources identified as Harvard professors. The professors characterized Broadwell as a shallow and self-aggrandizing student during her studies there.

Jerry Lanson, a former newspaper editor, characterized the Globe’s use of the anonymous quotes as “the ugly practice of protecting anonymous cheap shots.” Lanson also questioned the intent of one of the sources: “And just what was the professor’s motive? In trying to discredit Broadwell as a student, was he (or she) also attempting to distance the hallowed halls of Harvard’s Kennedy School from a fallen alum…. None of these questions would be relevant had the Globe rejected such a flimsy pretext for anonymity.”

What accounts for this publication’s seemingly schizophrenic ethical view of anonymous sources — carrying a piece decrying them one week and a story employing them the next?

Good question — and a tough and durable one. Probably no ethical issue has been more prominent in the history and practice of journalism, and it’s an eloquent example of not only why ethics are important, but why it’s important to talk about them in public.

First, my obligatory lecture: Use of anonymous sources is a clear example of the classic ethical balance between consequentialism (i.e., the ends justify the means) and non-consequentialism (i.e., motives, not results, are the important motivating factors in ethical decisions because you can’t predict consequences and pretending that you can gives you license to be dishonest).

The consequentialist argument about anonymous sources is a powerful one, and usually begins and ends with Watergate, a story heavily reported through anonymous sources because those speaking feared for their careers and possibly their freedom and — if you believe a detail reported by Woodward and Bernstein, who said a Nixon operative had threatened to assassinate a reporter — their lives.

Defenders of anonymous sources note, with reasonable certainty, that the story of the criminal conspiracy emanating from the White House could not have been gathered had sources not been kept off the record.

(Note, by the way, that major corporations and the government allow and sometimes reward anonymity by offering anonymous tip lines for those who fear retaliation — another story covered from time to time in this publication.)

But non-consequentialists volley back with some persuasive points. Former USA Today publisher Allen Neuharth at one time essentially prohibited the use of almost all anonymous sources, but admitted that the paper relented in the face of competitive pressures — and as a result became embroiled in a mortifying scandal when it was discovered that one of its star reporters had used the cloak of anonymity to print colorful, evocative, and totally fabricated quotes.

Some non-consequentialists argue that you can get virtually anything on the record if you try hard enough. One of the most famous examples was a story published by the Pittsburgh Press in the 1980s that dealt with the buying and selling of human kidneys. Co-author Andrew Schneider told a journalism review that he chose that route because he wanted a bulletproof piece: “It’s really hard to talk about fictionalizing something or taking it out of context when you’ve got a couple of hundred doctors, nurses, procurement people, and donor families all talking [on the record] about the issues at hand.”

Those who oppose the use of anonymous quotes also point out that cloaking a source greases a brisk commerce in falsehood. For one thing, a source who is not accountable has little investment in telling the truth (or at least the whole truth). Second, the source may be floating misleading information just to test public reaction and subsequently disavow the story if the information is negative — the so-called trial-balloon strategy.

So it becomes apparent that whether to use anonymous sources can be, like most other ethical issues, a “right versus right” decision since both sides can present a clearly credible case. The balancing point is determined by weighing the ethical ramifications on both sides of the scale. Generally, most news organizations that have thought through the issue have policies that deal with the process in some detail.

Many codes of ethics require that an editor know the source and the source’s credentials (and motives, insofar as motives can be known) before anonymity is granted. The Washington Post’s guidelines on anonymous sources, which run over 3,000 words, specify that at least one editor know the source and “jointly assess” the decision to grant anonymity with the reporter and, presumably in some cases, other editors. Other codes proscribe granting anonymity if the information is self-serving, obvious, or — as specifically prohibited in the Los Angeles Times’s guidelines — an “ad-hominem attack.”

Many codes require that reporters and editors do their best to independently verify information that comes from an off-the-record source, ascertaining not only its veracity but whether it is exaggerated or biased.

Also, it is common for codes to insist not only that the source have a compelling reason to remain unidentified but also that the reason be explained to the reader or viewer. A 2008 study showed that about a quarter of news stories offered some explanation of why sources were granted anonymity, up from about 10 percent in 1998.

The Washington Post piece cited in this week’s Petraeus/Broadwell report states that the sources were granted anonymity because “the inquiry is ongoing,” a plausible if not especially thorough explanation. Note, too, that just because codes are in print does not always mean they are adhered to, as Washington Post ombudsman Andrew Alexander uncovered in two columns.

But clearly there’s a trend of transparency — or at least the appearance of transparency — in the granting of anonymity. Are the safeguards described above adequate? What’s your view about anonymous sources? Let us know in the comment box below. (And by the way, note that our policy requires that you provide and display your name.)

©2012 Institute for Global Ethics