Friday, April 29, 2011
On December 16, 1773, a crowd estimated at 7,000 people gathered at the Old South Meeting House in Boston. It was the beginning of a protest that culminated, later that night, in the famous Boston Tea Party. Historians are still trying to figure out how they pulled it off without Facebook.
The point is that they did it, and so did the Parisians who showed up at the Bastille in 1789, the Iranians who rose against the Shah in 1979 and the Chinese who arrived at Tiananmen Square in the spring of 1989. When something’s afoot, the word always seems to get out.
Human beings are a communicative species, and the hunger for news seems to be hard-wired in us. If you look back at the racist Hollywood movies of the 1930s through the '50s, the Native Americans were always using smoke signals and the tribal Africans were beating on tom-toms. Everybody was staying on top of new developments in the neighborhood.
All of which makes me tear my hair when I come across the breathless news accounts of how Facebook made the popular uprisings in the Middle East possible. Doesn’t anyone remember anything about anything any more?
Facebook is a tool for communicating, and in some ways apparently a better one than what came before. Arguing against it is like arguing that we should still be using messengers to hand-carry notes when the telephone is widely available. Still, the tool is the servant of the craftsman who uses it, and it’s merely being used to do the same work as before, only more efficiently.
Tools are also amoral, and that fact makes me a skeptic toward sweeping claims that the Internet and all its features, such as Facebook, will make the world a better, more open place. They probably will in some ways, but China seems to be doing a pretty good job of blocking out what it doesn’t want its citizens to see.
This week at my Rotary Club, I tried an experiment. We “fine” members of our club for things they’ve done in order to raise money for the schools and youth programs we support. The fines, which run from two to twenty dollars, are a good-natured affair, and the Detective who imposes them typically gets information from the people themselves, their friends in the club, or newspaper articles.
It was my turn as Detective this week, and for the first time, I used Facebook as a source of material for fines. There was no shortage of it, even though I limited myself to people in the club who had asked me to be their friend. People were fined five dollars each for such things as liking a youth boxing club, serving wild boar cheese sausage at a barbecue, and dining at a nice restaurant.
When I announced at the beginning that I would be doing fines based on what people had posted on Facebook, I could see a few worried looks in the audience. It had probably never occurred to some of these successful professionals that the material on their Facebook page could be used against them in any way.
So think about it for a minute. If I could use Facebook to do this to my friends, what could a ruthless dictator, with a few well-paid computer professionals on his staff, use Facebook to do to his enemies? It wouldn’t be any tea party.
Tuesday, April 26, 2011
Is there anything more pointless and depressing than public opinion polls about the popularity of prospective presidential candidates this far in advance of the election? Most of the country isn’t thinking about it at all, and even the political consultants don’t know yet who their hookup will be.
We don’t even know what the 2012 election will be about. At this point in 1979, the Iran hostage crisis was six months in the future and totally unexpected. At this point in 2007, the near-collapse of the financial sector was more than a year away.
Sometimes those changes of circumstance can turn the right candidate into the wrong candidate. At this point in 2008 I thought John McCain was the Republicans’ best hope because his maverick reputation distanced him from the Bush administration and his age and experience would resonate well in contrast to Obama’s youth and slender resume.
That analysis turned out to be completely wrong, but at a time when the economy seemed OK and most voters would have identified Sarah Palin as a celebrity chef, it made sense. Instead, McCain’s choice of her as a running mate made him look like a man with no judgment, and his inability to respond emotionally or intuitively to the financial crisis sealed his fate. Though I never would have said so six months earlier, Ron Paul or Mike Huckabee would have been better nominees, given how things turned out. They, at least, would have known how to demagogue the economic meltdown.
Like games in team sports, close elections are often decided by matchups. If Bill Clinton had been able to run for a third term, George W. Bush could never have beaten him. Bush’s warm, easygoing personality was a good matchup against Al Gore’s cerebral coolness, and it allowed him to win an election he shouldn’t have — in more ways than one.
So, assuming they’ll be running against Barack Obama, which of the Republican candidates is the best matchup? Hard to say. Someone calm and reassuring probably can’t beat Obama at that game, and someone strident and full of conviction would have to hit exactly the right notes all the time to avoid driving voters to the incumbent. Ronald Reagan might have been able to pull it off, but he’s sui generis and long gone.
And if, on November 6, 2012, the unemployment rate is near 5 percent, gasoline prices are below $3.75 a gallon, and there hasn’t been a major terrorist attack, there probably will be no such thing as a good matchup against Obama. Only twice in 75 years has an elected incumbent president been beaten. In both cases the challenger was an exceptional politician who was running against an economic downturn (and, in Reagan’s case, a foreign-policy nightmare).
So is there anything that can be said at this point about the next presidential election? Based on a long view of things, here are two predictions.
First, the Republicans won’t nominate Sarah Palin. She has become, either by choice or happenstance, a politician without a program who is no more than a spokesperson for a set of resentments. The public has turned against her. She could still rehabilitate herself as a serious politician, interested in answers rather than complaints, but that’s a project for 2016 or beyond.
Nor will Donald Trump be the Republican nominee. He’s riding a wave right now because he’s blunt and outspoken and not afraid to make an ass of himself. That can get you elected to Congress, and if the stars are aligned right, governor (see Ventura, Jesse). But in a presidential campaign it’s political fool’s gold and won’t stand up to scrutiny. Americans may flirt with a candidate like that, but they won’t marry him.
Friday, April 22, 2011
For most of the 1980s, I was an editor at a daily newspaper, and the management spent a lot of time trying to figure out why our circulation was stagnant or declining. The one answer that never would have occurred to any of us was that people didn’t want to pay for the newspaper.
As it was, we were practically giving it away. Monthly subscriptions at the beginning of that decade were less than $4, which came to about 15 cents a day. For years the industry rule of thumb was that revenue from subscriptions and single-copy sales should cover the cost of the newsprint and ink, and advertising would cover the labor and overhead.
With the surge in commodities prices in the late 1970s, sales were no longer covering the paper and ink, but we were cautious about raising prices. To the extent it was feasible to do so, we wanted to keep the newspaper affordable to just about everyone.
Along came the Internet, which did a “shock and awe” number on the entire industry. One of the things it proved was that the problem of declining newspaper readership had little to do with the content newspapers were offering, which is what we obsessed about in the 1980s. More people than ever before are reading newspapers today, but they’re doing it online, where they don’t have to shell out even small change for the privilege.
The website for our local county-seat daily generates by far the most hits of any website based in the county. Yesterday’s paper had a full-page promotional ad trumpeting the fact that the site generates 395,000 unique hits per month. Paid subscriptions to the dead-tree version are just under 23,000.
Paradoxically, those 23,000 print readers are worth more to local advertisers than all those web readers, many of whom had found one story while Googling and are in no position to buy anything at a local store. The person who figures out how to make serious money from a newspaper’s web site will be the next Mark Zuckerberg or Steve Jobs.
Recently The New York Times announced that it would begin charging for use of its news website. After a certain number of free visits each month, a user would have to pay for a digital subscription. From any rational perspective, this is a sane and sensible move. After all, Safeway doesn’t give away milk, bacon and eggs. The Times’ news coverage and commentary is, by any standard, a superior product of considerable value. It takes a large number of talented and highly paid people to produce. Of course we should pay something if we want it.
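What the Times announced was a metered model, and the mechanics are simple enough to sketch. The 20-article monthly limit matches the paper's 2011 announcement, but everything else here, the function, the names, the data structure, is an illustrative toy, not the Times' actual system:

```python
# Illustrative sketch of a metered paywall: count each reader's article
# views per month, and require a subscription once the free quota is used.
# (All names here are hypothetical, not the Times' actual code.)
from collections import defaultdict

FREE_ARTICLES_PER_MONTH = 20       # the limit the Times announced in 2011

views = defaultdict(int)           # (user_id, month) -> articles read

def can_read(user_id: str, month: str, subscriber: bool) -> bool:
    """Allow the view if the reader subscribes or is still under the meter."""
    if subscriber:
        return True
    if views[(user_id, month)] < FREE_ARTICLES_PER_MONTH:
        views[(user_id, month)] += 1
        return True
    return False                   # time to show the subscription prompt
```

A nonsubscriber's 21st article in a given month comes back False and triggers the sign-up page; the meter resets when the month changes.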
Almost nobody else sees the issue that way. Perhaps because newspapers have been giving away online news for so long, people have come to insist on getting it free. We live in the so-called information age, yet good information has almost no cash value. How did it come to pass that the most valuable commodity of all is expected to be free?
I worry about the long-term consequences, because one thing the Internet hasn’t changed is the reality that it takes time and money to put together good information on complex issues. If that cost can’t be recovered, news operations have no choice but to produce an inferior product for less. Already, I read about and see for myself web sites that pay writers a pittance to generate a high volume of stories or rely on unskilled volunteer or low-paid help to fill their space. Do we really want our knowledge of the world to be provided by amateurs? You get what you pay for.
Tuesday, April 19, 2011
Arthur Rubinstein, the great concert pianist, was once asked by a reporter how often he practiced. Every day, he replied. Somewhat taken aback, the reporter asked what would happen if he didn’t.
“Well,” Rubinstein said, “if I don’t practice for one day, I can see the difference. If I don’t practice for two days, the other musicians can see the difference. And if I don’t practice for three days, everybody can see the difference.”
In a weird sort of way, that’s a metaphor for what happens when an organization, be it a business or a government or a nonprofit, starts getting into a cycle of budget cuts.
Round One of the cuts can more often than not be made without doing too much harm. Employees can work harder, at least for a while; some compensating efficiencies can be found; and some cuts in product or service can be made that aren’t terribly obvious to most people.
If a second round of cuts is called for, they’ll typically show through the smoke and mirrors. Unless those cuts are carefully and shrewdly made by rethinking the organization’s scope and mission, as well as its practices and procedures, all the regular customers will notice them, and if it’s a business, so will the competitors. A third round, if it comes, is usually the beginning of an inexorable slide down the precipice.
Last week one of my Facebook friends posted a call to join with progressive and like-minded people in opposing all cuts to Medicare and Medicaid. I declined to respond because it seemed the wrong approach to the issue. The real question is how those programs can be made more economical and efficient in order to save them in something like their present form.
That requires changing practices and procedures, which is kind of like reforming the tax code. It means a grueling battle with those who benefit from the status quo, and it often means settling for less than the optimal result.
How could medical programs be cut without gutting their mission and leaving the beneficiaries unserved or underserved?
A simple but obvious change would be to rewrite the Medicare prescription drug law passed in the Bush administration, which prohibits Medicare from using its bargaining power to negotiate drug prices. Preventing Medicare from doing what the Veterans Administration and Wal-Mart do, to the benefit of their customers, is corporate welfare, pure and simple. A generation ago, Republicans would have been leading the charge against this sort of waste. In today’s House of Representatives, you couldn’t get it repealed.
We could also look at how other countries with universal health care seem to get the same or better overall health outcomes as we do at substantially less cost. That goes against the grain of American Exceptionalism, but if there’s one thing I learned in the business world, it’s that you should never hesitate to steal a good idea (unless, of course, it’s patented or copyrighted).
Other countries’ medical establishments seem to prescribe fewer drugs and to be slower to resort to surgery. Looking at those practices, as well as at regional differences in treatment within our own country, would surely generate some savings.
The wrong way of going about cutting the programs is the Paul Ryan approach — converting Medicare to a defined-contribution program run by private insurance companies. In addition to at least doubling the cost of administrative overhead, it would result in benefits being choked off to the sickest and neediest, creating, in effect, a wide range of private-sector death panels. I think everybody would see the difference.
Friday, April 15, 2011
We’ve all heard the old chestnut that if your neighbor is unemployed, we’re in a recession; if you’re unemployed, we’re in a depression. Trite as that is, it drives home the point that no matter what the economists say, economics is a personal matter, and we filter the big picture through the lens of our own experience.
I was reminded of that earlier this week when I went up to Santa Clara University to interview Alexander J. Field, a professor of economics who has written a somewhat revisionist book, A Great Leap Forward: 1930s Depression and U.S. Economic Growth. (It’s published by Yale University Press and available at Amazon.com.)
Field’s central argument, and it’s pretty convincing, is that the Depression years were paradoxically a time of great economic suffering and great technological and organizational innovation. He contends, in fact, that the 1930s were the most technologically progressive decade of the twentieth century because the advances occurred in so many areas.
Consider just a few: The invention of television, which made its debut at the 1939-40 World’s Fair in New York; dramatic gains in aviation; the development of a national infrastructure, including roads, bridges and tunnels (we haven’t built a large suspension bridge in this country since the Verrazano-Narrows Bridge in 1964). Within that decade, movies went from being black and white, stagey productions with scratchy sound to lavish Technicolor productions like Gone With the Wind and The Wizard of Oz. Automobiles went from being primitive, boxy rattletraps to streamlined machines with radios, heaters, V-8 engines and automatic transmissions.
Despite the common impression that no one was spending money back then, there was a strong market for the new technology. In 1929, only 3 percent of American households had a refrigerator; by 1941, 44 percent did. When nylon was created in the late 1930s, 63 million pairs of stockings were sold in the first year.
A late friend of mine, who grew up in the Depression years, told me on more than one occasion that if you had a job in those times (and in the worst of them, three quarters of the people did) you were all right because everything was cheap. Field says the economic data support that, and during the 1930s real wages went up for most people who had jobs.
Plenty of businesses were doing well. Newspapers and magazines were thriving; the number of scientists employed in corporate R&D quadrupled between 1929 and 1941; radio created lots of good jobs and wealth in the upper echelons. Where you were mattered, too. Hollywood was a boomtown for most of the decade, and so were the towns near major construction projects.
In 1938, when Shasta Dam was being built, the nearby county seat of Redding had 8,000 people and three daily newspapers. The John P. Scripps newspaper group, for which I later worked, moved in to start a fourth, The Record. It was profitable within months and is the newspaper that survives in that area today. That illustrates another point: Depression or no, people were starting businesses when they saw an opportunity.
Sometimes prosperity struck unexpectedly. Leonard Bernstein’s father invested heavily in a beauty-supply business just before the 1929 stock market crash. He figured he was ruined, but it turned out that no matter how tough times were, women set aside a bit of money to get their hair done. By the end of the 1930s, he was a wealthy man.
Economic misery was the big story of the Great Depression, but, as Field reminds us, the big story isn’t the whole story. It makes you wonder what revisionist historians will be writing about today’s economy 75 years from now.
Tuesday, April 12, 2011
One of the more curious aspects of the debate over government finances these days is the often-made claim that governments ought to manage their budgets the way families do. It’s curious because most of the people saying that are themselves members of families, but seem to be clueless about how real families operate in the real world.
Consider a hypothetical family we’ll call the Ryans. Joe has a full-time job with good wages and health benefits. Jane works half-time as a medical receptionist, bringing in enough money to cover the rest of the household budget.
They live from paycheck to paycheck, making ends meet. There’s some fat in the budget — piano lessons for the daughter, a camping trip in the mountains every summer, gym memberships, premium cable — but the overwhelming part of it goes for food, housing, fuel, utilities and clothing.
Sooner or later, a lot of stuff happens all at once. Joe learns that his contribution to the health insurance premium went up $200 a month. The roof on the house needs to be replaced, adding $250 a month to the monthly payment on the home equity line. Gasoline prices are up sharply, adding another $100 a month. Both cars break down and need a major repair, which means another $300 a month on the credit card bill, in order to pay it off in a year.
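Added up, that is a sizable monthly squeeze. A quick tally of the figures above (the labels are mine):

```python
# Tallying the hypothetical Ryans' new monthly costs, using the
# figures from the example above.
new_costs = {
    "health insurance premium increase": 200,
    "roof, via the home-equity payment": 250,
    "gasoline price increase": 100,
    "car repairs on the credit card": 300,
}
total = sum(new_costs.values())
print(f"Extra monthly outlay: ${total}")   # $850 a month to find
```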
What to do? Most families, I suspect, would take one of two approaches — perhaps a combination of the two.
The first approach would be to squeeze and fake, sort of like what Governor Schwarzenegger did in California over the past several years. Under this scenario, the Ryans would make some cuts they could live with (eliminate the vacation this year, drive a little less, drop the gym membership, but don’t, for God’s sake, cut the premium cable — that’s television), draw on savings, if there are any, and make the minimum payment on the credit card instead of paying down the auto repairs. They’d go on like this for a while and see how it went.
On the other hand, they might try approaching the problem from the revenue angle. Joe could put in for more overtime, apply for a promotion, or take a part-time job on weekends. Jane could try to get more hours at her job or find a full-time job. Most families I know would make whatever effort they could in this direction.
No family I’ve ever encountered would deal with this situation by having Jane quit her job, then cutting the food budget to nothing but rice and beans, and lighting the house with candles at night. Nor would Joe decline a scheduled pay raise on the grounds that the money belongs to his company’s shareholders, who would spend it more wisely than he.
Yet that’s pretty much what Republicans in Washington and Sacramento are doing by refusing to consider tax revenue as part of the budget solution. Congressional Republicans not only refused to allow the Bush tax cuts to expire, but have proposed cutting the top tax rate another 10 percent, claiming the difference can be made up by simplifying the tax code and eliminating loopholes. Every one of those loopholes, of course, has a devoted constituency that will fight, red in tooth and claw, to preserve it, so it’s unclear how well this approach will work.
If our crisis right now is the deficit — spending more than we’re taking in — then we should by all means cut what we can and take a hard look at the major expenses. But we should also bring in every cent of revenue we can and use it to make ends meet and pay down the debt. That’s elementary, and every family knows it.
Friday, April 8, 2011
Is it possible that one of the major problems with our system of government could be that we don’t have enough politicians? The idea is counterintuitive and liable to induce brain hemorrhage in some quarters, but it’s certainly in line with the original intent of the founders.
Article I, Section 2 of the Constitution states that the number of members of Congress shall not exceed one for every thirty thousand people. In the debate over the Constitution, it was presumed that the number might be bumped upward as America’s population grew.
Tench Coxe III of Philadelphia, who later went on to serve in the Washington, Jefferson and Madison administrations, wrote in 1787, “When the encreasing population of the country shall render the body too large at the rate of one member for every thirty thousand persons, they will be returned at the greater rate of one for every forty or fifty thousand.”
Well, it’s actually a far greater rate than that now. A century ago, Congress capped the number of representatives at 435, which means that following the 2010 Census, each congress member will be representing nearly 710,000 people — roughly the population of Charlotte, N.C.
Coxe, who was a Federalist and a partisan for the new Constitution, championed the rule of thirty thousand by arguing that if every congress member had to represent at least that many people, there would be too many voters in the mix to allow a handful of powerful men to dictate who would serve.
Neither Coxe nor any of the other Founders foresaw (and this is one of the big problems with the notion of original intent) how populous the country would become and the impact so large a population would have on congressional elections. With congressional districts of 700,000, a shoe-leather campaign is an impossibility. The only way to reach that many people is through intensive and costly media campaigns, ranging from direct mail to television.
The high cost of those campaigns vastly increases the influence of the political parties and wealthy contributors (corporate or individual) who want some sort of break from the government. In a cruel irony, too many voters create the same problem as too few voters — undue influence by moneyed interests.
Suppose, though, that we went back to the original intent of having members of congress represent a more manageable number of constituents — say Coxe’s outer limit of fifty thousand. A lot of good things could come of it.
To begin with, it would be a lot tougher for even the flushest of contributors to give meaningfully to the roughly 6,175 congressional campaigns that formula would produce. Let’s say a company had $5 million in its budget for buying — excuse me, contributing to the campaigns of — congress members. It could spend about $11,500 on each one now, which is certainly enough to have some influence.
With one congress member per 50,000 people, the same company could spend only $810 per candidate, a far less influential sum. And in districts of that size, candidates could do more personal campaigning and would be known to many of their constituents, who would be likely to run into them at the ice cream parlor or Chinese restaurant. It’s harder to slime a candidate or office holder in the media when that person is well known and has some standing in the community. Furthermore, that standing allows the office holder some leeway to make an unpopular decision or two.
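The arithmetic behind those figures is easy to check. A quick sketch, using the 2010 Census count and the hypothetical $5 million budget from the paragraphs above (the variable names and the rounding are mine):

```python
# Back-of-the-envelope check of the column's figures.
CENSUS_2010 = 308_745_538          # 2010 Census resident population
BUDGET = 5_000_000                 # hypothetical corporate campaign budget

# Today's House, capped at 435 seats.
per_rep_now = CENSUS_2010 / 435    # constituents per representative
spend_now = BUDGET / 435           # dollars available per campaign

# Coxe's outer limit of one representative per 50,000 people.
seats_coxe = round(CENSUS_2010 / 50_000)   # number of House seats
spend_coxe = BUDGET / seats_coxe           # dollars available per campaign

print(f"{per_rep_now:,.0f} constituents per seat today")      # ~709,760
print(f"${spend_now:,.0f} per campaign today")                # ~$11,494
print(f"{seats_coxe:,} seats at one per 50,000")              # 6,175
print(f"${spend_coxe:,.0f} per campaign in that House")       # ~$810
```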
All this is sheer fantasy, of course. It would have to be approved by Congress, and most people, politicians included, don’t voluntarily reduce their status or power. But the principle of bringing government closer to the people is not a bad one, even if it takes more politicians to do it.
Tuesday, April 5, 2011
In September 2003 Bill Clinton came to Monterey to do a program for the Panetta Institute. At the question-and-answer session afterward, he was asked, if you could have one thing in your presidency back to do over again, what would it be? His answer was instantaneous, and probably not what you expected.
Rwanda, he said. He was still haunted, nearly ten years later, by what had happened, by his failure to grasp the significance of it in time, and by his hindsight belief that a reasonable application of American force could have prevented at least the greater part of the genocide.
I was there when he said it, and have been flashing back on it a lot over the past couple of weeks as I follow the public discourse over President Obama’s handling of the situation in the Middle East, particularly Libya. Almost no one making noise — regardless of political persuasion — likes what he’s doing, but what Richard Nixon used to refer to as the Silent Majority seems to be largely supportive.
This may be one instance where the majority has it right. Most Americans know next to nothing about the Middle East, so the Obama approach, which could be characterized as cautious right-mindedness, probably strikes them as being a sensible way of dealing with an intractable problem. After all, the only alternatives are boldness or inaction. Boldness, when you’re flying blindfolded through a snowstorm, is not a good idea. Doing nothing can put you in the position of Clinton with Rwanda.
It’s probably not a bad idea to give a president the benefit of the doubt in the case of an unexpected crisis with a lot of unknowns. I believe George W. Bush was a bad president and fully expect history to vindicate me, yet I’ll cut him a break on a couple of things.
For instance, from the available evidence, it’s pretty clear that his administration took a dilatory attitude toward terrorism before 9/11 and didn’t act with urgency in the weeks before when there was a lot of chatter about something big about to happen. That said, it’s not at all clear that a more aggressive approach would have stopped the atrocity. It would have taken the ratiocination of Sherlock Holmes to guess at the actual plot, and mounting a response based on deductive inferences would have been a hard sell.
President Obama has had more stuff come at him out of nowhere than any president since Harry Truman, and it will probably be years before we can ascertain with any certainty how well he handled it. We live, however, in a society that wants its news and commentary right now, as things happen. Most of the time that simply leads to bad reporting and even worse opinion.
Rwanda was a fairly simple issue compared to what’s happening in the Middle East now. Clinton was an undeniably smart guy with good impulses. And he messed up. Every president does, on more than one thing. It’s the nature of the job. At least Clinton recognized it and admitted it.
If Barack Obama lives as long as he should, he’ll be able to look back on his presidency and see its successes and failures with some clarity. He’ll have regrets, to be sure, but will they be because he acted too boldly or too slowly? We certainly don’t know now, and perhaps the question we should be asking is the one Napoleon asked about his officer candidates: “But is he lucky?”
Friday, April 1, 2011
It occurred to me the other day that it has been a very long time since I’ve seen a news article about a libel suit filed in the United States. Could it be that libel, like privacy, is one of those relics of the twentieth century that has become irrelevant?
The internet has become the greatest destroyer of privacy in history. Google Earth can put your back yard, with its weeds and rusting vehicles, in front of billions. And if Facebook is any indication, many of us have become eager and willing assassins of our own privacy.
(Along those lines, a corporate executive I know says she routinely checks the Facebook pages of job applicants and is both astonished and appalled by what she sees there. Suffice it to say, half the people who apply for a job with her company effectively Facebook themselves out of the running with their spring break photos and drunkalogues.)
Still, you’d think libel would survive because it’s an injury inflicted by someone else’s publication or broadcast of a defamatory falsehood. In my newspaper days, there were hard and fast legal rules covering libel, and we all knew them well.
In California, if you published a story and the object of it considered himself libeled, he had several days to serve the publication with a demand for correction, clearly stating the error.
If the publication corrected the error in a timely fashion, the aggrieved party could sue only for actual damages. If the paper falsely reported that someone had been arrested for drunk driving, causing him to lose his job, the paper was on the hook for his lost wages and legal fees, but not for any punitive damages if a prompt correction was made. The correction was considered evidence per se of absence of malice.
An interesting sidelight: If the error was due to faulty information on an official document, such as a booking sheet at the jail, the publication was off the hook altogether, as the official document was considered privileged, like sworn testimony in court, and could be reported without regard to its actual truth. A few years before I got there, the paper I worked for made case law on that point.
Libel suits worried news organizations for the same reason any lawsuit does. Anybody can file one, and there’s no telling what a jury will do if it gets that far. My paper was once sued by a developer for $121 million for reporting on a community group’s criticism of the development. A judge dismissed the lawsuit after several months, but they were anxious months.
The threat of such suits, however remote, did keep us on our toes. Any time there was a controversial story affecting someone’s reputation, the editor insisted on doing the final edit, fine-tuning it for precision, accuracy and nuance. If nothing else, the possibility of a libel suit taught reporters one of the most important lessons in journalism: You are writing about real people with real feelings and real reputations, and you should never forget that.
Maybe there’s so much bad stuff out there about so many people that it just doesn’t matter any more. Maybe the fragmentation of the media has become so complete that no one thinks it matters what a newspaper or TV network says. If you think you’ve been libeled by Fox News, you’d have to figure it was seen by only a couple of million people, most of whom probably believe Barack Obama has no valid birth certificate. Sue the bastards? What’s the point?