The Founding Fathers Approved Individual Mandates

I am a bit late in discovering this, but evidently Harvard Law professor Einer Elhauge has unearthed a number of cases from the 1790s in which Congress — along with founding fathers in the other branches — passed measures requiring Americans to engage in certain types of commercial activity. Needless to say, the gravity of this sort of evidence, vis-a-vis the Supreme Court’s pending ruling on the health care law, rests on the fact that the court’s conservative-originalist wing has been sticking to the argument that a government-enforced individual mandate — the kind present in the health care law — is unsupported by the Constitution and without precedent. To show with concrete evidence that the founders did indeed back individual mandates on a number of occasions would seriously hurt that argument.

To Professor Elhauge we go:

The founding fathers, it turns out, passed several mandates of their own. In 1790, the very first Congress—which incidentally included 20 framers—passed a law that included a mandate: namely, a requirement that ship owners buy medical insurance for their seamen. This law was then signed by another framer: President George Washington. That’s right, the father of our country had no difficulty imposing a health insurance mandate.

That’s not all. In 1792, a Congress with 17 framers passed another statute that required all able-bodied men to buy firearms. Yes, we used to have not only a right to bear arms, but a federal duty to buy them. Four framers voted against this bill, but the others did not, and it was also signed by Washington. Some tried to repeal this gun purchase mandate on the grounds it was too onerous, but only one framer voted to repeal it.

Six years later, in 1798, Congress addressed the problem that the employer mandate to buy medical insurance for seamen covered drugs and physician services but not hospital stays. And you know what this Congress, with five framers serving in it, did? It enacted a federal law requiring the seamen to buy hospital insurance for themselves. That’s right, Congress enacted an individual mandate requiring the purchase of health insurance. And this act was signed by another founder, President John Adams.

I would guess that a capable lawyer would be able to point out why these three cases don’t amount to quite the smoking gun that they appear to be; the government’s admiralty powers might have covered the first and third actions, while the war powers might have covered the second — meaning the government could have imposed all three of these measures without even needing to give a Commerce Clause-type justification. Still, Professor Elhauge’s research exposes the ridiculous notion — often cheered in conservative circles — that the founding fathers shared a consensus viewpoint on every constitutional issue of their time and that 21st century judges presiding over cases involving the complex issues of a modern nation-state can somehow objectively know that viewpoint and apply it to workable legal rationales.

For practical purposes, it doesn’t even seem worthwhile to get into an argument over whether the government had legal grounds to take any of these actions in the 1790s. Should the government have required all “able-bodied men to buy firearms” back in 1792? I don’t really know. I can imagine Scalia and Thomas devising some explanation as to why that was not an individual mandate on a par with the one present in the health care law. And on an administrative level, it seems that an altogether easier solution for the 1792 government might have been to levy a tax and use the revenues to provide local militias with weapons.

All of that, however, seems to miss the more important point that we should not be arguing at such length about the legal propriety of things done in the 1790s, and the only reason that Prof. Elhauge and many of the rest of us feel drawn to produce and wade through this kind of historical evidence is that the pull of originalism has become so strong in our national discourse. The general thought is that if the founders wouldn’t have done x, then there needs to be some highly compelling reason to support the enforcement of x. That mentality — which might barely make sense in an isolated, hyperlegal context such as the Supreme Court — has unfortunately spilled over into the lawmaking process and created an unnecessary regressive hurdle to passing good legislation.

Which brings us back to the health care law. Anyone can spot the similarity between government forcing sailors to buy hospital insurance in 1798 and government forcing citizens to buy health insurance in 2014. The problem is not whether the analogy is legally valid; the problem is that that kind of analogy — which bridges more than 200 years of history and glosses over immense changes in our society — should not be the framework we use to assess the validity of our laws. Professor Elhauge’s research can be used to beat originalists at their own game, but perhaps more importantly it is an example of how originalism engenders a preposterous and unproductive manner of resolving legal questions nowadays. We shouldn’t need to reach into the trough of 18th-century history to fend off the claim that requiring 21st-century Americans to carry health insurance is a violation of their rights.

College Students Aren’t Reliable Borrowers

Perry Stein at The New Republic discusses proposed higher education cuts in Paul Ryan’s 2013 federal budget. The Ryan budget, which is still under review, makes two key proposals: to increase the interest rates on loans for low-income students, and to cut funding for Pell Grants — “the government’s flagship program in helping low-income students gain access to higher education,” according to Stein. The first proposal has apparently died on the vine, in part because of an effective Twitter campaign launched by the Obama administration, and in part, one must imagine, because of generalized anxiety over burgeoning levels of student debt. Stein’s argument is essentially that if the proposed cuts to Pell Grants were to go through, the successful campaign to keep interest rates flat might not end up amounting to much of a victory:

Under Ryan’s plan, the maximum Pell Grant award would remain the same as it is now at $5,550 per year, but the eligibility requirements would change so that fewer people would qualify. (He didn’t specify the income limit for eligibility.) And where Obama has increased the maximum Pell grant during his presidency (and proposes to have it rise with the Consumer Price Index for 2013), Ryan calls for the award’s amount to remain stagnant. In other words, under Ryan’s plan, Pell Grants would not keep up with the pace of inflation, and would be worth less in each successive year. Given that college prices will likely continue to rise, this means that needy students will become ever more reliant on loans to pay for their education.

What seems insidious about the Ryan proposal, though not altogether atypical of Paul Ryan, is the strategy of reducing eligibility rather than reducing award levels. One could easily imagine a money-saving scheme wherein awards were capped across the board at a smaller dollar amount but the eligibility requirements were left untouched. But Ryan’s approach would categorically disqualify many prospective college students from the possibility of getting any Pell money from the government (and thereby hurt their prospects of going to college), while leaving others in virtually identical economic circumstances to collect the full $5,550 per year. The distinction would seem to hinge on an as yet undisclosed cutoff in family income, and could turn poor families’ prospects of sending their kids to college into a perverse gambit of proving that their income falls below the Ryan threshold.

It’s not entirely clear to me why Paul Ryan tends to prefer eligibility-reducing schemes to outlay-reducing schemes. The same preference was behind his proposal to increase the retirement age for Social Security, where very much the same counter-argument could be made that reducing the across-the-board payout maximum would have been just as effective. Part of this tendency, I have to think, stems from a broader conservative belief that reducing eligibility for a social benefit is better than reducing the benefit itself.

A secondary issue, which is not Paul Ryan’s fault, is that the federal government no longer views the business of loaning money to college students as anything close to safe. That apprehension stems in part from the deflating economic value of a college degree; because of the growing density of college graduates in our society, it is getting harder and harder for new grads to achieve an earning level at which they can pay off their debt, which has the effect of leaving lenders (often the federal government) on the hook for longer. This is less of a problem, of course, for students who go into rapidly expanding fields such as engineering and information technology, but on the whole the glut of jobless or underemployed humanities majors is a factor affecting the perceived creditworthiness of college students. But it seems unlikely that any serious legislator would ever propose that government should dispense grants to students on the condition that they pursue a “low-risk” field of study. That would be a safer business model for the government, but it could never gain traction politically and might even increase the barriers for low-income students.

Something has to be done about the student debt problem, but the solution is not just to cut a huge number of people out of federal grant eligibility while maintaining current award levels for everyone else. Lending still has to be the core mechanism for financing higher education, and the key to a solvent lending operation is having reliable borrowers who pay back their debts. So, some constraints clearly need to be applied to federal lending to curb risk, but arbitrarily reducing grant eligibility just seems like another slap at poor people that accomplishes very little.

When You’re Strange

I had a chance recently to see Tom DiCillo’s 2010 documentary on The Doors, When You’re Strange, and while I’m sure there are already numerous reviews out there on the web, I thought I would throw a few of my own thoughts into the ring.

To get my one criticism out of the way first: while I did not live through the period in question and did not experience The Doors firsthand, the film seems to collate a lot of imagery that I am not sure naturally fits together, and that might not strike a person who did live through the era as a truthful representation of how things were. DiCillo goes to some length to situate the band’s rise and fall against the backdrop of the tumultuous late 1960s, and to what extent there is a meaningful connection between the assassinations of Dr. King and Robert Kennedy, the Vietnam War, and the youth movement, on one hand, and the persona of Jim Morrison and the music of the Doors, on the other, I know not. A basic knowledge of history would suggest that if all those phenomena were somehow critically connected, then so too must have been other phenomena that are barely touched upon in the film: the Civil Rights movement, the Feminist movement, and the politics of the Cold War, to name a few. There is obviously a need when telling a biographical story to place your subjects in a rich historical context, and needless to say The Doors’ era was a particularly eventful one, but a talented storyteller should be able to sort out which connections truly matter to the essence of a subject and which connections are coincidental and/or inconsequential. In telling a story about my own childhood in the early 1990s I could certainly throw in footage of the Gulf War, the ’92 election, and Nirvana, but I am not sure that any of that would help to truthfully portray what my childhood was like.

With that out of the way, I will say that I found this film to be an unusually immersive and edgy viewing experience, especially for a documentary. Some of that has to do with DiCillo’s decision not to use talking head interviews — a set-piece that is common to documentary filmmaking but that often has the effect of creating distance between the viewer and the subject and making the film more an exercise in analysis than experience. DiCillo also employs extensive live concert and recording footage and, from what I am told, rare footage that has never been shown to audiences, including scenes from a film that Morrison himself directed. Having your head held down in this alien-seeming era for 90 minutes straight is a sometimes jarring, sometimes blissful, and altogether unusual experience, for which DiCillo deserves a lot of credit. One could easily imagine this film proceeding in a more conventional way, with interludes from nostalgic seventy-somethings reminiscing about the good old days and offering their insights into who Jim Morrison really was, but there is something a bit more tactile about just being in the moment with Jim and the band, even if deeper insights are not as thoroughly entertained.

On that note it’s worth pointing out that, perhaps for good reason, this film is about Jim Morrison first and The Doors second. If DiCillo proposes any thesis, it’s that Morrison was a dangerous, rebellious, charismatic, ultimately self-destructive, but thoroughly brilliant figure who both affected and embodied 1960s culture by keeping himself on the edge — through a combination of drugs and natural talent — and then pushing himself over. That may not be much of a novel assessment for people who are familiar with the band and/or lived through the era, but there can be little doubt that Morrison is a dynamic enough subject to compensate for conventional views on who he was.

Given DiCillo’s loose leash on live performance footage — including a brilliant scene in which narrator Johnny Depp breaks down the musical qualities and tendencies of each of the band members over a live performance of the song “People Are Strange” — this film will no doubt appeal more to fans who simply love the music than to first-timers who are interested in learning something new. But even for those who are familiar with the band’s songs but less so with the band itself, the film is worth watching for the depiction of Morrison, who is by turns captivating, revolting, and heartbreaking.

Whether one buys into or rejects the mystique of Morrison, it would be hard to quibble with the assessment by Morrison’s estranged father that his son had “unique genius which he expressed without compromise.” For capitalizing on that sentiment more than on any cultural zeitgeist from the era, perhaps, DiCillo deserves our thanks for putting us in the company of a genius for an hour or so.

Things American Consumers Have to Buy

Kevin Drum takes another stab at resurrecting the argument that the individual mandate ought to be constitutional because there are plenty of other things the federal government requires us to buy:

When I bought my last car, for example, I was forced by federal law to also buy seat belts and air bags — and as far as I know, no court has ever suggested the federal government lacks this power. Why?

Technically, of course, the government isn’t forcing me to buy these things. I could, if I wanted, forego the purchase of a car. This isn’t very practical where I live, serviced as I am by a single bus line that comes by once an hour, but I could do it. I could also move someplace with better transit. I’m not absolutely mandated to own seat belts and airbags.

But in real life, the fact is that most of us need a car. It’s only an option in the most hyperlegalistic sense, which means that for all practical purposes the federal government has mandated that I buy seat belts and airbags. And they’ve done that on the theory that even if I don’t care about my own safety, other people might ride in my car and they deserve protection. What’s more, taxpayers could end up on the hook for medical care if I injure myself and my passengers. So seat belts and airbags are the law.

That strikes me as empirically correct, although it’s worth pointing out that since the Supreme Court is a court, hyperlegalistic arguments are never out of the question. One could ask whether a Supreme Court justice’s thought process ought to involve a certain degree of practicality (i.e. concern for the everyday realities of the average American citizen likely to be affected by a ruling), but that seems like shaky footing to me. Once you start imploring judges to be “practical,” or to empathize with real-life issues, you jeopardize legal objectivity — the sacred foundation of our judicial system.

Whatever the practical considerations, I don’t see any reason why a judge couldn’t draw a distinction between the purchase of seatbelts and the purchase of health insurance on the grounds that the former is a conditional requirement (you only have to buy this if you buy a car) while the latter is an absolute requirement (you have to buy this, period). That’s the kind of hyperlegalism Drum seems to be talking about, but I can’t imagine any of the justices having any qualms about making that kind of argument. Indeed, the oral arguments from a few weeks ago suggested that a few of the justices are already thinking in those terms.

Of course one could then ask why, if the federal government can’t enforce absolute requirements in the marketplace, it can enforce conditional requirements at all. That is, why can’t we buy cars free of seatbelts and airbags? That question might lead down a long path of legal precedent and the history of the regulatory state, but I don’t think that has anything to do with the absolute requirement that is the individual mandate. The fact remains that no one has to buy seatbelts because no one has to buy a car, but the same would not be true of health insurance if Obamacare were upheld.

Hyperlegalism can be a bane to those of us who live in the real world, but the reasons for sealing off Supreme Court judges from practicality are numerous and well documented.

Evaluating the “Creative Monopoly”

David Brooks takes a look at Peter Thiel’s career path and concludes that instead of becoming better competitors, we need to become better creative monopolists:

In fact, Thiel argues, we often shouldn’t seek to be really good competitors. We should seek to be really good monopolists. Instead of being slightly better than everybody else in a crowded and established field, it’s often more valuable to create a new market and totally dominate it. The profit margins are much bigger, and the value to society is often bigger, too…

You know somebody has been sucked into the competitive myopia when they start using sports or war metaphors. Sports and war are competitive enterprises. If somebody hits three home runs against you in the top of the inning, your job is to go hit four home runs in the bottom of the inning.

But business, politics, intellectual life and most other realms are not like that. In most realms, if somebody hits three home runs against you in one inning, you have the option of picking up your equipment and inventing a different game. You don’t have to compete; you can invent.

The notion that our competitive instincts overpower our creative instincts might be true on a biological or quasi-sociological level. But I don’t think it’s reasonable from an economic standpoint to say that progress is built on brilliant one-off ideas in novel fields rather than on incremental improvement in established fields. Certainly, great ideas contribute to progress, as in the case of the airplane, the printing press, and the wheel, but it’s awkward to pin a theory of economic progress to the principle that as soon as you find you’re not the best at something, you should jump ship immediately and try to develop a great idea that’s totally new. That might be a useful framework for thinking about Peter Thiel’s career in retrospect, but it doesn’t provide much of a prescription for others who are, whether by choice or fate, in the competitive arena.

It’s also worth pointing out that while the rate of invention in our society has sped up remarkably over the past few decades, the value of invention has declined rather severely. Almost every day it seems we are talking about a new app or web service or device that has just popped up on the map, but I think David Brooks would agree that very little of this “innovation” qualifies as “picking up your equipment and inventing a different game.” Even if you think about the hallmark inventions of the past decade — Facebook, the iPod, Twitter, etc. — these are not really new ideas; at best, they are technological derivations of old products in established fields. The internet was a radically new idea, as was the first computer, but a lot of the inventions we have seen in the technology field since then just aren’t new in the same way — in the same way that the invention of the modern airplane engine doesn’t measure up to what the Wright Brothers accomplished at the turn of the century.

That doesn’t mean there hasn’t been any brilliance in the last decade of inventing, but it’s important to maintain an appropriate historical perspective on these kinds of things. We are in a post-Internet, post-computer age of high-volume, low-creativity inventing, and it is questionable how long this phase will last and what kind of shelf life the major inventions of this era will have. The airplane has been around for 110 years. The wheel has been around for some 6,000 years. Will Facebook last as long as either? The fact that new technology has made it easier to bring old, analog products and services into the digital domain is a great advancement in itself, but to say that PayPal, which is little more than a digital extension of the credit card, is “something so creative that you establish a distinct market, niche and identity” seems a little heavy-handed to me. No one can doubt that PayPal and many of its digital-age peers have turned out to be useful, profitable ideas, but as Brooks himself points out, that’s entirely separate from the notion of a creative monopoly. Viewed in the proper context, Thiel seems much more like a tactical incrementalist — hitting home runs in the same game that people have been playing for a while now — than a great creative mind.

But the rich and successful always get to write their own biographies — a nice reminder that beating out the competition still counts for something.

Our Hunger, Their Game

Friends of mine who have read The Hunger Games books tell me that it is a story about corrupt government, coercive economic planning, and the wonders of self-sufficiency. If that’s true, then it would appear that author Suzanne Collins has taken up the mantle of Ayn Rand in producing libertarian fiction for the masses. Indeed, certain Rand-ian elements show up in this first cinematic installment of The Hunger Games: inane government administrators, virtuous country folk, and the triumph of independence over coercion, to name a few.

Yet, unless Hollywood has engaged in some major narrative obfuscation (not out of the question), there is little to suggest that this series is primarily about social theory — libertarianism or otherwise — as was unmistakably the case in Rand’s work. Rather, the first Hunger Games film seems to be a story primarily about physically talented youths who get plucked out of their rural hometowns, polished up, brought before the social elite, and then fashioned into entertainers of a very particular sort — inflicting violence on one another to satisfy the latent bloodlust of society’s leisure class. If that story arc sounds familiar, it may be because you know something about the world of professional sports in America.

Certainly, there are other ideas at work in this film. We can’t hold back our sense of shame, for instance, over the way these kids, who seem naturally inclined towards teamwork and self-discipline, are brainwashed into a vicious winner-take-all mindset and then exploited to the point of their own demise. It would be hard not to see that as a critique of run-amok capitalism, and if so, it would counter the suggestion that Suzanne Collins will be voting for Mitt Romney this November. But for whatever reason — familiarity, excitement, box-office sales — this film seems most clear-headed as an indictment of the ways in which the professional sports industry and class divisions mutually reinforce each other in our society.

For evidence, look no further than the film’s hero and moral center, Katniss Everdeen (Jennifer Lawrence) — a humble young woman who sacrifices for her family, sticks by her friends, controls her passions, and wields a bow better than most people do a fork. Katniss is at once the all-American girl and the athletic icon whom we have all come to adore; she is how we would all like our daughters and sisters to turn out. She also happens to be quite poor (a recurring flashback shows her friend, Peeta Mellark, tossing her a loaf of bread as she sits helplessly in the rain), and her entry into the Hunger Games is less a matter of athletic accomplishment than lack of better alternatives. Still, we cannot help rooting for her to demonstrate her prowess, slay her competitors, and emerge the victor.

I certainly found myself rooting for her in just that way, and as the film wore on I began to wonder how much of the suffering and angst she had to endure was also true of real-life sports figures I have come to admire. A typical sports broadcast shows top athletes in top form — fearsome, strong, elegant — but we typically do not see what they go through when they are off the field. Pain, anxiety, and discomfort must account for some of it, as it does for Katniss and her peers. The tribute ritual, whereby the players are selected for the games, seems like a thinly-veiled analogy for professional sports drafts, and the scenes in which Katniss and the other young competitors are trotted out on stage to conduct interviews with the moronic Hunger Games commentator (played by Stanley Tucci) could have been lifted right from ESPN. The physical skill of these kids, it seems, just isn’t enough for us — we want to control their fate, and then we want them to charm us.

In the film it’s clear why the kids play the Hunger Games — they have no other choice. In real life, by contrast, we are able to carry on the illusion that the men and women who dazzle us on television and in arenas across the country are doing so not out of coercion, but out of passion for the game. That may be true on the margin, but it’s undeniable that professional sports serve audiences and owners well above players. What has long been true is that gladiatorial entertainment unites us as a people — far more, in fact, than any one theory about political or social order that might be floating around in this film. We yearn to watch our fellow humans do battle with each other for sport, and the more gifted the battlers, the more enticing the spectacle. There is no language that we speak better than that of victory and defeat, no concept we grasp better than that of pain and triumph, and no pastime we cherish more universally than sport. After all of our bickering over politics and the well-being of our country is done, what do we all sit down and do? Watch sports. And that’s where Katniss Everdeen comes in. To us she is an athlete, a role model, and a heroine; but the blunt reality, which we are reminded of in this film, is that she is little more than an entertainer, and one whose shelf life will probably prove quite fleeting.

That is not to say that professional sports are evil, but if The Hunger Games is clear on any one point, it is that the immortality of violent sport rests on the demand among those who do not play rather than on the passion of those who do. The character of Haymitch Abernathy (played by Woody Harrelson), Katniss’ grousing mentor and a former Hunger Games winner, provides a nice coda to this point. What becomes of our greatest victors? Invariably, they become drunken louts, or descend into obscurity, or some combination thereof. In no time, we fear, the great Katniss will walk in Haymitch’s shoes.

Government Budgeting vs. Household Budgeting

One of the more insidious fiscal conservative talking points is the notion that the federal government should manage its budget as if it were a household. Without getting into the accounting minutiae, this sort of argument assumes that all federal expenditures are simple capital outlays and that the government does not invest in anything that enhances the well-being of the country. A person might be philosophically disposed to think this way, and to argue either that the government shouldn’t spend a dime on anything or that it should invest only in certain sectors and industries. But to say something along the lines of “government shouldn’t add to its debt because you wouldn’t run your household that way” is junk.

Matt Yglesias paints what I think is a useful — and fairly alarming — analogy that exposes the error in this thinking:

Think not about a household, but a firm. For example a leverage buyout firm like Bain Capital that Mitt Romney used to run. When Bain was founded, it had no debt — it was brand new. But it right away set about to indebt itself. Indeed, piling on debt is integral to the whole private equity business model. First you go out and raise some equity capital from investors, and then you go to a bank and raise a bunch more money by borrowing it. If your firm grows and thrives, the nominal aggregate debt load will keep going up and up. That sounds insane until you remember that Bain isn’t just borrowing money to throw wild cocaine-fueled parties. It’s buying companies. On the one hand, you add a bunch of debt to the liabilities side of the balance sheet. On the other hand, you add a bunch of companies to the assets side of the balance sheet. And the business proposition is that through some combination of management expertise and financial engineering Bain causes the average value of its assets to exceed the average value of its liabilities.

I don’t think the problem is that fiscal conservatives fail to understand the way balance sheets or private equity firms or households work; I think it’s that they have a philosophical disagreement with the Obama administration over where and how much money to spend. That’s fair, and it would be great given that it is a presidential election year for the candidates to engage in a serious debate over spending priorities. But if the disagreement is purely about different spending philosophies, then fiscal conservatives should just come out and say that, rather than dallying around with a fallacious analogy that illuminates precisely nothing about the way that governments or households run their budgets.

I think it’s fair to say that Mitt Romney and President Obama do have this kind of philosophical disagreement, and it would be reasonable for Romney to argue that the government should only spend money on national defense and safety nets for the elderly, for instance, because those are the only two areas of real philosophical importance. But instead, a lot of the rhetoric from his camp has been about how government can’t keep on borrowing and spending this way because, well, just imagine if your family kept on borrowing and spending this way. It’s effective campaign gimmickry, but it’s not going to be very hard for President Obama or anyone else to simply point out that the federal government works differently from your parents’ house. And as Yglesias rightly says, “Romney should be almost uniquely well-situated” not to fall into this trap.

The Difficulty of Defining Political Centrism

A common refrain heard during presidential races is that candidates must pander to the party extreme during the primary season and then lurch back towards the center after winning the nomination. Mitt Romney, having all but wrapped up the nomination, has already begun this radical and potentially painful change of course.

To whatever extent this conceptual framework accurately describes candidate behavior in election seasons, it is important to realize that it also assumes, none too subtly, that centrist voters — however elusive and poorly defined — are the gatekeepers to the presidency. In a NYT Op-Ed this morning, Bill Keller presents a rough snapshot of the centrist platform:

The middle is not the home of bland, split-the-difference politics, or a cult that worships bipartisan process for its own sake. Swing voters have views; they are just not views that all come from any one party’s menu. Researchers at Third Way, a Clintonian think tank, have assembled a pretty plausible composite profile of these up-for-grabs voters…

Swing voters, I think, are looking not for a checklist of promises but for a type of leader — a problem-solver, a competent steward, someone who understands them and has a convincing optimism. We don’t know exactly how they identify that candidate, but it is some mix of past performance (especially for the incumbent), campaign messaging and chemistry.

Perhaps indirectly, Keller hits here on one of the most important — and confusing — aspects of defining swing voters when he says that their views are not “views that all come from any one party’s menu.” That commentary underlies what might be called an additive analysis of centrist political identification: centrists, failing to have a fully defined platform of their own, selectively borrow from the platform of each of the two major parties to inform their electoral behavior.

Thus you get voters who are, so to speak, on the fence. They have one foot on the left and the other foot on the right. They typically take much longer than party elites to make up their minds about which major-party candidate to support. Most importantly, they can be captured.

This is a familiar way of understanding centrists, but not the only way. In what might be called the subtractive analysis, centrists are thought to exist in a kind of ideological void between the two political parties. The thinking goes like this: if you pick out any two Democrats, there is a high probability that they will share many of the same political preferences. To some extent, this is a tautological certainty; if person x and person y weren’t both Democrats, they wouldn’t share the same views on issues a, b, and c, but since they do share the same views on those issues, and since those views form the basis for the Democratic party platform, they must both be Democrats.

As pundits and pollsters well know, no such uniformity can be imposed on centrists. If you pick out two voters who both identify as centrists, it is doubtful that they will have the same broad level of agreement as two Democrats or two Republicans will. They might agree about abortion, or gay rights, or taxes, or fiscal policy, or military spending, but it is unlikely that they will agree on all — or even many — of those issues (and if they did, they would most likely fail to be centrists). Thus, it is unclear in what sense two or more centrists can be said to “belong” to a party, let alone the same party, regardless of whether that party be termed “centrists,” “moderates,” “swing voters” or something else. And a party whose political existence is in question cannot have a platform, contrary to Keller’s suggestion.

What Keller gives us is a smattering of the political tastes found among voters in the inter-party void: some are fiscal conservatives, some are economic moderates, some are socioeconomically aspirational, some are defense-minded, and some are socially progressive. I have no doubts about the ability of polling to reveal this kind of information, but I have grave doubts about the implication that this is an exhaustive — or even an indicative — platform for some well-bounded group in the center of the political spectrum. Without a doubt, there are voters who resist calling themselves Democrats or Republicans, who faithfully report to the polls on election day, and who don’t know many (if any) people who share the same views as them on all the major issues. These voters find themselves forced into an intellectually chaotic void by the fixed ideologies found in each of the major parties. We can certainly lament the fact that being a social progressive/fiscal conservative does not translate into membership in either of the two major parties, but we most definitely cannot talk about how centrists are some third political group in between Democrats and Republicans.

What appeals to me about the subtractive analysis is that it explains why the oft-invoked act of “capturing the moderates” is so important in political races (particularly presidential races). If Moderate were a distinct political party, there would be no real hope among Democrats or Republicans of capturing Moderates, any more than a Democratic candidate would hope to capture Republican voters. But since moderates can be understood as a sort of mass of differentiated plankton swimming around the pool between Republicans and Democrats, we can see why it is so vital for major-party candidates to scoop up as many of these voters in their nets as possible.

The temptation among political scientists to ascribe, in Moneyball-like fashion perhaps, a stable, itemized platform to Moderates will remain, just as the desire among major-party candidates to understand and sympathize with these voters will remain. But we must not let the conceptual tendency to group and define cloud our thinking about who and what moderates are. They exist because of the ongoing ideological rigidity of the two major parties, but they cannot be said to be a party of their own.

Ozzie Guillen’s Big Mistake

There can be no doubt that Ozzie Guillen, the recently suspended manager of the Miami Marlins, made a gaffe that will cause him and his team pain for some time to come. However, Guillen’s punishment — both his five-game suspension and his battering in the media — has very little to do with Fidel Castro, whatever his true feelings about the Cuban dictator might be. Jon Friedman of MarketWatch explains:

The Major League Baseball team, which had been known as the Florida Marlins until this season, wants desperately to maintain the loyalty of Miami’s large and influential Cuban-American population. It has a new team name, a new ball park and, it hopes, a new, Miami-centric image.

The team has had poor attendance records despite the fact that it won the World Series in 1997 and 2003. The management wants very much to reach out to the sports-loving population.

The team signed such star free agents as Jose Reyes before the start of the 2012 season as a way to convince the public that it’s serious about presenting an exciting, winning baseball squad. It hired Guillen, of Venezuelan descent, partly to reach out to the Spanish-speaking population.

But the death knell of a sports franchise’s prospects is having detrimental community relations. The team’s ownership had to show the community that it recognized that Guillen’s off-the-cuff comments in Time magazine had hurt many people.

By speaking so recklessly, Guillen hurt a large segment of Miami’s Cuban-American population, a good number of whom either fled Castro’s oppressive regime in Cuba or know well people who were killed during his five-decade reign or were forced to escape.

It’s important to understand that since neither the Miami Marlins nor the MLB is a political entity, and since neither has a knowable stance on Fidel Castro, this case is decidedly not about protecting some political viewpoint. Whatever Guillen actually thinks about Fidel Castro is just as irrelevant as what the Marlins’ management actually thinks about Fidel Castro; the only relevant point is what the Marlins’ fans think about Fidel Castro, and by now those feelings are pretty well documented.

However this scandal affects the Marlins’ season or Guillen’s career, it is important to grasp what is being punished here and why, and what that says about the morality of profit-driven sports businesses. There is nothing in the Miami Marlins’ bylaws that says that all persons associated with the organization must denounce Castro. Nor is there anything in the bylaws of any other professional sports franchise that requires employees to hew to any other political stance. Ozzie Guillen is not being suspended because of a political difference of opinion, nor even because he violated a stated team code. He is being suspended because he offended his employer’s client base, and as in any other business, that will likely have a negative impact on revenue, even after he returns from his suspension. Guillen didn’t violate the Castro clause; he violated the public-image clause — the most serious of sins in the professional sports business.

That said, I think MLB commissioner Bud Selig’s decision to weigh in on this situation is precarious to say the least. Individual clubs ought to decide what to punish their players for, and the league office ought to steer perfectly clear of reprimanding players and managers for embarrassing PR blunders. The next political sound bite might not be as flagrantly ridiculous as this one, and Bud Selig really can’t get into the habit of issuing statements on the propriety of each and every player’s political views, however poorly conceived or articulated. The MLB trying to put out fires on behalf of embroiled ball clubs is a dangerous and counterproductive exercise, not to mention bad for baseball’s image.