Sunday, November 7, 2010

Historical gambling (30 January 2007)

Tyler Cowen links to this WashPost article about historical gambling, a potential future offering at Virginia's horse racing tracks.
Colonial Downs, which offers betting on horse races at 10 sites across Virginia, is pushing for changes in state law so that it can offer a new form of gambling, called historical racing, on which people wager on horse races that have already taken place.
Like Tyler's, apparently, my first response to this was "Wow. That sounds pretty stupid." But that's because I'm thinking that the pleasure of gambling on horse races lies in the number-crunching and theorizing that handicapping consists of, and then in the drama of seeing your theories tested as the races happen live. And, actually, I'm thinking that gambling on races when you already know the outcome wouldn't be much fun because you wouldn't ever lose. Where's the joy in that?

That's when it becomes pertinent to consider historical gambling's other, far more accurate moniker: instant gaming. The machine picks an old race, and you instantly bet on a horse based on the odds at the original post time. I'm assuming the specific data (names of horses, date, track location, etc.) is scrambled so you can't fire it into a BlackBerry and get the results. Then you instantly see whether you made the right choice and get paid out according to those odds. It follows the same Pavlovian mechanism of instant gratification that scratcher tickets exploit, only with a thoroughbred racing theme and even more unpredictable expected returns. (Original odds are based on pari-mutuel wagers; applying them to a single bet seems almost arbitrary. The bet, the odds, and the outcome would seem to have an almost random association in instant gaming.) The charade of handicapping involved in this allows track owners to argue (with a straight face, even) that instant gaming is a game of skill and not chance, unlike slot machines, which, incidentally, are prohibited at Virginia's tracks. It's clear why track owners want slots. They want to transition out of the dying business of thoroughbred racing (R.I.P. Barbaro) into the always profitable business of straight gaming. These machines would likely be more profitable than straight slots because with the fluctuating odds, it seems it would be difficult to mandate a reasonable payout percentage. (Slot machines in Las Vegas return around 90 cents for each dollar played on them; Nevada requires that they be set up to pay out at least 75 percent of money put in.) Players would certainly have a tough time deducing their expected returns.
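To make the payout arithmetic above concrete, here is a minimal Python sketch. The 90-cent Vegas return and the 75 percent Nevada floor come from the paragraph above; the per-race returns for the instant-gaming machine are invented, meant only to illustrate why a fluctuating effective payout is hard to audit.

```python
import random

def expected_return(payout_pct, amount_wagered):
    """Dollars a player can expect back, long run, at a given payout percentage."""
    return payout_pct * amount_wagered

# Figures cited above: ~90% actual Vegas slot return, 75% Nevada legal floor.
vegas_slot = expected_return(0.90, 100)    # about $90 back per $100 played
nevada_floor = expected_return(0.75, 100)  # legal minimum: $75 per $100

# Instant gaming pays at the original pari-mutuel odds of each archived race,
# so the machine's effective payout percentage varies race to race.
# These per-race returns are hypothetical, for illustration only.
random.seed(1)
per_race_returns = [random.uniform(0.60, 0.95) for _ in range(1000)]
effective_payout = sum(per_race_returns) / len(per_race_returns)

print(vegas_slot, nevada_floor, round(effective_payout, 2))
```

A regulator can mandate the 75 percent floor for a slot machine because its payout table is fixed; for the instant-gaming machine, the effective payout is an average over whichever archived races the machine happens to draw.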

But reading about this made me rethink which form of gambling is more "dangerous" to the gambler, the game of pure chance, which inspires the average gambler to superstitious flights of fancy, or the game that promises the gambler a chance to use his skills, which probably leads to his overrating his responsibility for success (like the traders Nassim Nicholas Taleb likes to excoriate) and ignoring the amount of chance still involved. Is it better to handicap and lose, or guess randomly and lose? Handicapping horse races (and making sports book bets or playing poker) invites you to stake intellectual capital along with cash, and the former can be hard to recoup. At least losing at a slot machine won't bruise your ego along with emptying your wallet.

The libidinal economy of "Simon Says" (29 January 2007)

Even though it borders on imbecilic, I have long been a fan of the song "Simon Says" by the 1910 Fruitgum Company. I used to think this was because the song resisted any kind of interpretation, and thus hearkened back to a time before the pain of metaphor, which is always a way of inscribing absence. Consider the lyrics:
I'd like to play a game
That is so much fun,
And it's not so very hard to do,
The name of the game is Simple Simon says,
And I would like for you to play it too.

Put your hands in the air, Simple Simon says,
Shake them all about, Simple Simon says,
Do it when Simon says, Simple Simon says,
And you will never be out.

Now that you have learnt
To play this game with me,
You can see it's not so hard to do,
Let's try it once again,
This time more carefully,
And I hope the winner will be you.
Lots of bubblegum songs employ pretty obvious double entendre: "Chewy, Chewy" and "Yummy, Yummy, Yummy"; "Wig Wam Bam" and "Little Willie" -- even prepubescents can probably decode these. But as far as I can tell, Simple Simon is not a metaphor for anything; the song is just about playing Simple Simon, and it gives you the directions. I used to marvel at the restraint the studio musicians who made up this band must have exercised to adhere to such a risible premise, and how this restraint translated into a high formalist art -- "Simple Simon" as a pop-music Rothko. But now that friends of mine are having babies, I realize that all music made for infants exercises this kind of literalist restraint, so there must be something more that makes "Simple Simon" stand apart from, say, the songs Barney the purple dinosaur sings.

Perhaps it is this: by avoiding any kind of metaphoric possibility, "Simple Simon" wants to spare you any ambiguity, any chance of misinterpretation. There is no space between desire and action for anxiety to develop. This suits the subject matter of the song perfectly, as it's about a game that glorifies the art of taking orders, that rewards blind obedience to authority, that structures one's actions as someone else's desire, thus absolving us of responsibility. It promises a world of perfect order, in which one's responses can be completely controlled, in which nothing is involuntary. "Simple Simon," then, is about the simple pleasure of total submission, which makes it far kinkier than "I've got love in my tummy." Deleuze would perhaps note that the winner of "Simon Says" is the person who submits most perfectly, not the person issuing the orders, who in this arrangement is deprived of all possibility for joy and reduced to performing rote bureaucratic functions. "Do it when Simon says and you will never be out" -- does this extend the promise of jouissance to the perfect masochist?
Perhaps I need to rethink "1, 2, 3 Red Light" along these lines as well....

Let them eat brands (26 January 2007)

When most people are confronted with the plight of the poor, they are perhaps overwhelmed with sympathy for their sufferings and filled with shame at the world's structural inequities. But when marketers notice the struggle and squalor of the lower classes, they see opportunity: "How can we extract from them what little they have for products whose significance we inflate?" Today's WSJ has a piece about advertisers targeting the poor, called "Marketers Pursue the Shallow-Pocketed." With the middle-class and luxury markets saturated, what remains untapped is the potential of the poor, who as a class make up for what little they have to spend with their sheer numbers -- a lot of poor people buying a little is just as good as a few rich people buying a great deal. This, anyway, is the pet idea of economist C.K. Prahalad, who peddles the notion that the poor will see improvements in their lives if businesses begin to cater to them and try to hawk them branded goods. The poor get flattered by the recognition of their special needs and integrated into the market (the only relevant social institution), and the businesses fatten their profits: win-win. It seems constructive when companies seek to give the poor opportunities to subvert some of the inequities of the credit market by offering them alternative means to pay, but offering them an opportunity to participate in brand culture seems counterproductive -- brands are about making the class structure visible, not effacing it. (Though brands associated with the poor would allow some sympathetic middle-class slummers to feel faux solidarity with the poor by using them, similar to how I earned all that street cred and deep understanding of black culture as a teenager by drinking Old English 40-ouncers in the rural Pennsylvania town where I went to college.) And there may be some bumps in the road, as when advertising people see just how destitute the poor are:
But communicating with low-income groups remains something of a mystery for multinational firms. Marketers and ad agencies are full of well-educated and well-off employees who know little about how the other half lives. A trip into slums or lower-class neighborhoods is frequently a "mind-blowing" experience, says Johnny Wei, Nestlé Brazil's director of regionalization and low income.

Another problem is the inconvenient fact that many of the poor are illiterate: "Illiteracy is one big challenge. In Brazil's northeast, Unilever solved that problem by launching a brand of soap called Ala. 'It's three letters, and two are the same,' says Fabio Prado, Unilever's vice president of marketing for Brazil." So apparently the poor are protected from marketing by the inconceivable misery of their lives and their lack of the basic education necessary to participate in the mainstream of public life. Perhaps I should try to remember that when I'm dreaming about being assaulted with fewer advertisements; abject poverty might be the cost of that dream.

More about consumption inequality (25 January 2007)

At Marginal Revolution, Tyler Cowen has a few posts about his skepticism regarding the significance of income inequality. The first is a link to his NYT column that wants to exonerate politicians and businesses of most of the responsibility for the fact that income inequality is worsening (it's 75 percent demographics -- people who have lived longer have had a longer time for their fortunes to go separate ways, and as the populace becomes more educated it is more likely to show the income effects of different lifestyle choices). Inequality is pretty much inevitable if we want to respect the differences in individual motivation, a point borne out, in Cowen's opinion, by the fact that measured inequality in happiness among different classes hasn't worsened. Some people are made happy by money; some are made happy by being "relaxed bohemians." Who isn't happy? Only the destitute, who are essentially exempted from consideration in the inequality comparisons Cowen pursues here because inequality is not worsening for them -- perhaps because their bare-subsistence situation can become no worse.

That's where his other post comes in, which is a tentative defense of making comparisons of what people consume rather than what they earn. Usually, free marketeers will point to a statistic that shows poor people buying a lot of some inessential good (big TVs, PlayStations, etc.) and declare income inequality insignificant, while implicitly suggesting the hypocrisy of these people with their luxury goods who pretend to be poor and cry for government handouts and lifestyle subsidies from hardworking taxpayers. Thus those people at the bottom, regardless of alleged inequality, are having their lives improved because their purchasing power is increasing with the falling prices of goods, and bare subsistence -- the minimum quality-of-life expectations in America -- now includes many things that were once considered luxuries. (We all have running water and refrigerators now, and the richest of our great-great-grandmothers didn't, this argument goes, so why are we complaining?) In the post linked above Cowen responds to the outrage at this libertarian argument expressed by John Quiggin and Henry Farrell at Crooked Timber. Quiggin's contention is that cherry-picking certain consumer goods for this kind of argument ignores the relative prices of all the goods a household needs to survive and perhaps better itself, if it is so fortunate. The fall in TV prices doesn't really compensate for the explosion in health and college education costs, to use Quiggin's example. You could throw in the cost of credit, housing, and transportation, too. So the purchasing power of the poor doesn't necessarily increase when flat-screen TVs and cell-phone plans become cheaper, and it certainly doesn't imply an equality with middle-class suburbanites who share an appetite for these things. Cowen points out that "it can be argued that 'TVs are not enough,' but that is not reason to reject consumption data out of hand. It is a reason to look at more categories of consumption."
This still won't capture the experience of class difference, though, or the impact of inequality -- subjective qualities that always escape quantification in economic statistics. It may be that as you move down the class ladder, such consumer goods go from being taken for granted and requiring no compromises to being crucial aspirational symbols requiring a great deal of sacrifice. The PlayStation 3 may hold a greater significance beyond its function for the poor than for the middle class, but I'm guessing that doesn't really compensate for the class differences. The goods are also necessary in fundamentally different ways, ways that cement class differences and limit mobility -- the way the urban poor wear brand-name clothes will not be mistaken for the way the upper classes wear the same brands, and this distinction serves as a tangible marker that helps keep the classes separate. This is true of most consumption goods; the manner in which the goods shared among the classes are consumed seems to define and illustrate class boundaries rather than erode them, as the libertarian argument implies.

"Mellow out or you will pay" (24 January 2007)

"Pathologies of Hope," Barbara Ehrenreich's editorial in the most recent Harper's, seeks to throw some cold water on the budding positive psychology movement (detailed in this NYT magazine piece), which she argues is basically a call to narcissistic selfishness, if not more useless self-blaming advice along the lines of Who Moved My Cheese? The "insight" of that slim pernicious volume is that change in business is inevitable and unstoppable (the dumbed-down version of what Schumpeter called "creative destruction") and it is incumbent upon you not to ask why things are changing as they are but to meekly adapt to whatever they happen to be. Essentially you are powerless, the book reminds you, just a rat in a maze, so you should accept the fact that your betters are experimenting on you rather than seek an end to the cruelty. If you accept the inevitability of the situation, you might just be happy within it.

It's no accident that Martin Seligman, the guru of the positive psychology movement, is also credited with formulating the theory of learned helplessness, wherein subjects internalize conditions in which they are deprived of agency and come to feel they are incapable of doing anything meaningful. They blame themselves for things out of their control and think any action they take will compound failure. This is basically the flip side of positive psychology, which also encourages you to see personal agency where you have none; but rather than developing negative momentum by assuming false responsibility for bad things, you try to develop positive momentum by assuming unwarranted responsibility for good things. Some of the same misattributions that cause depression can also cause baseless happiness (i.e., optimism); basically, emotional cause and effect are presumed to be reversible -- we feel depressed or happy, and derive rationalizations for this afterward.

Of course, that is not how positive psychology is sold to its practitioners. Telling someone to simply pretend to be happy no matter what the circumstances is unlikely to be convincing. Instead happiness gurus emphasize doing good deeds (sending letters of gratitude, aggressively smiling at people) as these promote a feeling of positive agency -- they give the fundamental attribution error something to work with. And you should discover what you are good at and shape your personality around that, to enhance the likelihood of flow experiences, of being "in the zone" and experiencing "mindfulness."

Ehrenreich, a cancer survivor who was infuriated at the constant injunction that she needed to have a positive attitude about her situation to get better, is having none of this. Pretending that positive thinking can magically make miracles happen and remove all obstacles from life seems to her a dangerous illusion, not merely because it detaches a person from reality ("should I assume, positively, that no one is going to cut in front of me or, more negatively, be prepared to brake?") but because "it seems to reduce our tolerance of other people's suffering.... If no one will listen to my problems, I won't listen to theirs: 'no whining' as the popular bumper stickers and wall plaques warn." In other words, positive psychology undermines the effects of sympathy that Adam Smith, et al., found so fundamental to the healthy functioning of a society otherwise fixated on self-interest. If Ehrenreich is right, positive psychology instructs people to ignore the impulse to understand another's feelings and instead impose on them your positive mood by force -- as Rousseau suggested, you will force them to be free. As a more contemporary philosopher bitterly noted, "Mellow out or you will pay."

Buying autonomy at the check-out counter (23 January 2007)

I'm glad to see someone else take an interest in the subject of self-checkout lines. The Economist blogger comes to some of the same conclusions I did when I posted about this a few months ago after an ill-fated trip to Home Depot, only she is far more sanguine about their usefulness:
Much more important, however, ringing up my own purchases obviates what is, for me, the worst feature of buying groceries: waiting for the checkout girl. (It is almost always a girl.) I can be driven to near-insane heights of irritation by someone slowly counting out the money in her drawer while I wait to pay, or chatting merrily to the customers or other employees. On the other hand, I am perfectly comfortable waiting patiently for my own stupidity to subside. Self-checkout lines may seem to be an imposition by profit-hungry companies, but in reality, they provide an extremely valuable service to those of us who are terminally impatient: they give us the illusion of control.
As my frequent complaints about Duane Reade attest, I completely understand this kind of impatience and can see how doing the checkout labor myself -- even though the cost of that labor is already priced into what I'm buying -- could seem an acceptable fee for mitigating it, particularly when there are no gossip magazines on hand to page through. I don't agree that self-checkout lines lower prices, though. Competition lowers prices; otherwise money saved on checkout-clerk labor likely goes straight to the store's bottom line. Plus, many customers see the opportunity to bag for themselves as a benefit -- an extension of autonomy, as the blogger herself does -- so there is no pressure for stores to drop prices as a consequence of such schemes. Better to see this experience of control as itself a product that stores will sell (for the price of the consumer's labor) for as long as they can get away with it. Eventually "autonomy" will feel like inconvenience and impertinence again, the product will become worthless, and people will seek to buy autonomy in some other arena where there's an institutionally created logjam to circumvent.

Revenge of hope (23 January 2007)

Via BoingBoing comes this fairly elaborate attempt by Keith Martin to make all six Star Wars films make sense as a complete, seamless epic. Martin's ingenious thesis is that R2-D2 and Chewbacca are the pivotal figures in the original three films; we just couldn't see it because they were hiding behind their respective frontmen, C-3PO and Han Solo. His explanation is far more interesting than the first three films deserve.

This effort demonstrates how an audience can go to great lengths to salvage the integrity of a product that its makers have compromised, whether out of complacency or laziness, or in Lucas's case, to milk more money out of it. Certain consumers will view inconsistency or inadequacy as an opportunity -- for filling in the blanks, for reimagining, or even for comprehensive criticism. This audience ends up rationalizing weak efforts, and thereby rewarding them, because the makers see an audience engaged with the product and contributing to its bottom line. Of course, audiences won't put this kind of effort into just any lousy cultural product; they won't give unknown quantities the benefit of the doubt. But it seems that once an audience does grant this leeway, once its hope is invested in certain artists, there is almost no level of failure that will cause it to be rescinded. After all, I went to see Return of the Clones or whatever even after I saw the utterly abysmal Phantom Menace. People kept buying Liz Phair albums even though it was clear her songwriting muse was spent after her first album. I kept listening to Neil Young records even after hearing Landing on Water and Old Ways. I've even argued that Landing on Water is not actually bad; I've rationalized its terrible sound and reactionary lyrics as a kind of sophisticated, conceptual statement.

How do we become invested in certain artists' failures? Force of habit? Perhaps the cult of personality is at work, or the fundamental attribution error (which gives credit to people for things beyond their control). Also, network effects kick in with popular artists that make attention to their crap worthwhile, because you can guarantee you will have fellow sufferers to share your feelings with -- this is how Dylanophiles made it through the 1980s, perhaps. We may enjoy these failures because they throw the successful works into relief while humanizing their creators, deepening what we understand of their character and making it easier for us to vicariously enter into their works.

I wonder if there is a certain level of exposure that can be counted on to kindle this kind of residual hope -- perhaps that ratio is part of the way small differences in talent are leveraged into huge increases in earning power for celebrities, along the lines Sherwin Rosen argued in "The Economics of Superstars." Our hope and faith are transformed into their outsize incomes.

Variety shows (22 January 2007)

It's become typical to see media critics discuss how we've "disintegrated" into a niche society, fallen from some peak cultural moment of homogeneity twenty or thirty years in the past. Since it's become so cheap to publish and disseminate your own work, since independent and DIY product now competes with industry-built entertainment product, since the sector is now an on-demand rather than an appointment-driven affair, since virtually every entertainment product ever made (along with an ever-widening pool of new material being made by the largest cohort of cultural producers in history) is always available for consumption at any given instant, and so on, culture has lost the overarching unity once provided by shared reference points -- say, Johnny Carson. What will we say to each other at the water cooler if we are all lost in our own private maze of references? (Chuck Klosterman had a column in Esquire a few years back making this case.) We'll be trapped in our niches, or we'll have to embrace a new medium for friendship online, where we can filter out people until we hit upon those who share our particular taste peculiarities. Even worse, such a plurality might lead to a widespread relativism that says anything or everything should be tolerated or can be enjoyed or can warrant attention, and this attitude threatens to leak from culture into ethics and "family values."

It was strange to see critics lament this loss, when so many before them railed against the soul-stifling horror of conformity and the tyranny of mass culture. Early in the 20th century, doomsaying cultural theorists -- people like Ortega y Gasset, the Leavises, Dwight Macdonald, Adorno, etc. -- were generally dismayed at the rise of mass culture, which seemed inexorable, and each innovation in media technology seemed only to consolidate a centralized grip on society's imagination. The studios, the networks, the big imprints, national magazines, the consortiums of radio stations and newspapers and so on, all of these seemed to have a greater share of the public's attention, and the public itself for the most part was seen as unified in its passivity and willing to be molded by whatever entertainment the industry found it convenient to provide. It seems obvious now that this analysis was backward. Were it to exist, the mass audience would provide an irresistibly desirable target for big business and political propagandists alike, and perhaps, seduced by the implications of what a mass audience would mean for them, they argued it into existence. Moral dogmatists in America have always tended to lament the impossibility of imposing a unified national culture from above through a media they control, which would solidify their grip on the "mass" imagination -- on the "hearts and minds" -- and thus on power. (The conservative mind especially warms to themes that limit choice in the name of social rectitude. Limiting choice is a sure way of maintaining control; only recently have the advantages of a surfeit of choice as a means for social control -- the overconsumption/addiction model -- begun to be explored.) And economies of scale once may have implied a mass audience would be easiest to extract profit from, but the graveyard of failed shows and songs and novels demonstrates how fickle and recalcitrant that mass can be.
In its beginnings, mass media hadn't the means to diversify in order to reach all the disparate audience blocs that have always naturally existed and to fully exploit their commercial potential; instead it longed for an easily manipulable mass audience that could be uniformly gratified by its limited offerings and trained to expect more of the same, once marginal costs were reduced and the same crap with the same talent could be turned out. So a tension between the centralization of the means of entertainment and people's inherent tendency to generate local scenes, if not wholly personal, individual frames of cultural reference, became more and more pronounced as media's reach developed and extended, both undermining and enhancing the power of industry and consumer alike. One result of this dialectic? The Captain and Tennille show.

I'm not sure I can explain why, but I spent the better part of Saturday night watching highlights from the first season of Captain and Tennille, which aired on ABC in 1976-1977. In case you don't know, the Captain and Tennille were Toni Tennille, a singer from Alabama who looked a little like Karen Carpenter crossed with a chipmunk, and Daryl Dragon, an arranger and synthesizer specialist who, prior to becoming famous with Toni, worked with Dennis Wilson of the Beach Boys on his spooky ballads for the Carl and the Passions album. As the Captain and Tennille, they scored a string of hits by taking established songs -- Smokey Robinson's "Shop Around," Neil Sedaka's "Love Will Keep Us Together" -- and giving them a sheen of gizmo electronica: barf-bag effects and stray oscillator noises mixed in with some light funk arrangements. (They also did a version of America's "Muskrat Love," the tenderest ballad about rodent copulation that I know of.) Perhaps envisioning another Sonny and Cher, ABC gave them their own variety show and stocked it with guests from its own sitcoms and musicians from the L.A. soft-rock scene: I saw "performances" by the likes of Engelbert Humperdinck, Leo Sayer ("You Make Me Feel Like Dancing"), Bread ("Lost Without Your Love"), and England Dan and John Ford Coley ("Nights Are Forever"). Watching the show now, I found it nearly impossible to imagine how anyone could have watched it then (even though I have vague memories of having been subjected to it as a small child), but as an exercise in historical imagination it was pretty compelling.

It seems important to remember that the show would have been one of essentially three choices viewers had for TV watching -- there were no VCRs, no cable channels; you watched what was on the networks or you didn't watch at all. Networks couldn't cater to niches -- they wanted the largest audience possible to choose TV (people may have been less inclined then to watch it by default, to have it on as a constant companion, as happens now). Hence the variety show: some singing and dancing, some comedy, and some showcases for performers trying to break out to a larger audience. It's tempting to see the lack of any alternative as leading to TV-production decadence; writers were sloppy, settling for half-baked ideas; costumers seemed to evaluate their work by the budget it required; guest stars acted as though unpreparedness would always come off as breezy charm; musicians faked their way through lip-synched performances; everyone was presumably high out of their minds on coke. But there was a kind of desperation to it as well, a striving for an impossible synthesis that no amount of money or writing or casting could ever supply. The Captain and Tennille seemed especially ill-suited to the format -- they were no Donny and Marie, that's for sure. Tennille tried hard to manufacture cheerfulness and commit herself unflinchingly to the unbelievably hackneyed material, but Dragon -- at times indifferent, half-hearted, and palpably uncomfortable with the contrived aspects of the show, barely bothering to mime his parts on the stack of keyboards he was typically parked behind -- looked like he was pioneering a kind of contemptuous irony about 15 years too early for popular audiences.
The show's producers labored hard to build him a palatable personality, giving him an ersatz trademark (his variety of captain's hats, about which he was forced to make unfunny "hat jokes" in one running segment) and trying to pitch him as a lovable curmudgeon, but he seemed to delight too much in the stubbornness scripted for him. The show's formula tried to offer a little something for everyone in a family -- dopey comedy for kids (an awful, awful segment called "Masterjoke Theater" that reeked of lazy writing); a sultry number from Tennille, often accompanied by a troupe of ludicrously costumed dancers; a rendition of a song from the current pop charts; some celebrity appearances; something old-timey, like a song with tack piano or a big-band arrangement -- all stitched together with weak banter meant to convey the couple's chemistry and comfort with each other.

In short, it was a miscellaneous mess that has to feel pretty misguided to any modern audience accustomed to shows that target their viewers much more minutely. Slipping into moralist mode, I wondered if such shows taught those watching a useful kind of tolerance, a patience, a willingness to be exposed to material meant for others and take it in stride. I wondered if these shows could actually have brought families together (it was late, and I had been drinking). Perhaps it modeled respect for traditional culture, for practiced skills (like singing and dancing, like performing qua performing), whereas now all of that respect has been supplanted by a preoccupation with trend spotting and intentionally disposable youth culture. This line of thinking smacks a bit of the technophobia this Economist editorial identifies, whereby new forms of entertainment such as video games are demonized simply because they are novel. In their own way, variety shows struggled with the legitimacy of novelty for its own sake: not only did they attempt to balance mass and heterogeneous culture, they also tried to tame novelty with a rigorous format and familiar show-biz routines. But the most striking thing about the Captain and Tennille was how blithely ignorant the show seemed of youth culture, treating AOR as though it were hegemonic. It made me weirdly envious of those who were adults in the 1970s, the last time culture at large respected them. Now 30- and 40-year-olds are expected to try to hang on and keep up with stuff made for teenagers, to pretend or fantasize about being forever 21. Otherwise they must retreat entirely into whatever specific niche they've marked out for themselves. Is it better to be in a culturally irrelevant niche than to belong to a dominant but moribund mass culture? Which is more responsible for the extinguishing of local scenes: mass culture of the big-three-networks variety, or a hyperindividualistic culture that renders community activities irksome?

I don't know. I do know that I never again want to see a braless Toni Tennille wearing a satin pant suit and singing "Boogie Fever."

Economics of free (19 January 2007)

Julian Sanchez links to this series of posts about post-scarcity economics, the gist of which is this: ideas (and digital copies of intellectual property) do not become scarce once they are thought of, which means they are not subject to the law of diminishing returns. The marginal cost (what it costs to make one more unit of something) for duplicating an idea is nil, implying an infinite supply of the fruits of knowledge once it exists. (See David Warsh's Knowledge and the Wealth of Nations for a thorough explanation of this -- and the history of theorizing about increasing returns to scale -- and the paper by Paul Romer that brought it to contemporary economics.) The upshot of these posts is that an "infinite" supply of a good should cause its price to approach zero in the absence of state-granted monopolies and other "artificial barriers" (copyrights) that are becoming unenforceable (but don't tell these people). Whether there really is an infinite supply of anything is questionable (human attention, if nothing else, is not infinite; neither is server space or the energy to maintain it). According to the management consultant writing these posts, this can be a good thing for producers if they focus on selling the medium rather than the information: "You don't sell 'ideas' you sell books, or consulting services, or reports or conferences (or a bunch of other things). You don't sell music, you sell CDs or concerts or T-shirts or access (or a bunch of other things). Basically, you look at the content itself (which is infinite in supply) to sell something that isn't infinite in supply." This, as is pointed out in the comments, is a matter of using captivating content to distract the customer from the fact that he is paying not for that content but for an essentially empty package. (The content, infinite in supply, is zeroed out of the exchange.)

Several different commenters made the point that when post-scarcity economics kicks in, so does the attention economy:
What was missed however, is the premium on end users time - individuals have to deal with the scarcity of time, which forces them to make decisions on which content to spend their time with, or which freeware applications to invest one's time to learn and train on. What is interesting is as scarcity economics starts to fade, network economics starts to take hold. The very best free products will take the lion's share of users attention, which has tremendous value for different economic models. The irony of all of this is it isn't new. Traditional broadcast television lived off of a free to user model for decades, and end users were traditionally faced with the limits of their own time as to which show to watch.

When too much is available at no expense but your time and effort, you can make money by being the filter on the unlimited supply. If you have figured out how to monetize your filter, it's to your benefit to have the spigot of free content opened ever wider. (Which explains why Google wants to digitize everything possible.)

But filterers would still need something to filter. Assume that there's not already too much stuff out there and that we need new "innovative" stuff. (I'm thinking of the entertainment industry here, not an industry where "innovation" actually is innovation, like the pharmaceutical industry.) If the marginal cost of intellectual property is zero, the fixed costs (what it costs to make the original version, the R&D to come up with the idea) remain, and someone has to pay them. (You don't get a new Metallica record unless someone pays Metallica.) One rather utopian argument is that in the future artists will pay themselves in the sheer joy of creation -- kind of like most bloggers do now. The underlying implication is that anything worth doing in the field of intellectual creation is its own reward.

Another way to recoup fixed costs is via subscription services -- after enough people pay in advance, the musician delivers the new album. (This presents an obvious free-rider problem: why pay if you can wait for others to pay and then copy the product once it's made?) Perhaps artists can go back to finding patrons, as they did in pre-capitalist times. The Medicis didn't seem to mind everyone reaping the aesthetic benefits of the artworks they sponsored.

A representative from Corwood Industries (18 January 2007)

It's strange to think of the intensely private musician Jandek as an icon of what the Internet has done to music, but I think he exemplifies the phenomenon of how the Web aggregates people around obscure interests and solidifies them, intensifies them, perfects them into a form fit for proselytizing. Thanks to the Internet, Jandek went from impossibly obscure, noted only in a few small impossible-to-find 'zines and in a very, very few passing mentions in the national press, to being intensely and minutely documented, accessible to anyone who somehow became curious about him. For a long time Jandek preserved an almost total anonymity, which amplified the significance of what little information he revealed (through enigmatic album covers, cryptic lyrics and messages scrawled on the catalog sent out by his record label, Corwood Industries). That would seem an awkward, almost contradictory juxtaposition with the way in which the Internet makes massive amounts of information available on just about any subject. But in fact, Jandek records mimic certain notable features of online life -- they often seem spontaneous and feature anonymous collaboration, and they hold out the promise that one can maintain an identity in art that is entirely separate from who you are in real life, that you can use technology to sustain a pseudonym that is nonetheless deeply, harrowingly personal and intimate.

Also, the self-published nature of Jandek's work was a kind of harbinger for what we all take for granted now, that you can pour your deepest inner secrets out into cyberspace and fantasize if you want about a potential audience of millions. Or you can just rest with the notion that you got it out there, whatever you needed to express, and someone might stumble onto it somehow. Almost all of Jandek's work resonates with that feeling of relief at having found an outlet, of having managed to externalize something fraught and nebulous. Just as now one can create avatars that only exist online, Jandek only existed as the sounds captured on tape (until recently, when he began performing live); like Warhol with film, Jandek seemed to be recording the process of his own discovery of what his medium could be made to conjure, what kind of identity it could mediate and what emotions it could express when you begin with the absolute minimum of skill or polish, when you have no shortcuts, no traditional methods, and no professional expertise to fall back on. (A good example of this is a 15-minute track called "The Beginning," his first using a piano, on which he tries out many different ways to conjure moods and feelings with the instrument without appearing to have any particular melodies in mind. In fact, most of Jandek's work rejects melody completely, looking for other ways to summon feeling.)

My history with Jandek's music is probably typical: I've been interested in Jandek since I first read about him in Spin magazine in the 1980s while I was still in high school. It's hard to remember how scarce information on music was then and scarcer still were weird records like Jandek's, so I had nothing to go on but a few evocative paragraphs from the "Underground" column describing each of his seven or eight albums at the time, which was enough to implant the name Jandek in my memory permanently. At first it was enough to just know the name. The very idea of a desperate-sounding and reclusive loner self-distributing purportedly unlistenable albums was entertainment enough when I was a teenager, when the despair of others still seemed like a joke to me. It wasn't until I was in college that I first heard Jandek. Most of the record-store aesthetes I began to associate with tended to dismiss Jandek with an attitude reflected in Kurt Cobain's remark about him in 1993: "Jandek's not pretentious, but only pretentious people like his music." You would have to pretend to like his music to get other people to think you were extreme or eccentric yourself. It was considered party-clearing music, again something you would play only for laughs, not something you would actually put on to listen to seriously.

I got a copy of Jandek's 1987 album Blue Corpse either from a thrift store or a cut-out bin, and I probably listened to it a few times, but I didn't feel authorized to actually like it. I found it hard to listen to, almost embarrassing, like watching someone cry in a hospital. And I had no context for what I was hearing either; I hadn't heard any folk-blues then, or any avant-garde noise music, or even the Shaggs -- all essential reference points. Plus it was impossible to tap into any opinions about it from anywhere or even access basic information about its place in the Jandek canon. There was no accessible community of fans or critics to make listening to that difficult music seem to pay off. So the main use I made of the record was to fill out mix tapes with its short songs and try to impress people with my extensive breadth of musical knowledge -- that was a lot easier to do back then too. Now it almost wouldn't even make sense to attempt that ploy; all the obscurities in the world are at the fingertips of anyone with an Internet connection.

I sort of forgot about Jandek then through the 1990s; I never would have thought he had kept making albums. But after going to see a few outsider art exhibits, I thought of him again, how perhaps his project could be likened to Henry Darger's epic painting cycles or James Hampton's Throne of the Third Heaven of the Nations Millennium General Assembly. And it occurred to me to try something I had just begun to get in the habit of doing: I looked Jandek up on the Internet and found this, Seth Tisue's guide to Jandek, which remains the most comprehensive Jandek resource around. Well-designed and organized, the site made Jandek into a coherent field of study, a discipline, something clearly legitimate. The vast repository of lyrics and album covers not only made it clear that a singular artistic vision was at work but offered a challenge, an invitation to attempt to develop a scholarly mastery of it all. He now seems less an anomalous curiosity, clearly no joke. And his recent string of public performances has freed him from his own Salingeresque myth, allowing his work to stand a bit more on its own. It no longer needs to be understood in terms of that specific mystery, which does nothing to dispel the mysteriousness he never fails to evoke. If you respond to album titles like Staring at the Cellophane or Living in a Moon So Blue, or to covers like these:





... you should probably be listening. They are pretty evocative of the general aesthetic at work, with the possible exception of the mid-1980s noise-rock albums (Modern Dances especially). Whereas before the information on his catalog was scattered, piecemeal, susceptible to being overwhelmed by isolated moments of uncomfortable strangeness that would prompt me to want to dismiss it, now it's collected together, making the catalog approachable, legible. We take the context of "normal" music so much for granted -- it ties into familiar pop traditions and the musicians promote themselves in customary ways that have become second nature to us. But Jandek + the Internet = a new way to build essential context for our listening, one that is free from mainstream distribution channels and dependent instead on the network of individual listeners sharing their enthusiasm and collective passions.

The moral sense and the evolution of advertising, self-love (16 January 2007)

More of what precisely no one has been clamoring for: amateur analysis of 18th century moral philosophy. Prompted by the sudden popularity of Adam Smith's brand of morals (and the argument that it provides an ethical foundation for self-interested capitalism), I read An Enquiry Concerning the Principles of Morals, by Smith's friend and fellow Scot, David Hume. This volume, a reworking of a portion of Hume's Treatise of Human Nature, appeared in 1751, a dozen years before Smith's own work of moral philosophy, and Smith was almost certainly influenced by it. In the Enquiry, Hume is anxious to do two things: (1) refute the Hobbesian argument that men are motivated entirely by self-love and all behavior can be reduced to selfishness of some sort, and (2) rescue moral behavior from religiosity (which he regarded as a source of human strife and division) and metaphysical notions of the soul (i.e., we are good so we can avoid punishment in the afterlife), and found it in the inherent sociability of humankind. Though he is skeptical of the soul, Hume is no strict empiricist, as he is willing to posit an a priori moral sense akin to the one Hutcheson and Shaftesbury argued for. Hume was adamant that emotions and not reason guided our behavior, and that our untutored emotions were basically benevolent.
Though reason, when fully assisted and improved, be sufficient to instruct us in the pernicious or useful tendency of qualities and actions; it is not alone sufficient to produce any moral blame or approbation. Utility is only a tendency to a certain end; and were the end totally indifferent to us, we should feel the same indifference towards the means. It is requisite a sentiment should here display itself, in order to give a preference to the useful above the pernicious tendencies. This sentiment can be no other than a feeling for the happiness of mankind, and a resentment of their misery; since these are the different ends which virtue and vice have a tendency to promote. Here, therefore, reason instructs us in the several tendencies of actions, and humanity makes a distinction in favour of those which are useful and beneficial.
He then rejects separating reason from emotion, and proclaims emotion to be the source of all morality: "The hypothesis which we embrace is plain. It maintains, that morality is determined by sentiment. It defines virtue to be whatever mental action or quality gives to a spectator the pleasing sentiment of approbation; and vice the contrary. We then proceed to examine a plain matter of fact, to wit, what actions have this influence: We consider all the circumstances, in which these actions agree: And thence endeavour to extract some general observations with regard to these sentiments." Reason enters this picture later, to rationalize the judgments emotion has already delivered. This sentiment has the bonus of being unmotivated by self-interest and is thus free of the taint of calculation: "Now as virtue is an end, and is desirable on its own account, without fee or reward, merely, for the immediate satisfaction which it conveys; it is requisite that there should be some sentiment, which it touches; some internal taste or feeling, or whatever you please to call it, which distinguishes moral good and evil, and which embraces the one and rejects the other." He presupposes that virtue must be its own reward, and reasons backward from that to come up with a moral sense that transmits its findings to our consciousness without being distorted by our immediate interests; this allows our instinctual concern for a happy society to override our immediate selfish interests in personal pleasure.

For Hume, "sentiment" and "humanity" are basically synonyms, and both refer to that moral sense that makes us feel something about observed behavior, and our own behavior, as though we were observing it from without (which becomes the basis of Smith's program). Because sentiment is so central to our humanity, those who can sharpen our sentiments -- poets and the like -- become crucial to social life. Here's how Hume puts it:
Virtue, placed at such a distance, is like a fixed star, which, though to the eye of reason, it may appear as luminous as the sun in his meridian, is so infinitely removed, as to affect the senses, neither with light nor heat. Bring this virtue nearer, by our acquaintance or connexion with the persons, or even by an eloquent recital of the case; our hearts are immediately caught, our sympathy enlivened, and our cool approbation converted into the warmest sentiments of friendship and regard. These seem necessary and infallible consequences of the general principles of human nature, as discovered in common life and practice.
This is all well and good -- a classic restatement of the humanist position that art is good for us and can illustrate morality. But I wonder whether this notion, were it widely distributed, would make us collectively susceptible to stimulation and excitation for its own sake, i.e., the blandishments that the contemporary entertainment industry provides (this was one pillar of reasoning behind my aborted dissertation). If anything that exercises our emotions and provokes a sentimental reaction makes our moral sense stronger, then the most sensationalistic materials are justified and should be preferred. Anti-intellectual culture is okay then as long as it moves us. Also, advertisements that manipulate our emotions are performing a good service for us as they persuade. Thus, we might accept the retail-friendly notion that it's pleasant to be persuaded (the way P.T. Barnum assumed his customers enjoyed being tricked and duped) and antisocial to resist the emotional manipulation, vicarious indulgence, and fantasy mongering advertisements inspire. (Notably, Adam Smith seems to correct for this, conjuring the notion of an impartial observer whose judgments we should imagine when evaluating our own moral behavior. The vicariousness remains but is subtly shifted to something more panoptic and less pleasurable.) Anything with strong emotional content is good, for its own sake, regardless of what other ends it may be trying to serve. Hume might have believed that nothing morally reprehensible could inspire positive feelings in us, but he wasn't subjected to modern media's power and reach, and the incentives that media provides for subverting our emotional reactions.

In his efforts to separate morals from self-interest, Hume rejects what he deems a false slander on the power of imagination, that it can mask from us the true selfishness that motivates our interests in others. (Hume would have no patience for Gary Becker-like arguments about human capital or the irreducible rational self-interest behind all behavior -- for Hume, behavior is emotional and reactive first, then explained later.)
An EPICUREAN or a HOBBIST readily allows, that there is such a thing as friendship in the world, without hypocrisy or disguise; though he may attempt, by a philosophical chymistry, to resolve the elements of this passion, if I may so speak, into those of another, and explain every affection to be self-love, twisted and moulded, by a particular turn of imagination, into a variety of appearances. But as the same turn of imagination prevails not in every man, nor gives the same direction to the original passion; this is sufficient, even according to the selfish system, to make the widest difference in human characters, and denominate one man virtuous and humane, another vicious and meanly interested. I esteem the man, whose self-love, by whatever means, is so directed as to give him a concern for others, and render him serviceable to society: As I hate or despise him, who has no regard to any thing beyond his own gratifications and enjoyments. In vain would you suggest, that these characters, though seemingly opposite, are, at bottom, the same, and that a very inconsiderable turn of thought forms the whole difference between them.
Here Hume seems to suggest that character ("a turn of the imagination") is inborn and immutable, which corresponds, ironically enough, to the quasi-Calvinist notion of some people simply being born with a maladjusted moral sense, of not being among the elect. Hume refers to common sense to dismiss the sophistic idea that virtue is really selfishness in deep disguise. But he relies on a fairly subtle argument of his own to ultimately reject self-interest as the source of it all (the Archimedean point, the transcendental signifier, etc.):
there are mental passions, by which we are impelled immediately to seek particular objects, such as fame, or power, or vengeance, without any regard to interest; and when these objects are attained, a pleasing enjoyment ensues, as the consequence of our indulged affections. Nature must, by the internal frame and constitution of the mind, give an original propensity to fame, ere we can reap any pleasure from that acquisition, or pursue it from motives of self-love, and a desire of happiness. If I have no vanity, I take no delight in praise: If I be void of ambition, power gives me no enjoyment: If I be not angry, the punishment of an adversary is totally indifferent to me. In all these cases, there is a passion, which points immediately to the object, and constitutes it our good or happiness; as there are other secondary passions, which afterwards arise, and pursue it as a part of our happiness, when once it is constituted such by our original affections. Were there no appetite of any kind antecedent to self-love, that propensity could scarcely ever exert itself; because we should, in that case, have felt few and slender pains or pleasures, and have little misery or happiness to avoid or to pursue.
This is like proto-deconstruction: any self-interested motive needs to refer back to some previous source of enjoyment to have any meaning; there needs to be a self first to motivate self-interest, so self-interest can't be at the root of things. There need to be inborn motives toward fame, reputation, pleasure, etc., that precede self-interest and motivate us to construct the rest of the preferences that build up our character and our motivations. Then Hume suggests benevolence is one of these human inclinations that precede the formation of a self. We have to be able to love before we can be consumed with self-love, and if Hume is right, then we love others first and thereby learn how to love ourselves.

UPDATE:
Brad DeLong links to this article by V.S. Ramachandran offering a neurological basis for our conceiving of others before developing a sense of self:
It is often tacitly assumed that the uniquely human ability to construct a "theory of other minds" or "TOM" (seeing the world from the others point of view; "mind reading", figuring out what someone is up to, etc.) must come after an already pre-existing sense of self. I am arguing that the exact opposite is true; the TOM evolved first in response to social needs and then later, as an unexpected bonus, came the ability to introspect on your own thoughts and intentions.
Ramachandran's explanation for this is, needless to say, complicated, involving analysis of different kinds of neurons and their functions, but it leads him to conclude "that two seemingly contradictory aspects of self — its the individuation and intense privacy vs. its social reciprocity — may complement each other and arise from the same neural mechanism, mirror neurons." Pushing this further, reciprocity enables self-interest, and then selfishness. We need others to teach us how to love ourselves too much.

The end of music collecting (15 January 2007)

Randall Stross's article in yesterday's NYT Business section complained about the new iPhone perpetuating Apple's DRM scheme, arguing that the system cripples customers' enjoyment while shackling them to Apple players in perpetuity. This seems self-evident to me, always making me wonder who these millions are who bother with the iTunes store -- impulse buyers who can't be troubled with ferreting out mp3s from other (pirate or otherwise) sources?

Stross is similarly confused, wondering not only why you would want to amass a collection of music that you can't play freely on whatever system you want, but why you'd bother collecting songs at all, when subscription services that offer you the entirety of recorded music are just around the corner.
In the long view, Mr. Goldberg said he believes that today’s copy-protection battles will prove short-lived. Eventually, perhaps in 5 or 10 years, he predicts, all portable players will have wireless broadband capability and will provide direct access, anytime, anywhere, to every song ever released for a low monthly subscription fee.
It’s a prediction that has a high probability of realization because such a system is already found in South Korea, where three million subscribers enjoy direct, wireless access to a virtually limitless music catalog for only $5 a month. He noted, however, that music companies in South Korea did not agree to such a radically different business model until sales of physical CDs had collapsed.
Is this really going to be the future? Shuffle-play the songs you get from subscription services like the ones foretold here and you get something that resembles a somewhat less futuristic invention: the radio. Still, I can understand a subscription model, which would change the mentality of subscribers from a collecting mentality to a playlist-making one -- you become the DJ of your own individualized radio station that broadcasts to you and you alone wherever you go. If you are too lazy, then some sort of Pandora-like software will pick songs you'll like based on the taste profile you help it build. And how easy will it be to let people judge you by your musical taste? Rather than display your collection to them or laboriously type in your favorite bands on a MySpace profile, you can just export the playlists you construct for computer analysis and decoding. This is already going on -- iLike, for example, is a social-networking tool that allows you to spy on other people's iTunes history, what they've recently played, what they've played a lot.

The pleasures of ownership are sometimes hard to separate from the pleasures of experiencing the things we own, since ownership seems to hold the promise of that experience in abeyance. But in the case of music, the subscription model should blow away the clouds of confusion. If this model takes hold and dominates, obviously it would put an end to casual music collecting, and it will make clear once and for all the difference between enjoying music and enjoying collecting. People who collect music may like music, but that's not really what it's about. (The most extreme example of this that I can think of is this guy who collected records and scheduled a methodical routine not for playing them but for cleaning them. The idea that he would play them was as absurd to him as actually reading a bagged and rated comic would be to a hard-core comic-book collector.) If all the music in the world is available for $5 a month, it will make no sense at all to collect music for the sake of the music.

But it will make plenty of sense to collect if you like collecting; that is, if you like organizing fascinating objects, grooming them, and completing series of things for the sake of it, for the satisfaction that comes with a fleeting sense of finality. You won't have the alibi of really being into music to excuse your obsession, but maybe an alibi won't seem necessary -- it hardly seems required as it is. But the coming separation of enjoying music and collecting it hits people like me hard, those who thrive on the alibi, who let the dialectic between ownership and experience drive them to keep hearing more, acquiring more. Take away the need to archive, and I may just lapse into listening to what I already know and am familiar with. If I don't have to justify keeping something by making myself listen to it first and make an aesthetic decision, I'll go with what I know -- play that John Phillips record again. (Sometimes I romanticize that feeling of being stuck on a record; sometimes it makes me feel stuck in a rut, depressed.) Being a collector elevates the pleasures of evaluation over and above those of sensual experience; without the collecting excuse to prefer evaluation over sensual pleasure, it becomes a bit harder to make the effort. I need to collect to keep my taste from atrophying. If subscription services give me everything, I probably would end up wanting nothing at all.

Freecycle and utility traps (12 January 2007)

In Rob Walker's article in last Sunday's NYT Magazine about Freecycle, a Craigslist- or Meetup-style Internet matching service designed to help you give away stuff you want to get rid of, founder Deron Beal suggests that his organization has been successful because the sacrifice-free generosity it enables makes people feel good.
Whatever attracts people to join, part of what keeps them involved, Beal says, is something they probably didn’t expect: the moment when someone thanks you backward and forward for giving him something you planned to throw away. “There’s a sort of paradigm shift in your brain: ‘Wow, that feels really good,’ ” Beal says. “That’s what I think is fueling this absurd amount of growth we’ve had.”
It seems to me the analysis of this could be pushed a little further. Sure, it's great to earn approbation for being charitable simply by redirecting your trash pickup. And it's pleasing to simplify and streamline your belongings, something that can be surprisingly hard to do because of the responsibility we may feel toward what we buy. Generally we think we are buying stuff with some use in mind, even though the actual motives have more to do with indulging some fantasy about who we want to become or who we might have been. That fantasy is enacted upon purchase, but the purchased object lingers on even after the fantasy fades, cluttering up our apartments -- and our consciousness, when that fantasy curdles into disillusionment and we are embarrassed or taunted by what we wished we were or what the object reminds us we are not. It sits waiting to be used in the way we promised ourselves it would be when we bought it. My Russian-English dictionary, for instance, always annoys me with how quickly I gave up on learning Russian. But I don't want to throw it away -- not only would that seem a waste in the abstract, but it would be an admission of surrender. So I hang on to it, refusing to give up on the utility still entombed within it. That utility, which was the alibi for indulging the fantasy, becomes a trap until we can find some way of dissipating it.

One way is to find some other person to assume that burden, who by taking the object from you is making their own promise to use it in the way you couldn't. This salvages the whole project, allowing you to part with the good without admitting to yourself complete failure. You've just exchanged one fantasy about yourself (whatever dream was evoked by the object) for another (the pleasure of being a benevolent gift-giver, the pleasure of feeling useful), and it's no longer your fault if the thing is never really used for its ostensible purpose. Giving things away allows us to use them up without having to consume them in the way they are intended to be used. I use up the squash racket not by playing squash and wearing it out but by giving it to a stranger who now is making an implicit promise to play squash, taking me off the hook. So the gift is not as free as it may seem; it comes with the hidden burden of abandoned dreams and the duty to make good on those dreams for someone else as well as yourself. So the goods exchanged through Freecycle threaten to become freighted with serial failures, all the previous owners' as well as your own incipient one. But this probably doesn't happen -- giving something away for free seems to remove the thing from the money-for-pleasure cycle that makes the abstract notion of utility seem so important. The gesture, the effort to find a home for something rather than merely trash it, springs you from the trap, and that, I think, explains the unexpected elation Freecyclers feel.

Against happiness (11 January 2007)

In debates about income inequality, this dilemma often arises: Should we divide the GDP pie more equally among society even if that means making the pie smaller? This question is complicated by another observation, that increased income doesn't seem to make individuals any happier in the long run (when, as Keynes noted, we're all dead). If more won't make you happier, then what difference does it make how national wealth is divided? And if growth is all-important, how do we square increasing returns to scale with a meritocratic ideal, which holds that earnings are actually earned? Those who question growth's correlation to happiness often seem to demand more equitable distribution of income, and those who say unfair distributions of income don't matter often seem to argue that personal prosperity is significant and shouldn't be neglected.

For example, at Spiked online, Daniel Ben-Ami has an essay arguing that we shouldn't worry about whether prosperity is correlated to happiness, basically because happiness, in his opinion, isn't particularly important. It's a classic piece of teleological triumphalism (life is always improving) that fetishizes technology (it will magically remedy everything), but I admit, the curmudgeon in me appreciates his position -- yeah, screw happiness. Who needs it? As a wise man once said, "Look at me, I'm making people happy! I'm the magical man from Happyland in a gumdrop house on Lollipop Lane. Oh, by the way, I was being sarcastic." Trying to make people happy is fruitless, since what they think they want is always shifting. Better to make them live longer and give them more technological progress (which according to Ben-Ami will also solve global warming, so don't stop wasting energy, no matter what those whiners say). And if the poor are striving to emulate the rich, so much the better. A little envy is good for them; it keeps them in line and keeps them striving, which helps propel social progress. Writes Ben-Ami:
Coveting what the rich have should not be dismissed as unhealthy envy. On the contrary, the fact people are dissatisfied with their lot can be seen as a healthy motive for change. Humanity has historically progressed by constantly trying to improve its position. As a result people are better off than ever before. In this sense unhappiness should be welcomed. It is a sign of ambition and a drive to progress rather than one of inherent misery. In contrast, the essentially conservative message of the happiness gurus is that people should be happy with their lot.
Progress? It's always positive. Ambition? It's the fire that tempers the steel in your soul, never let it die. Disappointment? It's really a reward since it should encourage you to try harder. If you are discouraged by failure or depressed by relative stasis after all your struggle, then you are obviously a weak person who is opposed to human progress and perhaps a traitor to your species. People who tell you that you can be "happy" are also secret enemies who think there is something valuable in nature as it is and in being present in the moment. Don't be seduced! If you forget that you always need more than you have, you might stop paying attention to what society expects from you: more hard work, more desultory consumption. You are not here to "feel" "good" -- you are here to struggle and suffer for the heroes of posterity (just like Soviet citizens in the 1930s). Without these things, living standards -- measured in income and technological dominance over our environment, not in foolish trifles like your insignificant "feelings" -- will slip and your children will hate you.
The rise of mass affluence is an incredibly positive development. It has bolstered the quality of people’s lives enormously. But there never was any guarantee that such progress would bring happiness. One of the most positive qualities of human beings is that they often want more than they have got. They typically want the lives of their children and grandchildren to be better than their own. The growth sceptics would have us stay where we are or even retreat to living a life of lower living standards.
You must be dissatisfied so that your children can be too. Happiness is obviously just another word for surrender.

Anyway, I agree with Ben-Ami that the government has no business trying to make people happy, not if we want to respect individuals' right to determine what happiness is for themselves. But the income inequality problem and controversies about growth aren't about happiness; they are about fairness, justice. That's perhaps no easier to define than happiness (is it equal opportunity or equal outcome?), but we shouldn't let that nebulousness distract us from its importance. It's not like we'd ever declare prosperity is more important than justice, would we?

iPhone? iDontCare (10 January 2007)

I'll admit I don't understand all the fuss about Apple's forthcoming iPhone, which just seems like an overpriced BlackBerry to me. (Like I know -- I've never had a cell phone or any gadget that gets emails or text messages. I received a Palm Pilot for Christmas one year in the 1990s and never took it out of the box -- it seemed inherently inferior to my preferred PDA, a spiral notebook. So you can discount my opinion accordingly.) The iPod was successful mainly because it created its own market for mp3 players. But people already have phones, and they probably aren't going to switch to Cingular and abandon their current service (usually contracted for years, with penalties for breaking it off) just for whatever minimal cachet comes from being an iPerson. (And I know the more I see that douche-bag hipster in Apple's commercials, the less I personally want to be one -- an iPerson, that is; I'm probably a douche bag already.) And those who for whatever reason need to be reading email while walking down the street, driving their car or sitting in a restaurant already can, and they are already accustomed to twiddling their thumbs on the little keypads -- they probably won't want to switch to the touch-screen interface Apple's peddling here. Apparently the hope is that some suckers will want the iPhone because the DRM-crippled iTunes collection they've already amassed will play on it. And then perhaps some of these people will become so enamored of the OS X-style interface, they'll start buying Mac PCs. However, my guess is only Apple cultists will buy this particular gadget in the first place.

All that being said, I don't know why I feel invested in this gadget's success at all. I have the feeling that the iPod, like other cultural phenomena such as Nirvana and Pulp Fiction, served to mainstream a certain kind of hipsterdom that seems like a parody of ideals I once held, and I guess I'm still bitter about that. Apple's whole business model seems predicated on coolness, the same way Tiffany's is based on exclusivity and snobbery. I'm in favor of less snobbery; I hope the next revolutionary gadget will expose all gadgets to be interchangeable commodities, with nothing going for them but their functionality. I can dream, can't I?

Brand equity and class warfare (10 January 2007)

Sometimes your money really isn't good enough. Here's something to remember the next time you hear an argument extolling purchasing power as a force for democracy: Today's WSJ has an article about Tiffany's strategy to alienate existing customers -- trend jumpers, teenagers, aspirational lower-class folk; the wrong sort of customers, apparently -- and forgo the huge profits they brought the company and shareholders, in order to make the brand as a whole seem more exclusive. The company believes its image is ultimately more valuable in the long run than whatever profits it surrenders by pricing its once popular line of silver jewelry out of the reach of the little people: " 'The large number of silver customers did represent a fundamental threat -- not just to the business but to the core franchise of our brand,' says Tiffany CEO Michael Kowalski." The company was frustrated that its initial price increases couldn't scare away enough consumers, so it boosted prices again and again until demand was finally quelled.

Tiffany is declaring essentially that it's more important to make slim profits by selling to rich people than to make big profits selling to everyone. "Like a growing number of publicly traded luxury-goods makers, Tiffany is attempting to walk a razor-thin line: broadening offerings to the upper-middle-classes while pitching privilege to the truly rich. The dilemma is particularly common these days, as investors clamor for sales growth on one side and fickle luxury buyers demand exclusivity on the other." This is pitched as a reasonable long-term strategy, but what Tiffany is trying to preserve is not profitability but a class structure that it has positioned itself to police. In our democracy, the state has forsworn much of its traditional role of conserving privilege for the people who already have it. This opens up the field to capitalists. In the absence of a repressive state enforcing castes, companies like Tiffany spring up like rent-a-cops to do the necessary policing of class boundaries, controlling supplies of positional goods and keeping the lid on aristocratic social capital. Jealously guarding its supply of exclusivity -- which is valuable less in monetary terms than in its priceless, near-timeless, significance to class antagonisms that predate the cash economy -- Tiffany will do what it can to make sure that its fine, upstanding name is not used to give the hoi polloi any dignity. Apparently, preserving that class gulf is more valuable than any cash profits could ever be.

Adam Smith's moral philosophy (9 January 2007)

There seems to be a movement afoot among conservative thinkers (okay, maybe it's just P.J. O'Rourke and economic journalist David Warsh) to rehabilitate Adam Smith (as if this were necessary), protect him from various accusations of shallow and simplistic thinking about human behavior and present him as more than a mere tool for capitalist ideologues by directing attention to his earlier work, The Theory of Moral Sentiments, an analysis of what motivates people and organizes social formations. This attempt may be part of a broader initiative to create a body of conservative theory humanities students can draw on to counter the leftist thinkers typically enshrined in English departments, with whom one must be familiar to be taken seriously. Latching on to conservative thought (the most sophisticated and interesting of which is perhaps embedded in classical economics texts) may become a new way for post-docs to distinguish themselves and sell themselves on the ever-tightening job market; it makes one stand out in a way that would likely appeal to administrators if not old-guard professors.

I became interested in Moral Sentiments because I was researching sentimentality as spectacle/experiential consumer good in 18th-century literature -- Sterne's Sentimental Journey is the most obvious example of this, where the narrator travels around in search of pathetic spectacles that can move him to a pleasing feeling of pity and condescension. It seemed to me that a specialized jargon, learned from reading novels, was necessary for this transformation of human feeling into commodity to happen. People sought these pity parties, perhaps, because of the popularization of one strand of moral philosophy that postulated an inborn "moral sense" akin to touch or smell that responded solely to moral and ethical stimuli. Shaftesbury's Characteristics is perhaps where this notion gets its most definitive expression, though all the "man of feeling" books ultimately borrow from him or his follower, Scottish philosopher Francis Hutcheson. Since humans are endowed with innate moral sense, we can automatically detect right from wrong instinctually -- thus humankind is basically benevolent, not the malevolent brutes of the Hobbesian scheme. How do we know good? We feel it as a quasi-Platonic form of beauty: Beauty and virtue are inherently aligned, thus one can demonstrate one's virtuousness by becoming a connoisseur of beautiful things. The man of feeling is essentially a connoisseur of beautiful sentiments -- pity, sacrifice, charity, etc. -- instrumentalized to demonstrate an inner worth that justifies either his social position or his right to social mobility. The moral sense also manifests in polite behavior, which is the public, social expression of the moral sense in action -- this transforms the upper-class habitus into an elaborate demonstration of inner nobility and the inherent refinement of the aristocratic inner character. The commoner's lack of understanding of morals is not so much a lack of training as a lack of innate moral sense.

Since our instinctual emotions are benevolent, the stronger we feel them (and thus the more ostentatiously we display them) the more virtuous we are. From this point of view morality is a reaction rather than a process of judgment. There is no need for a method of moral reasoning; what's needed instead is practice in responding immediately with one's heart in the proper histrionic way to various highly emotional events -- death, the suffering of the poor, orphaned children, women in distress, abandoned women, etc. Society life thus became a systematic pursuit of occasions to exhibit one's moral "sensibility," as it soon became known. Sensibility was similar to the Renaissance notion of sprezzatura, an instinctual charisma and propriety, but was more passive, with more emphasis on finely wrought feelings in response to witnessed situations. Sensibility is a matter of spectatorship and reaction -- which is what relates it to the modern entertainment industry, which thrives on passive spectatorship and the delectation of contrived emotional experiences for their own sake.

Hence the development of the commercial novel: Novels capitalized on the fantasy of being able to find oneself in situations that called for strong and unrestrained expression of one's emotions and offered many opportunities to demonstrate a "feeling heart" by vicariously identifying with the fictional characters' sufferings. Weeping over a book, for a time, served as proof of one's goodness, was seen as a kind of emotional charitableness. Novels became testing devices -- if your heart didn't respond, your moral sense might just be weak and you might not be as moral as you hoped. Of course, the converse also happened -- it was all too easy (and satisfying) to let the novel provoke sympathetic tears and prove your inner worth (a process which in turn provoked much mockery from skeptics -- often dramatists, essayists and reviewers). Every aspiring novelist (and readers too) learned the grammar of emotional prose, the key words and scenarios which were to trigger feelings in the reader.

What permitted this whole system was the magical property of sympathy, by which we automatically relate to another person's feelings, feel them ourselves and act accordingly. Adam Smith's Moral Sentiments ties into the story here, because it adopts the property of sympathy (defined as an instinctive, vicarious appreciation for the observed feelings of others -- the natural and irresistible human ability to put oneself in another's shoes) as its fundamental principle. This differs from the innate moral sense in that Smith sees sympathy as a product of reasoning your way into the feelings of others through a comparison with what you yourself would feel in the same situation. And what is good is a matter of consensus -- intimations of what will become "spontaneous order." Smith's innovation, following Hutcheson, was to attempt to fuse the Hobbesian view of man's innate selfishness with Shaftesbury's view of human benevolence to yield enlightened self-interest. Because we can't help but feel what others feel, it becomes part of our selfish interest to make them feel good -- Smith even opens his work with the observation that despite what may be convenient to us, our own feelings are invariably mixed up with what we perceive of others' feelings, so essentially we have an inescapable interest in the emotional lives of our fellow men.
How selfish soever man may be supposed, there are evidently some principles in his nature, which interest him in the fortunes of others, and render their happiness necessary to him, though he derives nothing from it, except the pleasure of seeing it. Of this kind is pity or compassion, the emotion we feel for the misery of others, when we either see it, or are made to conceive it in a very lively manner. That we often derive sorrow from the sorrows of others, is a matter of fact too obvious to require any instances to prove it; for this sentiment, like all the other original passions of human nature, is by no means confined to the virtuous or the humane, though they perhaps may feel it with the most exquisite sensibility. The greatest ruffian, the most hardened violator of the laws of society, is not altogether without it.
This creates a kind of proto-panopticon scenario: Since we all affect each other by the kinds of feelings we display, it behooves us to imagine someone is always watching us, and to act with that cool impartial observer's likely reactions in mind.

The process of continual comparison with others, along with the assumption of a constant observer who embodies our ideal (likely modeled on the sort of person society most praises -- the privileged person with all the manners of polite society), provides a rationale for what will become, in Veblen's terms, invidious comparison, in which one measures one's own prosperity against that of one's peers, yielding perpetual dissatisfaction -- the hedonic treadmill. We compare ourselves to those above us (or now, the lifestyle celebrated in ads and entertainment) and ceaselessly strive to catch up to it, but it keeps moving beyond us. In general, Moral Sentiments supplies a guide to the psychological framework necessary to sustain not just a capitalist society but a consumer society; it advocates conformism and a habit of spectatorship as premises of its moral system, and tries to rationalize selfishness as reasonability. It also promotes endless striving for the approval of others on the grounds of material comparisons (once emotions are reified) as the meaning of life. It's not too hard to see the ways these ideas have mutated and survived in our own culture.

Taste arguments (5 January 2007)

Sometimes friends ask me what music I've been listening to, and I always feel lame when I tell them, "Justin Timberlake." I put a brave face on it and try to make no big deal out of it, change the subject. I certainly don't expect anyone to congratulate me for it, as pop-oriented music critics sometimes seem to expect in their columns when they present their embrace of top 40 as some kind of radical position, as if they'd just endorsed Lyndon LaRouche for president or something. My suspicion is the vast majority of pop-music consumers are not especially reflexive about their tastes and enjoy music that much more for it. They are operating from pure praxis; whatever rationale drives their taste has become completely automatic.

Of course, some assume that means there is no rationale, and they are mindless sheep consuming whatever they are told, responding mechanically to hype -- a very seductive position, because once you come to this conclusion, you have exempted yourself and transcended such sheep, making your tastes (even if -- perhaps especially if -- they are for misogynistic rap or Satanic metal) automatically a sign of your higher consciousness. This position makes solipsistic thinking about yourself, what you like and why you like it, a supposed signal of your analytic prowess and your nonconformity and superiority to the mass -- narcissism becomes a sign of genius. Arguments about musical taste are inevitably about the participants seeking recognition for their individuality; by persuading someone else to concede our taste, we vindicate our right to our own opinions. The tragedy is that we are ever convinced that we don't already have that right; the process by which we are inculcated with that notion sadly goes unexamined.

Pop music's popularity can generally seem as though it requires no explanation. Its "goodness" seems self-evident -- just look at the sales figures. Then many questions go unasked: Why this kind of music now? What enabled it to give pleasure to so many people? What does it deliver along with the pleasure? What systems secured the mass exposure the music required to become popular? etc. These questions depersonalize music because they reveal that what songs actually sound like is ultimately insignificant to the economics of the entertainment industry and expose "pop-music taste" as a red herring. But this taste is a crucial tool in self-definition within a mass culture -- it differentiates one while simultaneously making a case for one's belonging to a group; it lets you conform and be different at the same time, which resolves one of the fundamental contradictions we confront. So critics and listeners alike reject such questions and get defensive when they are asked. ("I don't have to defend my taste to anyone"; "I'm not a robot consuming automatically what music companies spit out; I identify what is truly great." "The best pop music is art, and here's why...") The questions are threats to our sense of individual autonomy; the aesthetic is our cultural system for protecting that sense, even if it is illusory. Shifting discussion to the inherent quality of purposely disposable music (i.e., arguing that something about the song itself has made it popular or great rather than the conditions in which it was produced and distributed) masks its disposability, and more important, ours as well. (Side note: consider how much pop music is about the singularity or indestructibility of some unique and timeless love.)

Defensiveness about materialist dismissals of the significance of taste protects us from contemplating what may be an irresolvable existential condition of participating in a society, how to partake of social benefits (e.g., everything that commercial culture produces without having us specifically in mind) without dissolving into a crowd or becoming a mere number to that society. As much as we talk about shopping to construct identity, it is also at the same time a self-annihilating process in which we admit at some deep level that we are willing to conform our desires to ones anticipated in us by manufacturers that know nothing about us, and to the desires of thousands or millions of others who are making the same admission by buying the same product. When we enter consumer society, we surrender or suspend much of the pretense of our uniqueness; then we struggle to get it back in the process of consumption. One way to do this is to build arguments for our tastes, to try to find a unique reason for being a Justin Timberlake fan. But really, there must be better ways for me to distinguish myself than that.

Addendum: This cartoon is a more concise exposition of my argument.

Computerized scheduling and on-demand consumerism (4 January 2007)

Wal-Mart dominates the retail sector because it perfected just-in-time logistics (only as much inventory as will be sold), minimizing overhead costs and allowing the company to charge lower prices. This in turn attracted more customers, which eventually gave Wal-Mart the enormous scale of operations that allows it to bully suppliers and dictate its own terms to them in order to hone its supply-chain logistics even further -- a tidy little feedback loop. Now Wal-Mart hopes to improve its bottom line by treating workers, whom it has vigorously prevented from unionizing, in the same way it treats inventory, employing them on a just-in-time, as-needed basis. This WSJ article by Kris Maher, which is surprisingly sympathetic to the workers' point of view, has the details:
Staffing is the latest arena in which companies are trying to wring costs and attain new efficiencies. The latest so-called scheduling-optimization systems can integrate data ranging from the number of in-store customers at certain hours to the average time it takes to sell a television or unload a truck, and help predict how many workers will be needed at any given hour....
But while the new systems are expected to benefit both retailers and customers, some experts say they can saddle workers with unpredictable schedules. In some cases, they may be asked to be "on call" to meet customer surges, or sent home because of a lull, resulting in less pay. The new systems also alert managers when a worker is approaching full-time status or overtime, which would require higher wages and benefits, so they can scale back that person's schedule. That means workers may not know when or if they will need a babysitter or whether they will work enough hours to pay that month's bills. Rather than work three eight-hour days, someone might now be plugged into six four-hour days, mornings one week and evenings the next.
In this post, Brad Plumer elaborates on the employee hardships Maher mentions: "Another problem, of course, is that 40 percent of Wal-Mart's employees will soon be part-time workers. Many of them—and many of the full-time workers, too—need to find second or even third jobs to make ends meet. Of course, it becomes near-impossible to find another job when you have to sit around 'on call' and can't predict your schedule from week to week. Ah, but at least the Bureau of Labor Statistics can record an uptick in 'productivity,' and economists can then sit around and wonder why median wages aren't going up too. So it's all good..."
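The scheduling-optimization systems described above are proprietary, but the basic move -- converting an hourly demand forecast into a headcount -- can be sketched in a few lines. This is purely a toy illustration; the function name, the utilization figure, and the sample numbers are my own assumptions, not anything from Wal-Mart's actual system:

```python
import math

def staff_needed(predicted_customers, avg_service_minutes, utilization=0.85):
    """Toy estimate of workers needed for one hour of forecast demand.

    predicted_customers: forecast customer arrivals in the hour
    avg_service_minutes: average minutes of labor each customer requires
    utilization: assumed productive fraction of each worker-hour
    """
    workload = predicted_customers * avg_service_minutes  # total minutes of work
    capacity = 60 * utilization                           # usable minutes per worker
    return math.ceil(workload / capacity)

print(staff_needed(40, 3))   # a slow mid-morning hour -> 3 workers
print(staff_needed(300, 3))  # a weekend rush -> 18 workers
```

Run hour by hour, a forecast like this produces exactly the choppy schedules the article describes: headcount tracks demand, so the workers, not the company, absorb all the variance.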

Wal-Mart defends this by reminding everyone how great this will be for customers.
Wal-Mart spokeswoman Sarah Clark says the system isn't intended to schedule fewer workers, and hasn't where it has been implemented so far. The company says that in one test last year in 39 stores, 70% of customers said the checkout experience had improved. "The advantages are simple: We will benefit by improving the shopping experience by having the right number of associates to meet our customers' needs when they shop our stores," Ms. Clark said.
But what about workers, whose lives will be made much more insecure? "Some analysts say the new systems will result in more irregular part-time work. 'The whole point is workers were a fixed cost, now they're a variable cost. Is it good for workers? Probably not,' says Kenneth Dalto, a management consultant in Farmington Hills, Mich."

Probably not? Of course it's not good for workers. It only amplifies the chaos in the already often chaotic lives of the poor. As Jonathan Cobb argues in his afterword to The Hidden Injuries of Class, class is a matter of being reminded that your time is not valuable, not nearly as significant as other people's time, other people who presumably do something much more useful with it. If you are poor, lower class, you can always be made to wait. If you are important, you can have things "on demand." Wal-Mart is telling its employees that the time of every single person who comes into a Wal-Mart store is more valuable than that of those it entrusts to serve those customers. Of course, any of us can become one of those customers and suddenly feel important, but that is the deeper charade at work -- that we will ever be able to buy dignity and self-respect by being a consumer rather than earn it by doing meaningful social work. This development makes it plain how improvements in serving the customer are typically translations of ways of screwing the worker (who is essentially the same person). On-demand consumerism, then, is compensation for how our time is routinely demanded of us; the more on-demand consumerism we expect, the more we accept unreasonable demands on our time from our employers.

Hayekian literary studies (3 January 2007)

I keep suspecting MLA types will eventually seize upon Hayek for fresh philosophical underpinnings sufficient to generate new readings of the lit classics. This would perhaps satisfy increasingly whiny right-wing critics of academia's liberal bias (Michael Berube's dismissal of that myth notwithstanding) and provide a new direction for theory to go now that the profession is "beyond" or "after" theory. Sure enough, Emory University English professor Mark Bauerlein recently published this article in the Chronicle of Higher Education about the alleged liberal conspiracy against Hayek in favor of Foucault (who, strangely enough, in his later days actually went around encouraging people to read Hayek, whose ideas about spontaneous order resemble Foucault's account of power dispersed in institutions.)
While Hayek's defense of free markets (for which he won the Nobel prize in economics in 1974) influenced global politics far more than Foucault's analyses of social institutions like psychiatry and prisons, the two thinkers enjoy contrary standing in the liberal-arts curriculum. Hayek's work in economics has a fair presence in that field, and his social writings reach libertarians in the business school, but in the humanities and most of the social sciences he doesn't even exist. When I was in graduate school in the 1980s, a week didn't pass without Foucault igniting discussion, but I can't remember hearing Hayek's name. In those heady days of politically framed cultural criticism, academic intellectuals formed a vanguard of cosmopolitan insight and ideological unmasking (so they said), but their range of reference fell short.
Bauerlein concludes that "it would be healthy for everyone if the academic curriculum broadened its scope, if the lineage of conservatism were consolidated into a respectable course of study — that is, if Hayek won one-tenth the attention that Foucault receives." He imagines such a course in conservative thought would build up from Burke and Tocqueville to such contemporary luminaries as Harvey Mansfield (author of a much-derided book about manliness), cultural-literacy dogmatist E.D. Hirsch, and Dinesh D'Souza, whose most recent book blames the "cultural left" for 9/11. Wow, the promise of such a course is almost enough to make me wish I were a graduate student again. (Not really.) When I was in school, my sense was that English department conservatives wanted to teach literary appreciation courses in the established classics and couldn't fathom why students wouldn't want a warm bath in the luxuriance of the great works. These people were against ideas generally (and wouldn't have ever even considered the possibility of praxis) and preferred subjective pronouncements about aesthetic quality backed up by tradition. The entire profession of literature studies for them seemed to be about deciding which works were "great." This led me to think aesthetics themselves were a conservative conspiracy (a view which Eagleton's Ideology of the Aesthetic did much to foster).

But a familiarity with philosophical underpinnings of modern capitalism -- via classical economists Adam Smith and Ricardo and more recent apologists like Milton Friedman -- to balance the Marxist critiques that often are introduced in literary theory and cultural studies classes would probably be a good thing. It's no good citing Marxist theory without understanding which parts of it are generally held by all credible economists to be bunk. And I think that commercialism and the logic of business has a lot more to do with literary developments than the various romantic mystifications of genius and aesthetic innovation.

Anyway, I had been wondering what the Trojan horse might be for smuggling Hayek into cultural studies programs. This essay is a start: Reason's blog points readers to this article by Paul Cantor, which applies Hayek's notion of spontaneous order to television-show development and exemplifies what Hayekian literary studies might look like. (Obviously I'll have my eye out for the forthcoming Literature and Economics: Studies in Spontaneous Order, which Cantor co-edited with Steven Cox as well.) Cantor asserts that falling back on spontaneous order is a good way of skirting the all-too-common (I do it all the time) logical inanity of attributing agency to art works that don't lend themselves to the kind of close reading that ascribes authorial intention to every minute choice -- things like TV shows and Shakespeare's plays, since these were likely shaped in performance and written down later. Spontaneous order can be seen as a variation on the Romantic (and New Critical) ideal of organic form, which evolves dialectically in regard to content so that they suit each other perfectly. And better yet, this view demotes the lone genius working in opposition to society and replaces him with a celebration of collaboration, of art-making as not a mystical process reserved for special people (rich, elite, overeducated) but as a quotidian process of ordinary people pooling and specializing their talents. "The idea of spontaneous order always seems counterintuitive to us; as human beings we evidently are conditioned to attribute order to an individual orderer. That is why the ideas of both Smith and Darwin (not to mention Hayek) encountered so much initial resistance and are rejected to this day by many people. But if one recognizes the various kinds of feedback mechanisms at work in popular culture, one begins to see that it is possible for it to lack a centrally ordering agent and yet be self-regulating and self-perfecting."
It's the last part that likely causes the most trouble -- not only are we reluctant to grant agency to the workings of an unmanaged system, but we are unwilling to accept the outcome as the best of all possible results on account of there being no self-interested director (or state apparatchik) orchestrating it all. And feedback loops and spontaneous orderings often yield nonoptimal results -- American Idol, for instance. Yet it may be nonoptimal only to my elitist aesthetic. Cantor cites this cautionary advice from literary critic Franco Moretti: "If it is perverse to believe that the market always rewards the better solution, it is just as perverse to believe that it always rewards the worse one!" Actually there is nothing perverse about such a belief: Disdaining what's popular (and what popular taste has shaped via the market) is a sure way of protecting the power that derives from your intellectual capital -- you believe that judging what is best requires that special training that you, fortunately enough, have managed to acquire. Aesthetics are a disguised way of exercising arbitrary power, and markets thus seem democratic because they democratize the aesthetic, or make it something collectively decided. But the market is no panacea; it's distorted by the different advantages (more money, political capital) participants bring to it. You must have the capital (the connections, the money, etc.) to get your TV show made before spontaneous order can begin to perfect it, and that capital already embeds decisions that have nothing to do with what might have been spontaneously demanded. In other words, we still fight over control of where to fix the starting points and parameters within which market processes, creative and liberating as they may be, will work.