
Monday, August 15, 2011

Pattern recognition in social media (21 Sept 2010)

Esther Dyson makes some scary points in this article touting a new service called Digital Mirror, which analyzes your online activity and provides reports about your patterns of interaction -- who you talk to, how you talk to them, who you avoid, who avoids you, etc. I'm sure all the data-driven, self-obsessed freaks out there are thrilled, but what's scary to me is the fact that this kind of analytical surveillance is already becoming customary in social networks. Dyson writes:
Facebook and other social tools operate under the covers: Facebook notices which friends you interact with and whose photos you comment on in order to select the items in your NewsFeed or the ads you see. But Facebook does not show that information to you. Digital Mirror does.
Within a few years, this kind of transparency will probably be commonplace, both from Facebook and from ad networks and behavioral targeters trying to derive information about your likely purchases. But right now, Digital Mirror is one of the few to give you the ability to do the same for yourself.
How perfectly awful is that. Rather than open our horizons, our own activity online serves to close them off, thanks to these "services" operating in the background. We think we are indulging our curiosity but we are merely blueprinting our own personal prison, and helping sketch the floor plan for the cells of our friends. At least all the ads we'll see while we are trapped there will be relevant.

Friday, August 12, 2011

Permission to Consume (23 July 2010)


In this interview with Andrew Potter (whose book, The Authenticity Hoax, I still really need to read), he makes an interesting point about what sort of rhetoric supplies us with the permission to consume. To put it in a more jargony way, what sort of ideological climate is necessary to naturalize a consumerist orientation -- to make us see shopping quests and the accumulation of stuff as normal, inevitable personal goals?

Behavioral economics and marketing research explore some practical aspects of generating permission to consume. First of all, they acknowledge that reluctance to consume exists, and that wants are not simply inherently infinite, as neoclassical economics assumes. That is, they accept that creating demand is an actual problem (sort of obvious, otherwise marketing as a discipline wouldn't exist, but don't tell that to a Say's Law zealot). And then there are nitty-gritty behavioral studies of the reluctance people have in pulling the trigger on purchases and how to overcome it, the sort of research Paco Underhill and other marketing gurus proselytize about -- how to create the appropriate buying environment with music and the positioning of goods and so on; how to counteract option paralysis; how to weaken our psychological defenses to persuasion.

But there is a larger sense in which we are reluctant to consume, one that goes beyond jittery resistance at the point of sale. Potter maintains that "we have a deep cultural aversion to buying things on the open market. We think we live in a consumer society, but we don’t. We live in an anti-consumer society, which is why we feel the need to “launder” our consumption through a moral filter." That cultural aversion stems from the way market interactions depersonalize us. In the "open market" we are all interchangeable consumers, measured in terms of the greenness of our money. We are nobody special. Still, we want the process of exchange to respect and reflect our unique personal worth. Consumerism must compensate for that loss of dignity inherent in market transactions. One of the ways it achieves this is the ideological cant of customer service -- the customer is important, always right, etc. Another, more far-reaching way consumerism compensates for capitalism is through its promise to constitute our identity anew, allowing us to establish an illusory ontological security by tapping into a pattern of shared meaning through goods.

Still, I think Potter goes too far in saying ours is an anti-consumer society; consumerism is certainly the governing, hegemonic ideology for most of us in our everyday life. Consumerism supplies the solution to most ordinary problems -- problems which it helps give coherent shape to in the form of concrete needs for products: have a "need" (in quotes in deference to Baudrillard)? Buy something! Need purpose? Collect and curate! And to adapt Potter's point, ethical purchasing and crypto-authenticity are new solutions consumerism supplies for the problem of anomie. Feel unreal? Buy something green! Buy something cool!

Like most consumerist needs, the need for authenticity is generated internally within the system of consumerism, which then endeavors to sate it. It doesn't refer to an actual ontological need for some sort of real self-knowledge. Consumerism, as part of the modern social order, makes us conscious of our identity as something contingent and contrived, and it suggests that this fact should make us anxious. Then a series of solutions are offered, themselves contingent on changing fashions and the migration of trends through status hierarchies.

Some critics of authenticity blame the rise of Freudian depth psychology -- the idea of real selves hidden behind the veil of the unconscious -- as the root ideology that fuels the consumerist drive (to use psychoanalysis's term) and spawns the concept of the packaged lifestyle. (See Adam Curtis's disturbing BBC series, The Century of the Self, for example.) We are persuaded that finding the real self entails ongoing experiments with a variety of experiences, which ultimately become reified commodities. Other sociologists and anthropologists see it as a reflection of the decay of stable traditional markers of identity under the pressure of changing technologies. As we become more mobile, both geographically and in terms of class, personal identity is no longer ascribed but becomes open-ended, something we make for ourselves but which is never final or fully accepted, something that must be tested and approved socially, over and over again. The arena of consumerist display becomes one such place for this identity proving.

But to return to Potter's point about a "moral filter": how it became morally permissible to consume over the course of the 18th and 19th centuries is a subject that sociologist Colin Campbell (not the New York Rangers' ex-coach) explores in The Romantic Ethic and the Spirit of Modern Consumerism. The gist is that consumerism doesn't come naturally to us; it's an attitude that must be inculcated, reproduced. That remains true; consumerism must be perpetually reenchanted in the face of its obvious shortcomings and dangers.

Campbell argues that romanticism and the 18th-century cult of sensibility (which saw virtue as the ability to demonstrate deep emotional responsiveness) bequeathed us a general, Walter Mitty-esque ethos of daydreaming for personal fulfillment, which allows us to take pleasure in fantasies about ourselves prompted by consumer goods imbued with emotional overtones. Campbell writes, "The central insight required is the realization that individuals do not so much seek satisfaction from products as pleasure from the self-illusory experiences which they construct from their associated meanings." But as a result, we take less pleasure in the actual consumption. So for example, we delight more in thinking of ourselves as the kind of person who will eat unpasteurized artisanal cheese than we do in tasting it. Or more radically, our anticipatory delight steals the pleasure from the sensual experience. Consumerist pleasure cannibalizes the pleasure inherent in things, preempts it. This leads to the chronic dissatisfaction with the stuff we have and the perpetual urge to want more -- conveniently enough for an economy structured around ever-growing consumer demand.

But the moral permission to consume, he suggests (if I am remembering right), derives from the quasi-religious idea that a rich inner life is proof of having a strong intuitive moral faculty (which in turn reinforces one's sense of being one of the elect, in the Calvinist sense). The cult of sensibility disseminated the idea that emotional responsiveness indicated a noble soul, over and against the aristocratic tradition that located nobility in bloodlines. Consumerism became a means of eliciting that responsiveness and supplying a medium through which it could be displayed. Making such displays then becomes mandatory, normative, the basis by which we show our willingness to belong to society and play by its rules. It becomes the modality of empathy; we show we understand one another by consuming the same sorts of things and reading one another's consumption choices with a certain degree of fluency.

Self-consciousness begins to expand exponentially in the hall of mirrors we represent to one another: "I was looking back to see if you were looking back to see if I was looking back at you." We feel compelled to share every consumer gesture and practice (and technology develops to encourage and sate that impulse) because it all has "relevance" to who we are, but we become alienated from ourselves to an additional degree, watching ourselves mediate the watching of ourselves consuming, and on and on. And this amplification seems to build up an unmanageable pressure, a sense that one can't catch up with oneself, with everything we can potentially be that's promised by all the things and ideas and information there is to consume and record ourselves consuming and reacting to.

What's needed now is permission not to consume in this sense -- permission to ignore what Baudrillard calls "the code" and proceed in the world in a state of self-forgetting.

Wednesday, August 10, 2011

Identity quantities (12 May 2010)

I am on vacation now, so I won't be writing much over the next few days. But while I was in the airport I read Rachel Kranton and George Akerlof's Identity Economics, which seemed as though it would have been right up my alley, considering how often I throw the word identity around. I was a little disappointed in the book because the authors had to labor hard to persuade their intended audience of things I already take for granted, namely that one's sense of identity affects the "individual utility function" -- i.e. shapes the choices one makes to try to garner satisfaction. More problematic, Akerlof and Kranton are forced to presume stable identities to program them into the formulas they use to deduce utility at any given moment for an individual. The basic idea (and this is a simplification done from memory) is that there is an ideal identity presumed, and the individual loses or gains utility according to whether their behavior conforms to the ideal. Cognitive dissonance, in other words, has economic ramifications.
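From memory, and stripped way down, the flavor of the formalism is something like this (my own paraphrase for illustration, not the authors' actual notation):

    U = u(a) + I,    where    I = -t * |a - a*|

Here a is the behavior actually chosen, a* is the behavior prescribed by the norms of whatever identity category the person assigns themselves to, u(a) is ordinary consumption utility, and t measures how much deviating from the prescription stings. Identity enters the utility function as a penalty term that grows with the gap between what you do and what your presumed ideal says you should do.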

I'm reluctant to buy into this completely, because I tend to see identity as an end product rather than a preconceived target; it's something we retrospectively assign to give coherence to our past behavior. I think the ideals are far more fluid than the authors' analysis allows for, and they tend to give short shrift to what is most interesting to me, the productive labor involved in producing and disseminating the "norms" they argue make up identity. When I write about immaterial labor, that is what I am trying mainly to get at -- how identity is now a circulating product and our self-fashioning is a kind of exploitable labor. The term "Identity economics" makes me think of that, the ways in which postmodern capitalism has made the self the ultimate manufactured product in the service-oriented economy -- which is not what Akerlof and Kranton have in mind. They seem to assume identity is stable and readily available to consciousness, but participation in the economic world can challenge what we believe about ourselves. This leads them to write things like: "He loses identity utility because of the gap between the effort he expends and what he ideally would like to do. His off-the-job behavior, in our terminology, is his way to 'restore the loss of identity utility.' "

They are careful to say that these are not entirely conscious calculations, but that still seems strongly implied -- that at some level an identity is preconceived, and behavior is compared and contrasted with it. Part of the issue I have with this is that it seems to reduce identity to a single dimension (often what seems a social stereotype), something that you have more or less of at a given moment. Obviously there are many dimensions working at the same time, many different ways we think about ourselves, many different norms intersecting and contradicting at any given moment, all affecting the utility that comes from who we think we are.

I think that generally, the degree to which identity becomes a rational calculation is also the degree to which we experience alienation, an apartness from a constructed self put on social display as a sort of product, a material manifestation of our social and cultural capital. The most "utility" may be in the ability to not see our identity as something instrumental, but as something natural, lived in, spontaneous. If that is so, then we are in a weird epistemological area where we make unconscious decisions to maximize our sense of our self being something uncalculated and natural. We need to be as unaware of identity economics as possible to derive any benefit from it.

But the cost-benefit analyses the authors provide of the ideological efforts to change people's norms and self-concepts are pretty interesting. Many of Kranton and Akerlof's examples hinge on ideas of being an insider or an outsider -- for example, what benefits an employer can gain by changing how their employees relate to their firm. They suggest that employers can possibly skimp on wages and recoup the costs of making an employee feel like they are an insider, because insider-ness is a kind of wage in itself. Manipulation of self-perceptions from the outside can thus be a cost-control measure; people can be paid more cheaply in affect than in money. That has ready application to the attention economy and internet-driven free labor, which is compensated not by wages but by some often vague sense of recognition. Businesses will be very eager to explore further ways to cut back on wages and to justify, in economic terms, paying workers in affect instead. Akerlof and Kranton provide rudimentary tools to teach firms to trust in spending on ideological adjustments where once they were content to trust money as the ultimate motivator.
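To see how the arithmetic would work, here is a toy version with numbers I am inventing for illustration (they are not from the book): suppose a worker will only supply high effort if the total package is worth the equivalent of at least $50,000 a year.

    pay required if treated as an outsider:  $50,000 wage + $0 identity utility
    pay required if treated as an insider:   $45,000 wage + $5,000 identity utility

If the firm can manufacture that $5,000 worth of insider feeling -- mission statements, team rituals, a title -- for less than $5,000, the difference is pure savings: affect substituting for wages.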



Saturday, August 6, 2011

Price discrimination watch (10 Feb 2010)

In the world of economic abstraction, prices are believed to find their "true" level, a real-time approximation of a good's actual value (if there is such a thing), by balancing supply and demand. This process of price discovery is a central pillar of free-market ideology; drawing on Hayek, free marketeers read into prices the decentralized distribution of information vital to the development of the economy. Prices let the people on the ground, knowingly or not, translate local conditions into incentives that can be communicated far and wide.

But lots of things jam up the signal, as when prices are "sticky" and can't quickly adjust to shifts in the sovereign consumer's whims. Then, the discussion shifts to price elasticity of demand -- what range of prices is possible for a good. Demand is "inelastic" if it's not much affected by price.
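For reference, the standard textbook measure here is the price elasticity of demand, the percentage change in quantity demanded divided by the percentage change in price:

    elasticity = (ΔQ / Q) / (ΔP / P)

Demand is called inelastic when the absolute value of that ratio is less than 1 (a price hike barely dents sales) and elastic when it is greater than 1.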

That brings us closer to what retailers' practical experience with prices seems to be: Their primary concern is to figure out how to extract as much as they can from a given customer for a given good. That is, they need to divide their customers into segments that see different menus of prices -- as when Americans in Prague get one menu from restaurateurs, Czechs another. Customers don't like this. It immediately seems unfair once we realize such discrimination is happening. Everybody wants to have the illusion that they are getting the best deal, or if not that, at least the same deal everyone else is getting.

Online retail -- as this CNN piece and this WashPost article by Joseph Turow, both from 2005, detail -- seems like it would be the perfect place to perfect techniques of price discrimination. We create a concrete demographic profile through our trackable online behavior (the sites we visit, the sort of goods we click through to have a closer look at, who we know on Facebook, that sort of thing), which can be used to make assumptions about the prices we can afford to pay. And in the absence of printed price tags or other customers at the scene of exchange (the point of sale), the discriminatory price can be generated on the spot. Think of it as automated haggling that has taken place without your having to go through all the awkward trouble and conflict. You are adequately sized up and the appropriate line in the sand (for retailers, at any rate) is drawn.
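To make the "automated haggling" concrete, here is a deliberately crude and entirely hypothetical sketch of the logic in Python -- none of the signals, weights, or numbers correspond to any real retailer's system; they are invented only to show the shape of the mechanism:

    # Hypothetical sketch of segment-based price discrimination.
    # Every signal, weight, and number below is invented for illustration.

    BASE_PRICE = 100.00  # the "list" price nobody is actually meant to pay

    def estimate_willingness(profile: dict) -> float:
        """Turn a tracked behavioral profile into a price multiplier."""
        multiplier = 1.0
        if profile.get("browses_luxury_brands"):
            multiplier += 0.15   # shops upmarket: quote more
        if profile.get("uses_price_comparison_sites"):
            multiplier -= 0.10   # known bargain hunter: show a "deal"
        if profile.get("zip_code_income_percentile", 50) > 80:
            multiplier += 0.10   # inferred affluence
        if profile.get("abandoned_cart_recently"):
            multiplier -= 0.05   # needs a nudge to convert
        return multiplier

    def quote_price(profile: dict) -> float:
        """The automated haggle: a price drawn on the spot for this visitor."""
        return round(BASE_PRICE * estimate_willingness(profile), 2)

    frugal = {"uses_price_comparison_sites": True, "abandoned_cart_recently": True}
    flush = {"browses_luxury_brands": True, "zip_code_income_percentile": 92}
    print(quote_price(frugal))  # 85.0
    print(quote_price(flush))   # 125.0

The point of the sketch is only that the line in the sand gets drawn per visitor, from data the visitor never sees.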

Amazon famously and clumsily tried this back in 2000, but users quickly discovered it and protested. Paul Krugman wondered if it might be illegal under the Robinson-Patman Act. Still, there was little reason to expect the issue to disappear. It's too potent a weapon in the retail arsenal, and it seems such an ideal application for all the data now being gathered in our new Web 2.0-powered knowledge economy.

But apparently there is some intramural strife among businesses: This recent NYT article looks at the battle between online retailers and manufacturers over who gets to set prices. The conflict, the article reports, stems from a 2007 Supreme Court ruling that gave manufacturers more power to dictate how prices can be advertised -- in sheer defiance of Hayek. Manufacturers want to stop online retailers from using their goods as loss leaders, tarnishing the brand with cheapness and presumably undermining their ability to price discriminate elsewhere.
[Manufacturers] say the competitiveness of the Internet has unlocked a race to the bottom -- with everyone from large corporations to garage-based sellers ravenously discounting products, and even selling them at a loss, in an effort to capture market share and attention from search engines and comparison shopping sites. They also worry that their largest retail partners may be unwilling to match the online price cuts and could stop carrying their products altogether.
“If there isn’t that back-and-forth between manufacturer and retailer, it’s just a natural tendency to drive the price down to nothing,” said Wes Shepherd, chief of Channel Velocity, which sells software that allows companies to scour the Web looking for violations of pricing agreements.
I've been thinking about that last quote all day, and I still can't make any sense of it. The online retailers don't exist to give products away to thwart manufacturers. What am I missing here?

Thursday, August 4, 2011

The Ethics of Price Discrimination (5 Jan 2010)

Price discrimination is economics lingo for the retail practice of charging customers different prices based on what they are willing to pay. Economists generally have no problems with this; that's just how fairness is defined in capitalist societies. Many consumers, I suspect, find the practice as unpleasant as I do, not merely because it can seem unfair to pay more than somebody else for the same good, but possibly because price discrimination undermines the cherished ideological tenet that there's a "true price" for goods, that useful goods are really worth something definite, and that fundamental use value is indexed to a good's cost. Instead, we learn that the price of many goods is indexed to our gullibility, to our negligence, or to retailers' ability to dupe us.

The price-discrimination game works best when pricing is not transparent -- visit a carpet warehouse, for instance, and try to find a price tag. Consumerism puts an end to the norm of haggling, however, since shopping in a consumer society must function as entertainment, and the shifty confrontations, the agonistic bargaining with salespeople, unnerve a lot of people. Haggling makes us aware of all the asymmetries, makes us wary of bad deals, makes us aware that oftentimes we must be willing to walk away with nothing in order not to get ripped off. But consumerism requires a far more passive consumer who feels licensed to say yes to everything, to indulge in the pleasures of impulse purchasing, and to take pleasure in the gratification of that impulse as much as in the thing purchased, which more and more becomes a mere alibi for luxuriating in the retail world, where flattery and fantasy blend and become more salient to us. Shopping becomes an escape from conflict.

Hence, we have become more comfortable shopping with fixed prices, but retailers typically require prices to be less sticky in order to make a profit. They need to increase margins wherever and whenever they can. So there is constant tension between broadcasting a price to draw consumers in, and masking prices to charge consumers according to their class. Several strategies have evolved to address this: They can routinely reprice goods (easier now with automated systems), they can use various menu tricks to get consumers to choose more-expensive options (e.g., offering a ludicrously expensive option to make the second-most-expensive option seem reasonable), they can offer loss leaders, they can bury additional fees in the fine print, they can sell the same crap with different labels to different customer classes, they can advertise a discount but not register it at checkout.

I encountered a blend of all this when I bought a TV this past weekend. I first went to P.C. Richard in College Point, in Eastern Queens, and tried to wrap my mind around the profusion of makes and models, all of which seemed largely the same, except for screen size and resolution. Various bells and whistles seemed tacked on to certain brands, but the flat-screen TV basically seems like a commodity to me -- any one would do, and I'd feel best about the one I selected once it was separated from all the others and began to become mine. Still I couldn't bring myself to simply buy the cheapest one on offer. The sale price for a particular model by a brand I have heard of was prominently displayed. I made a note of it, then went to Best Buy, where similar models were far more expensive (Best Buy is not always the best buy, apparently; they seemed to be banking on their mere reputation as a bargain retailer at this point.) So I went to the P.C. Richard in my neighborhood, and was baffled to find that the same sale model from the College Point store was priced $100 higher. I asked about it, and the salesman immediately matched the price I had seen at the other store. Then he tried to sell me a set of cables for $50. (It turned out I needed the cables to connect my laptop to the set, but you can get them on Amazon.com for under $20.) I was also buying a humidifier that was advertised on the showroom floor at $19, but when it was rung up, it defaulted to $25. I had to look over the salesman's shoulder at the register to notice this and have him correct it.

My point is that the TV purchasing process was riddled with opportunities for me to be lazy and get charged more as a result. My need for perpetual vigilance is no less than it would have been had I been required to haggle for it, only the illusion of stable prices was there to discourage me from worrying about anything. Should I yearn for a return to a haggling economy? Should I feel like I beat the system, or is that just more ideology reconciling me to the system? Should I point to the metaphoric scoreboard and celebrate the "bargain" I received at others' expense? Should I shop exclusively at flea markets and bazaars? Can any sort of regulatory intervention stop deceptive practices, or will retailers always find a new loophole or semi-deceitful practice to differentiate customers and dupe them according to their ignorance? Was it me? Was it you? Questions in a world of blue.

Corporations seem designed to maximize profit by exploiting every possible opportunity in a depersonalized economy. Any accommodation a big company happens to give a customer is probably the result of a probability calculation modeled on an analyst's spreadsheet. The message that corporations "care" about us is coolly manufactured in marketing departments as a sales tool and is blended with efforts to expedite price discrimination, to separate us into a million individuals cutting our own deals with that much less collective bargaining power.

If all competitors in an industry de facto collude to make customers miserable, so much the better -- just look at major U.S. airlines and cell-phone-service providers, or at the banks and credit-card companies. And look at health care, in which pricing transparency does little to contain costs. ("The evidence suggests the benefit of transparent pricing is limited, particularly when insurance companies are involved." Hmm -- I guess that's probably coincidental.)

Mike Konczal's recent post about businesses preying on the "cognitively weak" looks at some of the antisocial incentives of financial firms. Imagining himself an evil bank executive, he surmises he might be thinking along these lines, targeting old people whose brain function is fading:
Hitting up people with a lifetime of savings suffering from dementia is some real, serious money we can tap as a revenue source. Indeed, someone who forgets what they were doing between reading “Bullshit Surcharge: $40” on their statement and calling the customer support number to complain is our ideal customer -- it’s the person who will be most profitable to us going forward.
To hard-line free-marketeers, who tend to argue that companies are ethically bound to take advantage of their customers' foibles when they can get away with it, this is just price discrimination working its magic. The weak are punished, and the wise are thereby subsidized. It's financial innovation at its best. As Konczal explains, those who
are excited about how the current financial service industry excels because it punishes the ignorant and irresponsible: on what specific grounds could you not have to embrace, much less oppose, the Evil Rortybomb Plan above? I got a sense of proportionality in those arguments, that the most ignorant should have to pay the most. I don’t think anyone would argue against the idea that those suffering from dementia will be the most ignorant of their actual situations and most irresponsible in the sense that they aren’t capable of being responsible. The extra fees and traps they pay will in part also go to those enjoying extra bonuses and continued free financial services. It’s a win-win from this point of view, no? One must be consistent.

It doesn't take much for price discrimination to become plain old discrimination. Businesses want prices to differentiate the smart from the foolish to maximize the exploitative potential in society, whereas the rest of us want prices to indicate the social value of things so we can make more of what we need and stop making stuff we don't want. The result is a war over the meaning of prices, played out in the medium of information. Companies use disinformation and marketing to conceal beneficial or money-saving information from consumers, resulting in prices that can't be relied upon to mean much of anything.

Tuesday, August 2, 2011

Refuting the paradox of choice (24 Nov 2009)

The idea that consumer choice is freedom is perhaps the quintessential piece of consumerist ideology, so it is no surprise that economic pundits like Tim Harford, writing in the FT, would be eager to report the evidence against it.
The average of all these studies suggests that offering lots of extra choices seems to make no important difference either way. There seem to be circumstances where choice is counterproductive but, despite looking hard for them, we don’t yet know much about what they are. Overall, says Scheibehenne: “If you did one of these studies tomorrow, the most probable result would be no effect.” Perhaps choice is not as paradoxical as some psychologists have come to believe. One way or another, we seem to be able to cope with it.
Interesting that the paradox of choice is here presented as something the psychologists merely want to believe -- is this projection at work? Tyler Cowen, who declares that "the so-called paradox of choice is one of the most overrated and incorrectly cited results in the social sciences," links to Harford's story approvingly.

Whether you accept the refutation (or the original observation) seems a question of whether you trust the methodology of these sorts of studies. My trust is undermined by the fact that the studies themselves have yielded contradictory results: Harford reports, "Neither the original Lepper-Iyengar experiments nor the new study appears to be at fault: the results are just different and we don’t know why." I'm skeptical generally of efforts to replicate real-world psychology in artificial lab experiments. The arbitrariness of the tasks subjects are asked to participate in, and their abstraction from lived social reality, mean that subjects have turned off their self-consciousness to a degree and are behaving artificially, differently from how they would act in a situation with true social implications and ongoing ramifications for their self-concept.

Determining the psychological impact of the number of choices is a proxy war over whether restrictions should be placed on markets in order to benefit consumers -- or even to encourage them to consume less. I don't think any amount of research can ultimately arbitrate what is an ideological question.

Monday, August 1, 2011

What we deem rational is ideological (26 Sept 2009)

Will Wilkinson highlighted this paragraph about economism and behavioral economics from the FT's Economists Forum blog:
Behavioural economists have uncovered much evidence that market participants do not act like conventional economists would predict “rational individuals” to act. But, instead of jettisoning the bogus standard of rationality underlying those predictions, behavioral economists have clung to it. They interpret their empirical findings to mean that many market participants are irrational, prone to emotion, or ignore economic fundamentals for other reasons. Once these individuals dominate the “rational” participants, they push asset prices away from their “true” fundamental values.
This helped me clarify in my own mind the muddle I've been in about the ideology of "perfect markets" and how that ideal is possibly used ideologically. The economists quoted, Roman Frydman and Michael Goldberg, seem to lay it out pretty clearly. At the behest of most economists, neoclassical or behavioral or otherwise, we've fallen into the habit of elevating what economists normatively deem rational to the only true form of rationality, while ignoring what human behavior seems to suggest should be called "rational" -- that is, what ordinary people tend to do when confronted with various incentives or dilemmas.

And the motivating force behind all this is the effort to isolate "true" asset values -- a quest that has had an ignominious journey through the history of political economy, as Justin Fox's The Myth of the Rational Market well documents. (For what it's worth, I reviewed that book here.) "True" asset values underpin the financial sector, motivate its investment strategies and analysis. The pursuit of them keeps money circulating, which keeps economies growing (on paper, anyway).

Determining what constitutes value has vexed economists from the beginning of the discipline. Marx's Capital is essentially a long meditation on the origin of "surplus value" as it arises out of exploited labor -- a notion that depends on his definition of value as socially necessary labor time. The labor theory of value has been discarded by economists in favor of marginalism, which to remain coherent requires a human subject that exhibits the sort of "rationality" behavioral economists are undermining.

The alternative (and I am not entirely sure it is preferable) would seem to be to render "rational" those "other reasons" that people have for behaving -- the decision-making approaches that are not driven by straightforward utility calculus. That means returning to a morality or an ethos that is not based on the market but on other norms of human interaction, ones that evolve not from impersonality (the market's liberating feature) but from integrative social ties. The danger is that this would reinstitute tribalism and ethnocentrism as the source of norms -- a return to feudalism or something worse instead of the development toward cosmopolitanism that arguably the globalization of market forces has ushered in.

Learned worthlessness (10 Sept 2009)

In the most recent NYT Magazine, the always interesting Jon Mooallem has an article about the self-storage industry in America. The need to store one's belongings in a 6-by-6-foot box miles away from where one lives is a pretty good indication that some sort of insanity has taken hold, but the very normality of that scenario shows just how entrenched consumerism has become. Once the need for storage was transitory -- a move or a divorce necessitated it. But in recent years, "the line between necessity and convenience -- between temporary life event and permanent lifestyle -- totally blurred," Mooallem explains. It has become convenient to live as though we are always in transition, as though no set of belongings is stable or anywhere near complete or fulfilling. Obviously, this betokens the triumph of a consumerist ideology.

We accumulate things that we can't possibly use, but we remember enough of what they once signified when we bought them -- the moments of excitement, the fantasies of fulfillment, that they brought us -- that we can't throw them away. Much of what is stored is furniture, it turns out, in part because it's expensive enough not to seem disposable, but cheap enough to easily replace:
The marketing consultant Derek Naylor told me that people stockpile furniture while saving for bigger or second homes but then, in some cases, “they don’t want to clutter up their new home with all the things they have in storage.” So they buy new, nicer things and keep paying to store the old ones anyway. Clem Tang, a spokesman for Public Storage, explains: “You say, ‘I paid $1,000 for this table a couple of years ago. I’m not getting rid of it, or selling it for 10 bucks at a garage sale. That’s like throwing away $1,000.’ ” It’s not a surprising response in a society replacing things at such an accelerated rate — this inability to see our last table as suddenly worthless, even though we’ve just been out shopping for a new one as though it were.
This phenomenon suggests we are afflicted with a kind of schizophrenia about goods. As behavioral economists have long known, we overvalue what we own (the "endowment effect"), yet at the same time we can't resist replacing it when we perceive a bargain. We appreciate taking advantage of a sale for its own sake, regardless of whether the opportunity conforms to any actual need, and regardless of whether we can accommodate the souvenir of our consumer triumph. We become trained to recognize potential value in everything and have a hard time recognizing when something has become worthless. It seems like the dark side of the congenital optimism that Americans are supposed to have; we can't give up on anything we once invested our faith in.

Though it has prompted the kind of turmoil that was once the storage industry's bread-and-butter, the current recession is also forcing people to give up spaces because they can't afford the rent. A measure of my own insanity: all I could think of while reading Mooallem's article was "Wow, I bet the dumpsters outside these storage units that people can't afford anymore are full of great stuff." But in this, I am just a reflection of that American optimism. Mooallem secured this great, telling quote:
“I really think there’s a spirit that things will turn around,” Jim Chiswell, a Virginia-based consultant to the industry, told me. “I believe that my children — and both my children are proving it already — they’re going to have more at the end of their lifetimes, and more success, than I’ve had. And so will their children. I don’t believe the destiny of this country as a beacon of freedom and hope is over. And I believe there will be more growth, and more people wanting to have things and collect things.”
What is hope if not the hope to have more?

Thursday, July 28, 2011

Clinging to our idiosyncrasy (15 July 2009)

Chris Dillow makes a good point about the similarities between Marxian and neoclassical assumptions about human behavior:
The basic premise of neoclassical economics, that people respond to incentives, echoes the Marxian notion that individuals are bearers of social relations. Both stress that an individual’s behaviour arises from the position he finds himself in - which influences the costs and benefits he perceives - more than from his character. Of course, both views can be pushed too far. But both remind us not to see human action as rising from mere idiosyncratic disposition.
Of course, the idea that our own action stems from our own uniquely idiosyncratic disposition is something we probably all have a tendency to assume. That seems to me the core of capitalist ideology, that we as individuals are essentially responsible not only for our actions but also for the surrounding circumstances that determine the possible range of actions. Dillow points out how this converges with behavioral economists' findings:
there’s an important convergence between Marx and behavioural economics. Marxists believe that false consciousness can bamboozle workers into accepting capitalism. If we want to know how this happens, the cognitive biases and heuristics programme helps us. For example, the fundamental attribution error leads us to over-estimate the extent to which the poor are to blame for their poverty, and to under-rate the importance of environmental or societal forces. The availability heuristic leads workers to blame immigrants for unemployment rather than less obvious forces. The just world phenomenon and system justification cause us to believe that capitalism must be fair. The status quo bias causes us to accept existing evils rather than risk new ones. And adaptive preferences cause the poor to resign themselves to their fates and want less, with the result that capitalist democracy sustains inequality.
If one's condition can be read as a statement of what is deserved, our empathetic instincts can be tempered if not squelched altogether. With empathy out of the way, the exchange process becomes more unfettered, and can grow to become the basis for more and more of social life, governing more of our interpersonal interactions. Our emotional responsiveness starts to register in our consciousness as irrational miscalculations of our interest, as maladaptive tendencies. In the name of preserving our individuality -- of hewing to the assumption that our idiosyncratic disposition determines our behavior -- we end up even more alienated, with a far more mechanistic view of our own behavior. The stubborn belief in our own special uniqueness is harnessed to a view of human behavior that allows for virtually no spontaneity whatsoever, that presumes our best self always acts out of the calculation of costs and benefits and explains away sacrifice or altruism as covertly self-serving.

Anyway, I know I have a tendency to cling to my sense of my own idiosyncrasy and take a peculiar pleasure in what I think it might prove about me, about my nonconformity, about my ability to resist manipulation, about my ability to transcend social norms and expectations and realize some higher originality. I'm into obscure music; I have a taste for difficult books; I don't watch popular TV shows. I won't go see the Transformers sequel. But I think that my presumptions of uniqueness are probably what guarantee my overall insignificance -- it keeps me motivated to remain deliberately apart, internally praising myself to the extent that other people don't get me, thereby guaranteeing that I will only be happy with myself to the degree that I influence no one. I wonder if this attitude truly is personal idiosyncrasy or the product of late capitalist ideology. Isolating individuals in their presumed specialness is an effective way of rendering them vulnerable to marketing appeals, to consumerism generally.

Thursday, July 21, 2011

Exploitation as business model (10 June 2009)

I was happy that credit-card-reform legislation passed, but admittedly, Arnold Kling, writing for the Atlantic's business site, seems to have a point here. He cites a number of examples from an old Fast Company article of consumers falling for really bad sales pitches from Capital One, and then concludes:
Many readers of the article were appalled by the consumer exploitation implicit in this data-driven marketing that seemed to impress the magazine. I can certainly understand wanting to protect consumers from such exploitation.
My concern, however, is that ultimately consumers with low intelligence and low conscientiousness are inevitably going to be exploited. If you remove one means of exploitation, another will arise.
With tighter credit card regulation, my guess is that credit card companies will stop exploiting some of the consumers with low intelligence and/or conscientiousness. Instead, these consumers will be exploited by other lenders or by merchants. But I doubt that legislation or regulation can stop the exploitation of such consumers altogether.
That's true; there will always be ill-informed, ignorant, negligent, or just plain stupid people who will constitute the prey of unscrupulous businesses. But that unfortunate situation shouldn't lead us to conclude that all businesses should be allowed to operate so that they increase the number of ignorant and negligent by making the most of asymmetrical information. It seems that the credit-card business is one in which competitors have no incentive to compete by providing lucid explanations to customers -- it's much like the cell-phone-service business, where there's de facto collusion to offer consumers only opaque and confusing plans and take advantage of inadvertent fees and contract-breaking hassles. So with credit cards, the government is stepping in not to try to legislate away stupidity or consumer laziness, but to try to create a business environment that discourages companies from making a business model out of making society more miserable.

Tuesday, July 19, 2011

Descent into autarky (5 March 2009)

This post from Willem Buiter presents an interesting way of thinking about markets, not as theoretically convenient constructs to suit economists' models, but as improbable, fraught and fragile institutions. They are not transparent and frictionless; they don't function automatically. Exchange, as such, uses up resources above and beyond what is exchanged, and these costs must be borne. Markets are never "free." His case is elegantly argued, and it's worth reading the whole thing. But here's his distillation:
The conclusion, boys and girls, should be that trade - voluntary exchange - is the exception rather than the rule and that markets are inherently and hopelessly incomplete. Live with it and start from that fact. The benchmark is no trade -- pre-Friday Robinson Crusoe autarky. For every good, service or financial instrument that plays a role in your ‘model of the world’, you should explain why a market for it exists - why it is traded at all.
Buiter's point is to discredit the efficient markets hypothesis -- the theory that markets aggregate information from investors and interested parties and prices permit efficient capital allocation. If markets themselves impose costs, these costs distort prices. And if prices are supposed to reflect value at some point in the future, who knows how far off that future is? Buiter says such models presume "a friendly auctioneer at the end of time - a God-like father figure - who makes sure that nothing untoward happens with long-term price expectations or (in a complete markets model) with the present discounted value of terminal asset stocks or financial wealth." No such figure exists, and the models bear little resemblance to what goes on in actual economies.

As a result, economists henceforth, according to Buiter, will be forced to use "behavioural approaches relying on empirical studies on how market participants learn, form views about the future and change these views in response to changes in their environment, peer group effects etc." In other words, they will have to find tools to study how and why confidence ebbs and flows. They will become philologists of happy talk.

An aside -- I wonder if the notion of autarky offers a different way to conceive of what is happening in the recession as the economy contracts: The costs of maintaining markets -- the trust, credit, contract enforcement, disclosure and so on -- have suddenly become too high to justify the amount of exchange we had before. The sum of social needs hasn't changed, but exchange has fallen below some critical threshold that makes some of those needs impossible to satisfy and thus, in effect, superfluous. Perhaps we experience this as a retreat into self-sufficiency at a personal level -- not merely a return to thrift (though that is a part of it) but a move toward a fantasy of autarky in which we are able to maintain ourselves by our own efforts without the vagaries and exploitations and chaotic dangers of exchange and markets. But of course autarky is basically impossible; it's a dangerous delusion to think that we can exist without being part of a community -- markets included. It seems short-sighted to collapse the idea of community into markets, and make them one and the same for all intents and purposes; but it's equally wrong to be tempted by recession into thinking that community can exist without thriving markets, that we can somehow be better off when we are exchanging less.

Saturday, July 16, 2011

Buying an experience (27 Feb 2009)

It may turn out to be a question of semantics, but the idea of "purchasing experiences," as this PsyBlog item discusses, has always grated on me. It seems to conform the pleasures of living to the calculus of shopping, as if they were essentially the same, and the consumerist paradigm can be applied to all pleasures and desires. Everything is for sale, and everything has its price, if you only think of it in the right way. (Just ask Gary Becker.) Is this in fact true, that rational calculation underlies even our most spontaneous-seeming choices and we just choose to block it out of our consciousness out of ideological convenience, or is hyper-rational-choice analysis of human behavior itself the ideological proposition? The PsyBlog post confirms what most research into the subject has found: that buying experiences is better than buying stuff, because the stuff sticks around and becomes lame and/or embarrassing, while the experiences become warm and fuzzy memories.
Experiences also beat possessions because they seem to:
* Improve with time as we forget about all the boring moments and just recall the highlights.
* Take on symbolic meanings, whereas those shoes are still just shoes.
* Be very resistant to unfavourable comparisons: a wonderful moment in a restaurant is personally yours and difficult to compare, but all too soon your shoes are likely to look dated in comparison with the new fashions.
That makes a lot of intuitive sense to me, but I just wish it weren't represented as a matter of what to buy. Can we simply have experiences rather than arranging to purchase them ahead of time?

I had a similar feeling about another consumer-choice-related post. Jonah Lehrer, who has just written a book called How We Decide, recently posted about a consumer-research study built on the premise that we all operate with two distinct decisionmaking systems: "the slow rational, deliberate approach (System 1) or the fast, emotional, instinctive approach (System 2)." The study set out to determine which yielded better decisions, using the metric of "consumer consistency." I have read the rationale for this several times, and have failed to understand it as anything other than an inexplicable plug for Nikon cameras.
When faced with a choice task, consumers need to evaluate the overall utility of each of the alternatives they are facing and compare these utilities in order to make their final choice. Such a utility computation process is likely to vary from case to case based on the exact information consumers consider, the particular facts they retrieve from their memories, as well as the particular computations that they carry out; any of these process components is a potential source for decision inconsistency. For example, when shopping for a new Nikon digital camera, it is possible that consumers might change the aspects of the camera they focus on, the particular information they retrieve from memory, the relative importance weights they assign to the attributes, or the process of integrating these weights.
As researchers, we often treat such inconsistencies as "noise" and use statistical inference tools that allow us to examine the data while mostly ignoring these fluctuations. Yet, such noise can convey important information about the ability of the decision maker to perform good decisions, and, in particular, it can reflect their ability to conceptualize their own preferences. In the current work we focus on such inconsistencies / noise in decision making as indicators of the ease in which consumers can formulate their preferences: we focus on the question of whether the cognitive or emotional decisions are more prone to this kind of error.
I'm not sure why inconsistency is defined as "error" (Am I reading this right?) or why they assume that beneath the "noise" evoked in a given decisionmaking moment is a preference that is true and consistent over time for a particular individual. People's desires aren't that static. And the "noise" in the decisionmaking process is what makes us more than automatons; it makes us strange to ourselves, potentially, but that also means we discover new possibilities for who we are that we wouldn't otherwise reason our way into. I tend to think that our identity is not so continuous as the researchers' assumptions imply; that instead our identity tends to be conjured up by the demands of a given context -- to put it in lit-crit jargon, subjectivity is intertextual. It's relational. It's not a given, transcendent thing that then responds to situations and decisionmaking opportunities. The "noise" is everything.

If we start making consistent decisions when forced to rely on our "emotional" decisionmaking system, as the study found, that suggests to me a failure of imagination, a retreat into safe choices in response to being overstimulated. The emotional brain is boring in its consistency, not "rational" as Lehrer suggests. Again, this could be semantics, could be a matter of how you define "rational," but it seems irrational to me to continue to choose the same thing over and over again. That seems sort of regressive, tending toward an infantile repetition compulsion. As much as I complain about gratuitous novelty-seeking, the idea that only consistent choices are rational seems even more absurd. (Am I missing something about this study? I must be.) I sometimes feel as though I am coming around to a totally indefensible and irrational position that we shouldn't bother to study how we choose at all, since it can hardly be anything but a weapon in the hands of marketers to control what we choose, to force out the noise that makes us unique to ourselves and replace it with an official, monologic hum.

Friday, January 21, 2011

Financial narratives (14 Oct 2007)

I spent a lot of years studying literature, and though it frequently seemed like I was pursuing vanity degrees that would have no use in the "real world," I tried to put a brave face on it, telling myself and anyone who would listen that the study of how narratives function within a culture is integral to understanding that society's course; that manipulation of those narratives can dictate the course of history; that the values individuals subscribe to can often be traced to narratives that have cultural currency. So it's gratifying to see a piece like this one in today's NYT, by Robert Shiller, of Irrational Exuberance fame, that confirms my rationalizations.
Consumer confidence indexes have not yet fallen as they did at the onset of the last two recessions. But confidence is a delicate psychological state, not easily quantified. It is related to the stories that people are talking about at the moment, narratives that put emotional color into otherwise dry economic statistics....
It is clear that salient, emotion-arousing narratives — those that capture the popular imagination and damage public confidence — are central to the etiology of recessions. As these stories gain currency, they impel people to curtail their spending, both in business and their personal lives.
Narratives that render the economic data comprehensible at an emotional level are clearly significant, but Shiller might have been more specific than to attribute those narratives to what "people" are saying. The business press mainly exists to fashion those narratives, and many of its outlets take that responsibility to heart and continually cheerlead for the economy, trying to implant confidence-building stories about the current situation in the general public. I'm not thinking of the daily market roundups, whose main subterfuge is to confidently present explanations of why markets moved in this direction or that (see the epic account of day-trading maestro Victor Niederhoffer in this week's New Yorker for a look at how much more sophisticated this kind of interpretation can become), but of BusinessWeek's "Business Outlook" section, the weekly roundup of the market's performance and its interpretation of new data for various indicators. This column is perpetually optimistic, finding the bright side to any grim piece of economic news and promising recovery or further growth or bigger profits or higher stock prices around every corner. It's a testament to how slippery data is, and how many different factors and figures can plausibly be invoked in providing a picture of the economy -- the sunshine gang at BusinessWeek can simply keep digging around until they find some indicators or some time frames to contextualize numbers and spin them positively. In reality, the economy has far too many moving parts to accurately describe the interaction of all of them; this opens the window to ideological interpretations of carefully selected partial accounts of the total situation. At stake in these narratives usually is the eventual level of state intervention in the economy, though other messages are often embedded: hard work pays; only greedy speculators disrupt the operation of markets; growth is benevolent and leaves no one behind; containing labor costs (i.e. wages) ultimately benefits everyone; freedom is best understood as purchasing power and entrepreneurial opportunity, etc.

If the business press is there to cheerlead for market forces' benevolence, and sustain investors' confidence, how does the recession narrative get started and catch on? It may be a matter of data too discouraging to ignore, though just a look at how the National Association of Realtors' economists spun away the problems in the housing market for so long is enough to make that premise seem questionable. Recession narratives may not get started until they can be comfortably cast in the past tense, which may be why recessions are often not identified until well after the fact, as Shiller notes. And the business press ultimately has to represent conditions realistically in order that those who consume the information can profit. There's still money to be made in a recession, after all.

Recessions, according to Shiller, hinge in part on consumer confidence, and a force more significant than the business press in sustaining consumer confidence is the advertising industry, which is always touting the benefits of consumption in increasingly intrusive ways. And these messages become self-reinforcing; once we accept their basic truth, we may at some level yearn to see them reiterated, which makes us more attentive to ads and the fantasies they enact. A study of the appeal of fiction -- studying literature -- can perhaps be useful in understanding how ads design their own fantasies and make them efficacious. The vicarious pleasures experienced in consuming experiences supplied by the media may translate into a faith in shopping to provide similar joys, reinforcing the identities people want to construct for themselves. And our sense of confidence then rests in how effectively we can make those identities from things we buy and consume rather than other sorts of prospects -- that our identity stems primarily from being consumers makes sustaining consumer confidence something of a cinch.

Thursday, January 13, 2011

The default effect (7 June 2007)

When I taught college classes, nearly every paper my students turned in was printed in the same font, Times New Roman. Of course, this was not because they were all serif lovers or because they were actually following the instructions supplied on my syllabus; it was because at the time, that was the default on Microsoft Word, which was the default word processor on the university's computers. (In fact, a paper in a different font was often enough to raise suspicions of plagiarism or, more likely, that the student was gaming the specifications to conceal a skimpy word count. Were I teaching today, I'd assign essays with a word count -- not a useless page count -- and I would ask that the count be listed in the header with the student's name. Of course, few would probably follow these instructions anyway, so it wouldn't get me very far.)

Why this anecdote? To reinforce the main point I took away from Cass Sunstein and Richard Thaler's paper on libertarian paternalism, namely that defaults are extremely sticky. Because of inertia, endowment effects, and the general sense that they have been chosen benevolently, defaults tend to shape the behavior of many users, who can't be bothered to change them -- or who, alternatively, enjoy the freedom from having to choose. Libertarians worry about this sort of thing, because it means, in many cases, that some bureaucrat in Redmond, Washington, has decided what your documents will look like, not you. To see what you really wanted your documents to look like, you'd have to be forced to choose a font every time you created a new document. Then we'd have a purer revealed preference.

But often, the point of defaults is to liberate people from choices, which is why a definition of freedom as choice is a bit problematic (unless you want to get all recursive and contemplate choosing not to choose). People rely on defaults when they are essentially indifferent -- when the effort required to choose isn't sufficiently rewarded by satisfaction in whatever choice is eventually made. No font other than Times New Roman will give enough pleasure to make up for the time wasted picking it (unless you are procrastinating, in which case the pointless font picking has a different sort of utility).

A problem may arise, however, in how your acceptance of the default will be understood. The danger is that it might be seen as an active preference for, in this case, the font itself, rather than for sheer indifference. If the default is widely recognized as a nonchoice, then there's little danger. But if it isn't, then you've lost the ability to be neutral on a subject -- to escape being judged for your aesthetics or style regarding typefaces. And the ability to be neutral, to be above judgment and avoid being judged, is becoming more and more valuable as more and more of everyday life is infected with style, and more and more purely aesthetic decisions are forced on us, say, in Target, with its aestheticized toilet brushes. When we can publicly and unequivocally opt for the default, we can escape this trap, preserve a little privacy for ourselves about our tastes, avoid displaying personality in something inane and conserve it for more important matters, the aspects of life we'd actively choose to invest ourselves in. This desire to be left alone is frustrated when no clear default is supplied, and the anxiety created by default-free scenarios may present a business opportunity so compelling that defaultlessness may itself become a default. (It opens up a whole service industry -- spawning advisers and counselors and image consultants and whatnot -- every time consumers are brought to confront unfamiliar choices that will say something about who they are to the world.) Defaults allow us to evade responsibility for choices we don't want responsibility for, even as commercial interests try to thrust that responsibility on us. It has become very hard to evade signaling opportunities, and at some point signaling fatigue must begin to set in and we get sick of having to project our identity at all times and in all things. There is luxury in feeling like it doesn't matter; that you can be the guy who doesn't worry about dressing up for work or having the coolest gadgets or the most up-to-date and exotic music collection or what have you. Freedom from the onus of signaling seems to promise a return to the freedom to actually experience things, as they are -- to enjoy Times New Roman as Times New Roman, even.

So defaults are potentially a force for good, helping stem fashion's creeping into everything. (Would that I had a default option for my wardrobe.) They are so good, in fact, that it's easy to imagine our having to customarily pay for the privilege of having defaults set for us, to pay for the permission not to choose. The proliferation of services that do your shopping for you is an intermediate stage toward this, I think. Outsourcing decision making might be the next wave of conspicuous consumption; there just need to be clear ways to signal that you aren't behind the wheel of your lifestyle. At that point, we will have come full circle, and the desire to avoid signaling will itself have become something we signal to accrue status.

You don't have to be a postmodernist to realize that some sort of default setup is impossible to avoid. The terms are always already given in some way that shapes the resulting circumstances, so there's no transcendental signified of defaults -- nothing that can stand outside the realm of influence and connotation. There is no neutral way to present things that doesn't already embed possibly coercive interpretations or default settings or implications or emphases. Sunstein and Thaler are especially clear about this:

If the entitlement-granting rules seem invisible, and to be a simple way of protecting freedom of choice, it is because they appear so sensible and natural that they are not taken to be a legal allocation at all. But this is a mistake. What we add here is that when a default rule affects preferences and behavior, it is having the same effect as employer presumptions about savings plans. This effect is often significant.

So monetizing defaults isn't about withholding them so much as making them as undesirable as possible and charging to make them less undesirable. (Maybe Microsoft could make Comic Sans the default and charge for an explanation of how to change it.) With regard to Sunstein and Thaler's concern with opt-in 401(k)'s, that's what employers are already doing. It's no accident that you have to enroll in savings programs in America -- the failure of many employees to do so on account of the default effect means they are leaving money on the table for the employer to keep. The defaults are arranged to benefit employers rather than employees, so employers will resist any changes to this system and would likely lobby to preserve the status quo. As long as a default effect exists in consumers (and short of radical psychological breakthroughs, it will continue to exist), whoever controls the defaults controls a source of revenue -- somewhere along the line they can be rigged to someone's benefit. "Free market" boosters would like to see control of this revenue source remain in the hands of private business -- usually this argument is presented as protecting consumers from "paternalism," from someone making choices for them, which sounds bad only until you think about all the choices you are too lazy to make. (Incidentally, this is one of the main reasons I am still without a cell phone: I'm too lazy to figure out the best deal and haven't convinced myself that the benefits outweigh the effort required to get over this hurdle. This may be the true definition of fogeydom. Lots of old-timers out there probably feel the same way about computers and such.) The government could also regulate defaults in more circumstances, beyond the "defaults" of our legal environment, on the theory that it has less reason not to put the public's interest first (by making 401(k)'s opt-out by law, say). But politicians are corruptible, and lousy defaults embedded at the government level may be harder to root out than others.

Exploiting default settings ultimately has to do with capitalizing on how we habitually process information, something advertisers and marketers have been doing for years. Since advertisers have staked a claim to these methods, and have honed them by testing them in the battle for market share, it may be harder for the government to use them in the public interest and pass them off as neutral (even though they are just subtle modes of manipulation) or benevolent. Instead, these manipulative techniques -- Vance Packard's "hidden persuaders" -- bear the disreputable stigma of being employed primarily to bilk us and have it be our fault, since the bilking depends on our ill-considered thinking or our lazily accepting things as they are. Rather than adopt these methods itself, the government might more constructively work to educate individuals away from their inherent behavioral tendencies -- but this kind of social cognitive therapy might be the most paternalistic approach of all.

Chinese arithmetic (26 May 2007)

I have a naive faith in the transparency of numbers, trusting that they have no significance in themselves but allow us to see directly through to the importance of the amount they have measured. Numbers are truly floating signifiers, with no meaning until they are given a context, something to count. I naively believe that everyone else fundamentally feels the same way, in the abstract. This seems a cornerstone principle of what it means to be rational, and it draws the boundaries that mark off the territory of superstition. The incantations of economic data or earnings reports, with their endless litany of percentage gains and year-over-year comparisons and moving averages, seem in part an elaborate ritual to testify to the neutrality and clarity of numbers, so solemnly are they invoked to give meaning not to themselves but to large, intractable phenomena in the economy. Few reading these stories about economic indicators or market performance care about the specific numbers, only about the justification and the argument the numbers allow to be built around them. They anchor the ongoing narrative of capitalist practice without usurping it. The efficient transparency of numbers facilitates the smooth series of exchanges and calibrations capitalism requires -- the price system works because no one presumably fetishizes the price itself but rather allows it to shift freely according to conditions.

That's why a story like this one from the Wall Street Journal a few days ago is so disturbing: it relates how numerology contributes to driving the Chinese stock markets.

Part superstition and part self-fulfilling prophecy, numerology is a basic trading strategy in China. The philosophy reflects the widespread belief in Chinese society that numbers contain clues to good fortune.
It is a little noticed force adding fuel to a roaring market in the world's fourth-biggest economy. The benchmark Shanghai Composite Index is up 56% this year and quadruple its level at mid-2005, a spike that is raising concerns about an investment bubble.
Investors' zeal to base decisions in numerology also helps explain why Beijing has been unable to temper enthusiasm in the stock market through conventional measures, like credit tightening last week.
To professional observers, the Chinese investing public's trust in the predictive power of numbers -- rather than fundamentals like business prospects or profit -- is one of many reminders of how buying on the Shanghai and Shenzhen stock exchanges looks like gambling.
Brokerages are set up like casinos. Investors drink tea, smoke and chat as they make trades on computers lined up like slot machines. Instead of dropping in coins, they swipe bank cards to pay for shares.
What makes this so flabbergasting, I think, is the enormous effort made in business discourse to make stock markets not seem like gambling. A great deal of emphasis is placed on providing information that justifies stock prices, connecting them to earnings in elaborate ways. But if stock prices have more to do with the numbers themselves, then it's just roulette. Then it's predictable only in the sense of the self-fulfilling prophecy mentioned above, which scuttles the idea that economic growth and the stock market are connected, that stock market bubbles are producing real progress or change in the society at large.

And then an article like this one, from last week's Economist, seems scarier.
For the government the situation poses quite a challenge, particularly as anecdotal reports indicate that the stock buying craze is rampant both among the urban middle class and less well off sections of society like taxi drivers, pensioners and students. Should the market suffer a downturn these people could vent their fury, as investors did in the late 1990s when stock price crashes occurred, threatening political stability. The government will want to avoid such scenes in 2007 and 2008, as the Chinese Communist Party holds its five-yearly congress and the Olympics kick off in Beijing.

Luring novice investors, armed with possibly numerology-based strategies, into an overheated market seems like a recipe for catastrophe.

Hence, from this week's Economist:
The Chinese consider four to be a very unlucky number (because in Mandarin it sounds like the word death). The number 4444 is thus presumably as bad as it gets. So suppose the Shanghai A-share index closes during the next week at 4,444 (it stood at 4,375 on May 23rd), which is quite possible given its 258% gain since the beginning of 2006; might that frighten investors enough to cause the share-price bubble to burst?

It just seems terrifying to me that stock markets can aggregate people's superstition and give it agency in the nonbelieving world at large.

Sunday, November 7, 2010

Historical gambling (30 January 2007)

Tyler Cowen links to this WashPost article about historical gambling, a potential future offering at Virginia's horse racing tracks.
Colonial Downs, which offers betting on horse races at 10 sites across Virginia, is pushing for changes in state law so that it can offer a new form of gambling, called historical racing, on which people wager on horse races that have already taken place.
Like Tyler's, apparently, my first response to this was "Wow. That sounds pretty stupid." But that's because I'm thinking that the pleasure of gambling on horseraces lies in the number-crunching and theorizing that handicapping consists of and then the drama of seeing your theories tested as the races happen live. And, actually, I'm thinking that gambling on races when you know the outcome already wouldn't be much fun because you wouldn't ever lose. Where's the joy in that?

That's when it becomes pertinent to consider historical gambling's other, far more accurate moniker: instant gaming. The machine picks an old race, and you instantly bet on a horse based on the odds at the original post time. I'm assuming the specific data (names of horses, date, track location, etc.) are scrambled so you can't fire them into a BlackBerry and get the results. Then you instantly see whether you made the right choice and get paid out according to those odds. It follows the same Pavlovian mechanism of instant gratification that scratcher tickets exploit, only with a thoroughbred racing theme and even more unpredictable expected returns. (Original odds are based on pari-mutuel wagers; applying them to a single bet seems almost arbitrary. The bet, the odds, and the outcome would seem to have an almost random association in instant gaming.) The charade of handicapping involved allows track owners to argue (with a straight face, even) that instant gaming is a game of skill and not chance, like slot machines, which, incidentally, are prohibited at Virginia's tracks. It's clear why track owners want slots. They want to transition out of the dying business of thoroughbred racing (R.I.P. Barbaro) into the always profitable business of straight gaming. These machines would likely be more profitable than straight slots because, with the fluctuating odds, it would be difficult to mandate a reasonable payout percentage. (Slot machines in Las Vegas return around 90 cents for each dollar played on them; Nevada requires that they be set up to pay out at least 75 percent of the money put in.) Players would certainly have a tough time deducing their expected returns.
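To make the arbitrariness concrete, here is a minimal sketch of the arithmetic; the win probabilities and odds below are illustrative assumptions, not figures from the article or the machines' actual formula. The expected return per dollar is just the chance that the selected horse won times the payout implied by its post-time odds, so it swings with whichever historical race the machine happens to serve up.

```python
# Minimal sketch (illustrative numbers only): expected return per $1 bet
# when an "instant gaming" machine pays at historical post-time odds.

def expected_return(win_prob, odds_to_one):
    """Expected payout per $1 staked: P(win) * (odds-to-one + 1)."""
    return win_prob * (odds_to_one + 1)

print(expected_return(0.15, 5))  # 0.90 -- about what a Las Vegas slot returns
print(expected_return(0.10, 5))  # 0.60 -- well below Nevada's 75% slot floor
```

Nothing in the game tells the player which of these situations he is in, which is the sense in which the expected returns are even more unpredictable than a slot machine's.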

But reading about this made me rethink which form of gambling is more "dangerous" to the gambler, the game of pure chance, which inspires the average gambler to superstitious flights of fancy, or the game that promises the gambler a chance to use his skills, which probably leads to his overrating his responsibility for success (like the traders Nassim Nicholas Taleb likes to excoriate) and ignoring the amount of chance still involved. Is it better to handicap and lose, or guess randomly and lose? Handicapping horse races (and making sports book bets or playing poker) invites you to stake intellectual capital along with cash, and the former can be hard to recoup. At least losing at a slot machine won't bruise your ego along with emptying your wallet.

Thursday, August 5, 2010

Extended warranties (6 October 2006)

Buying extended warranties is foolish, as this Washington Post article clearly shows.
"The things make no rational sense," Harvard economist David Cutler said. "The implied probability that [a product] will break has to be substantially greater than the risk that you can't afford to fix it or replace it. If you're buying a $400 item, for the overwhelming number of consumers that level of spending is not a risk you need to insure under any circumstances."
Since extended warranties don't typically cover wear and tear damage -- the main reason consumer goods fail -- you would basically be buying insurance that covered an extremely unlikely event, that a product would suddenly become a lemon after the manufacturer's warranty lapsed. At the point an extended warranty kicks in, you'd generally be better off replacing whatever item it is with the up-to-date model rather than having a third-party repairman of the insurance company's choosing fix an outdated piece of electronics, probably at great inconvenience to you. You would do better putting that extended warranty money into a slot machine and setting aside whatever money resulted in a repair/replacement kitty.
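Cutler's arithmetic is easy to make concrete. A minimal sketch with made-up numbers (the $60 warranty price and $400 replacement cost below are assumptions for illustration, not figures from the article):

```python
# Minimal sketch of the break-even logic for an extended warranty:
# the expected payout must exceed the warranty's price for it to be worth buying.

def breakeven_failure_prob(warranty_price, covered_cost):
    """Failure probability at which expected payout equals the warranty's price."""
    return warranty_price / covered_cost

# Hypothetical $60 warranty on a $400 item, assuming a failure means total loss:
print(breakeven_failure_prob(60, 400))  # 0.15
```

On those assumed numbers, the item would need better than a 15 percent chance of a covered (non-wear-and-tear) failure during the extension period just for the warranty to break even -- before counting the hassle of filing a claim or the likelihood that you'd rather replace the thing anyway.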

Behavioral economists point to extended warranty purchases as an example of irrational risk sensitivity, but it seems to me more like a case of asymmetrical information. Spending makes consumers feel vulnerable, and retailers exploit that discomfort by selling them an insurance product they know their customers don't really need. Customers buy a sense of well-being that evaporates, probably the minute they walk away from the register, away from the salesperson's nagging predictions of doom. (A variant on this is the pernicious practice of rental-car agents forcing unnecessary insurance on customers in an even more confusing retail and regulatory environment, typically conjuring up doomsday scenarios and implying legal ramifications that are dubious to say the least.) You end up with the feeling that the company hopes the product it just sold you will break, to spite you for rejecting the warranty -- which is where it makes its money.

For what's startling, and what helps explain their popularity, is this fact, also highlighted by Tyler Cowen in this Marginal Revolution post: "Neither Circuit City nor Best Buy discloses how much of its bottom line comes from extended warranty sales. But analysts have estimated that at least 50 percent and in some lean years 100 percent of profits at the electronics retailers come from extended warranty sales." No wonder the salespeople are so pushy.

Retail stocks (23 September 2006)

Amateur stock picking is generally a bad idea, and every straight-talking guide to personal finance will tell you to invest in low-fee mutual funds that track certain indexes -- take the guesswork out of it, since changes in stock prices are generally a random walk that no analyst or fund manager can predict. The theory is that whatever information an investor could act on is already priced into a security by the time you get your order in.

But this doesn't stop financial publications and financial service providers from pimping stocks and urging stock tips on readers. In One Market Under God Thomas Frank describes some of the hoopla about personal investing during the 1990s bubble, and what he calls "market populism." The idea was that anyone could use the stock market to get rich and that purchasing power rendered political power insignificant and made giant gaps between rich and poor immaterial. Part of the hype of the time regarded wise amateurs who could follow their gut and invest in companies whose products they believed in, as though it were as simple as having a good experience in a Home Depot (I know, a far-fetched example) and then phoning your broker the next day for 100 shares of it. Frank notes that one financial guru advised going to the mall and writing down the names of your favorite stores as a way to generate stock-investment ideas. Then you can have a personal stake in the success of the brands you prefer; you can cheer them on like sports teams, but have a legitimate reason for it.

I'm prone to do the opposite. Not that I'm a big-time stock picker, but whenever I read about recommended securities from the retail sector, I'm skeptical, and it has everything to do with my personal bias against brand-name shopping. I rationalize by thinking that it's foolish to bank on the overtapped American consumer's propensity to continue on a discretionary spending binge forever, but really it is that I don't want to believe that American Eagle Outfitters (AEOS) or Abercrombie and Fitch (ANF) are simply going to continue to grow, that duping teens with sexed-up advertisements can constitute a business strategy that Wall Street respects. I don't even want to take them seriously as businesses; I prefer to think of them as dark cultural forces that will be thwarted once everyone eventually wakes up and realizes how pointless brand-name clothes are. Investing in a company like Chico's (CHS) or Coach (COH) would not only be hypocritical, it would be against my utopian vision of the world, against what I want to believe about universal common sense. (Maybe this is precisely why I should be buying retail stocks. It's never a bad idea to bet against utopias.) Perhaps the behavioral-finance theorists have a term for this kind of bias; I'm fully aware that it is irrational. But rejecting retail stocks because of a reactionary personal philosophy seems no less coherent than picking them because of the weather. And it turns out the weather is one of the most significant economic factors for retail stocks, perhaps more significant than fashionability or personal belief in the brand or a good feeling about a marketing strategy. Justin Lahart's column in Friday's WSJ noted the tendency for September's weather to determine a retail stock's fortunes:
September temperatures tend to vary a lot. And September is a crucial month for retailers. That means the weather plays an outsize role in the month's sales and can trump other economic factors, says Paul Walsh, a meteorologist at weather-analysis firm Planalytics, which advises retailers. September is when retailers, especially in the apparel business, are stocked with fall fare. Cool temperatures early in the season make it easier to sell sweaters and furry boots at full price. Last year, warm weather lasted across much of the U.S. until October, leading retailers to cut prices deeply in an attempt to clear inventory. The jolt of Hurricane Katrina also hurt many, meaning comparisons to last year are especially easy this month.
Obviously, if we follow the money, retailers must be scheming along these lines. Control the weather, control your portfolio. But it's amazing to think of all the sophisticated mathematical tools and spreadsheets and models and algorithms, the vast sums of money at stake, and the myriad brokers and analysts who work every day to try to harness the market, and then to realize that the kind of logic retrospectively seen to have moved the market can run along the lines of "Retail is thriving because September was sort of cold and more shoppers bought sweaters."

Asymmetric paternalism (12 September 2006)

I'm generally sympathetic to the arguments of behavioral economists, who want to broaden economics in efforts to account for humankind's irrationality, defined as its failure to always maximize utility and make choices that will lead to the most bountiful outcomes. But while reading John Cassidy's New Yorker story about neuroeconomics I found myself resisting the whole rational/irrational paradigm, which suddenly seemed impoverished. Suddenly the general humility of economics seemed much more appealing than the hubris of the neuroscientists who seem on the verge of suggesting we neutralize certain lobes of our cerebral cortex to make ourselves more "rationally" profit-seeking or to snort oxytocin (a hormone which induces loving feelings toward others) to make ourselves more trusting and therefore more economically efficient in commercial contexts. Rather than respect the different kinds of decision-making processes humans have adapted, it sounds as though some of these econo-scientists would like to modify people so they fit the traditional homo economicus models more comfortably. It seems better to amend the models or limit their applicability than to force humankind to exhibit the remorseless efficiency they presume. By the end of the article Cassidy is citing economist David Laibson pitching a dualistic model that mimics the Cartesian mind-body split: "The modified theories to which Laibson referred assume that people have two warring sides: the first deliberative and forward-looking, the second impulsive and myopic. Under certain circumstances, the impulsive side prevails, and people succumb to things like drug addiction, overeating, and taking wild gambles in the stock market." If this is so, I wonder whether these sides would have a consistent, predictable influence on the other, or whether they might not work simultaneously and independently. One can overeat while plotting an extremely rational stock portfolio. And a certain amount of pleasure derives from avoiding decisions altogether, from surrendering, from refusing to calculate, from inertia or expediency -- Cassidy himself notes he made decisions that were expedient when the machine measuring his brainwaves began to make him claustrophobic. At some point it becomes rational to be irrational; irrationality is not merely a consequence of emotions inappropriately obtruding.

My somewhat paranoid concerns about forced rationalism grew strongest when Cassidy discussed "asymmetric paternalism":
Reforming 401(k) plans is an example of “asymmetric paternalism,” a new political philosophy based on the idea of saving people from the vagaries of their limbic regions. Warning labels on tobacco and potentially harmful foods are similarly intended to keep subcortical structures in check. Neuroeconomists have suggested additional policies, including warning buyers of lottery tickets that their chances of winning are practically nonexistent and imposing mandatory “cooling off” periods before people make big-ticket purchases, such as cars and boats. “Asymmetric paternalism helps those whose rationality is bounded from making a costly mistake and harms more rational folks very little,” Camerer, Loewenstein, and three colleagues wrote in a 2003 issue of the University of Pennsylvania Law Review. “Such policies should appeal to everyone across the political spectrum.”
You don't have to work for the Cato Institute to find this dubious. None of these specific policy prescriptions seems problematic, but the logic behind them is worrisome. Some people can't be trusted to act in their own interest -- but who defines what that is, and by what criteria? Who gets to say what a rational person "should" do? Who gets to decide which reasons for acting are "bad" or "wrong"? Here, neuroeconomists fall back on profit/utility maximization as a definition of rationality. You can see how this rationality, enforced by paternalistic measures, could easily become a prison, the bureaucratic nightmare Adorno evokes in his critique of Enlightenment positivism. It's like using an ad to advertise the idea that paying attention to ads is harmful -- these kinds of measures are designed to bring people "back" to their senses while reinforcing the idea that there's no need to return, since common sense and rationality are already being retrofitted into the options society presents to them. The underlying assumption of asymmetric paternalism is that people are sheep, with no strong reasons for doing what they do, so they may as well be encouraged or forced to do what can be deemed most beneficial socially.

Cassidy quotes Laibson on the nature of this paternalism: “The practical implications of the experiment come from obtaining a better understanding of the human taste for instant gratification,” Laibson said. “If we can understand that, we will be in a much better position to design policies that mitigate what can be self-defeating behavior.” I'm not going to make the argument that "self-defeating" is a contradiction in terms, as some economists seem to imply (if every choice by definition reveals a preference, then how can you choose what doesn't suit your own wishes without being coerced?) -- but who's to say what is "self-defeating," and in what circumstances? What sort of policy could cover all the exceptions? Time has a different value to different people in different circumstances -- influencing that value is what exploiting convenience is all about. The taste for instant gratification may be impulsive, or it may be a matter of what an individual considers timely -- it seems foolish to, say, buy a Blu-ray player right now, but if you derive all sorts of satisfaction from being the first on the block to have one, you can't afford to have your gratification delayed. And it seems dumb to buy lottery tickets, but for some they are licenses for invaluable fantasy. So what may seem like poor decision-making to us could just be part of the plan. There's no sure way of accounting for other people's notions of utility.

If we are going to institute some of these measures, I'd rather they be sold not as something for my own good but as something for the social good -- you will be defaulted into saving in a 401(k) because it will help prevent society from having to support you when you are old and destitute, or spare society the sight of your suffering. You will be discouraged from smoking because your smoke poisons others and because society doesn't want to bear the burden of your medical costs. And so on. Leave people with the illusion that they know what is best for themselves, and encourage the notion that everyone will be making sacrifices for the common good -- this seems better, ideologically speaking, than having the state work to maximize outcomes for individuals.

A side note: I'm a bit puzzled by the ultimatum game:
A good way to illustrate Cohen’s point is to imagine that you and a stranger are sitting on a park bench, when an economist approaches and offers both of you ten dollars. He asks the stranger to suggest how the ten dollars should be divided, and he gives you the right to approve or reject the division. If you accept the stranger’s proposal, the money will be divided between you accordingly; if you refuse it, neither of you gets anything.
How would you react to this situation, which economists refer to as an “ultimatum game,” because one player effectively gives the other an ultimatum? Game theorists say that you should accept any positive offer you receive, even one as low as a dollar, or you will end up with nothing. But most people reject offers of less than three dollars, and some turn down anything less than five dollars.
It seems to me that this game sets up a reference group of you and the other person that makes invidious comparison inevitable. Thus as the other person gets richer, you get poorer by comparison. It seems perfectly rational from that point of view to demand an even split, or have neither of you gain anything. Only by imagining a fictitious reference group -- i.e. not the person you are in the game with but people you are theoretically comparable with -- can you make the game theorists' rational choice. Rationality ends up depending on a rich, healthy imagination.
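The invidious-comparison point can be made concrete with a toy model. A minimal sketch (not from Cassidy's article) using a Fehr-Schmidt-style envy term, in which the responder's utility is his own payoff minus a penalty for how far behind the proposer he falls; the envy weight is an assumed parameter:

```python
# Minimal sketch of inequity aversion in the ultimatum game (assumed parameters).
# Utility = my payoff minus an envy penalty proportional to how far behind I fall.

def responder_utility(my_share, other_share, envy_weight=0.8):
    return my_share - envy_weight * max(other_share - my_share, 0)

offer = 2                                      # proposer keeps $8, offers $2
accept = responder_utility(offer, 10 - offer)  # 2 - 0.8 * (8 - 2) = -2.8
reject = responder_utility(0, 0)               # both get nothing: 0.0
print(accept, reject)                          # rejecting beats accepting
```

On these made-up terms, any envy weight above one-third makes turning down the $2 offer the utility-maximizing move; the "irrational" rejection only looks irrational if the other player is excluded from the reference group.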