Mike Konczal wrote a critique of Mitt Romney's plan to convert unemployment insurance into something that more closely resembles a 401(k) -- the idea being that everyone should have the "freedom" to manage their own unemployment insurance in accordance with how they assess their own personal risk. Of course, as Konczal notes, it would also help erode the idea that government exists in part to provide a general social safety net and aggregate social risk efficiently in order to defray it -- conservative thinking is that individuals should bear all of life's risks themselves. The private-account gambit is one of the quintessential neoliberal policy moves, ostensibly ending government "interference" in the labor market by making sure workers are as insecure as possible.
Conservatives favor these sorts of arrangements, Konczal suggests, because "people will look at private savings accounts and think that the government isn’t doing anything." This furthers their arguments for dismantling the services government straightforwardly provides to all while protecting the "submerged" benefits that the rich, savvy, and well-connected are better positioned to exploit through knowing how to take advantage of tax loopholes -- a phenomenon explored in this American Prospect essay by Suzanne Mettler and elaborated here by Henry Farrell. It reminds me also of arguments Dean Baker makes in The Conservative Nanny State (pdf) and Jamie Galbraith makes in The Predator State (excerpt here). They argue that the rich have figured out that the government exists to be looted, and the key political question is how to perpetuate and mask that process at the same time. One good way of doing that is through expanding the "submerged state" and generating policies that confront people with decisions they are not sophisticated enough to make on their own, turning them into more vulnerable prey. Private retirement accounts, for example, are far more lootable by financial advisors, etc., than the guaranteed-benefit pensions they have supplanted. (In the meantime, conservatives have succeeded in convincing many of us that our Social Security benefits are insecure and that the U.S. government is perfectly capable of simply reneging on its debts.)
If neoliberal reform is intended to let markets govern more and more of our everyday lives, then it's not surprising that the tactics of exploitation rife in market exchange -- price discrimination, the exploitation of asymmetrical information, hidden fees buried in confusing contracts -- will also come into play. Markets are often good at incentivizing complexity, which produces the ignorance they need as an alibi. Cell-phone service providers, mortgage originators, used-car salespeople, the travel industry (hotels, rental cars, airlines) are the classic exemplars of these tactics, but they are endemic in a consumer economy whose firms frequently depend on stratifying customers to maximize profits. When applied to the distribution of tax obligations and government services, though, these tactics become the legitimized means for reproducing and expanding existing disparities between classes while making it seem as though that is the fault of the disadvantaged -- it's their fault for being ignorant, too "lazy" to master the tax code or drive hard bargains of their own.
In his response to Konczal's post, Corey Robin notes the needless complexity of the arrangement, "all the time and energy we as individuals now have to devote to doing the things that the state used to do for us" thanks to neoliberalism and privatization. Robin adds, "The right thinks of that as freedom—they hear the words 'state is doing for you' and they imagine patients etherized on a table—but I think of it as tyranny." Peter Frase concurs in this response, pointing out that "in a highly unequal society, greater complexity in the institutions of the state will generally favor the interests of the rich." He concludes that "the right has gotten a lot of mileage out of the demand for small government. Maybe it’s time for the left to make a bigger deal out of simple government." That seems like a pretty good idea to me, though the logic may lead to supporting policies like the flat tax. Also, our tendency to overrate our pleasure in making decisions for ourselves (and underestimate the problem of ego depletion) makes us vulnerable to believing that it is generally simpler for us to do things ourselves and assume total responsibility for them. The celebration of individualism in American society is such that there is already ideological momentum behind the idea that self-managing everything is somehow always convenient, even in the face of frustrating experiences to the contrary, which are seen as isolated exceptions.
Friday, August 19, 2011
Simple government (17 Aug 2011)
Labels:
financial crisis,
financial literacy,
neoliberalism,
politics,
taxes,
unemployment
Pret à mourir (12 Aug 2011)
I borrowed the title for this post from my friend Anton of Generation Bubble, who forwarded me a link to this NYT article by Stephanie Clifford about Pret à Manger, sort of the Target of sandwich shops, assuming Subway is the Wal-Mart. If you want to see a horrific application of all the principles of immaterial and affective labor, Virnoesque virtuosity, lateral surveillance, obligatory reflexivity, emotional management, gamification and so on, you need look no further.
How does any company encourage teamwork? At Pret a Manger, executives say, the answer is to hire, pay and promote based on — believe it or not — qualities like cheerfulness.
There is a certain “Survivor” element to all of this. New hires are sent to a Pret a Manger shop for a six-hour day, and then the employees there vote whether to keep them or not. Ninety percent of prospects get a thumbs-up. Those who are voted out are sent home with £35 ($57), no hard feelings.
The crucial factor is gaining support from existing employees. Those workers have skin in the game: bonuses are awarded based on the performance of an entire team, not individuals. Pret workers know that a bad hire could cost them money.
All the joys of tournament labor markets like those that exist in academia, with none of the "life of the mind" rationalizations. And instead of solidarity against management, each worker becomes the face of management, another Stasi spy for the happy police.
But that is not nearly enough surveillance to allow Pret's management to discriminate among workers:
Pret also sends “mystery shoppers” to every shop each week. Those shoppers give employee-specific critiques. (”Bill didn’t smile at the till,” for instance.) If a mystery shopper scores a shop as “outstanding” — 86 percent of stores usually qualify — all of the employees get a £1-per-hour bonus, based on a week’s pay, so full-timers get around $73. “There’s a lot of peer pressure,” said Andrea Wareham, the human resources director at Pret.
DARE sessions in school taught me that peer pressure was bad, but I suppose peer pressure, in this context, is good. It is the vaunted power of worker collaboration and cooperation turned inside out and made into a coercive management tool. One's very ability to get along with others is alienated and quantified, made into something you would only do for money rather than from basic human solidarity. Pret rejects the sort of human sociality that might thrive outside of capital, that is possible in environments where making a profit by selling commodified service experiences isn't the overriding goal. Instead Pret chooses to incentivize human feeling and turn the point of exchange into an explicit, quantified moment of affective labor while turning worker cooperation into a reified shadow of itself. That policy is carried out all down the line, apparently, with no sociality left unincentivized and thus unexploited:
Pret reinforces the teamwork concept in other ways. When employees are promoted or pass training milestones, they receive at least £50 in vouchers, a payment that Pret calls a “shooting star.” But instead of keeping the bonus, the employees must give the money to colleagues, people who have helped them along the way.
There are other rewards. Every quarter, the top 10 percent of stores, as ranked by mystery-shopper scores, receive about £30 per employee for a party. The top executives at Pret get 60 “Wow” cards, with scratch-off rewards like £10 or an iPod, to hand out each year to employees who strike them as particularly good. Pret has all-staff parties twice a year, and managers get a monthly budget of £100 or so to spend on drinks or outings for their workers.
“Rewards, through bonuses or ‘outstanding’ cards, affect behavior,” Ms. Wareham says.
Wow cards, I suppose, are the Scooby snacks of the service industry. It's always nice to be recognized, but there seems to be something backhanded about making even that a lottery scenario. And in the end, it's just Pavlovian manipulation, not genuine recognition of the worker as a human. The incentivizing of feeling leaves no space for the employees to be recognized in and of themselves. Everything about them as feeling creatures has been subsumed by the wage relation. That's what is so creepy about going into a Pret -- you know they are being forced to be nice to you and are being carefully watched by other fake-nice bosses and informers. It feels like those moments in movies about people in a mental asylum, where the patients try to maintain a facade of controlled politeness in hopes of demonstrating their newfound sanity. This sounds sort of insane to me, anyway:
Every new employee gets a thick binder of instructions. It states, for example, that employees should be “bustling around and being active” on the floor, not “standing around looking bored.” It encourages them to occasionally hand out free coffee or cakes to regulars, and not “hide your true character” with customers.
Can a boss really force you to display your "true character" without driving you into an insane spiral of endlessly recursive reflexivity? And is one's "true character" nothing more than picking random lottery-winner customers to hand a cake to? Are human interactions so conditioned by the imperative of exchange that giving and getting something for nothing is the best way to simulate genuineness, or sincere benevolence? Perhaps the looting in London was just a big expression of love.
The article should put to rest any ideas that the implementation of such concepts as gamification and the general intellect are inherently benevolent or subversive. Instead, they can be deployed by management to create a kind of affective Taylorism, where emotional experiences are assembled under hurry-up conditions and energetically concealed duress. Unless you believe that it's more fun to be forced to pretend to be having fun while working a deli counter -- maybe the findings that people who are forced to smile report being happier apply here also. Clifford notes that Pret's "annual work force turnover rate is about 60 percent — low for the fast-food industry, where the rate is normally 300 to 400 percent." Stockholm Syndrome is a powerful management tool.
The emotional labor being extracted from Pret employees exemplifies the way tight labor markets give employers the chance to cement expectations of a more pliant disposition from workers. The new normal is a grotesque sycophancy sugarcoated as a fun, cheerful workplace where "teamwork" rules. In an email, Anton says Pret's approach elicits an "unprecedented self-relation -- instrumentalization of mood and affect as a way of producing surplus value. It can only end in a psychotic break." I'm inclined to agree.
Rioting Nonconsumers (10 Aug 2011)
Is rioting an expression of envy, or something more political, or something that is ultimately inexplicable? From Zygmunt Bauman's response to the London riots:
We are all consumers now, consumers first and foremost, consumers by right and by duty... It is the level of our shopping activity and the ease with which we dispose of one object of consumption in order to replace it with a “new and improved” one which serves us as the prime measure of our social standing and the score in the life-success competition. To all problems we encounter on the road away from trouble and towards satisfaction we seek solutions in shops. From cradle to coffin we are trained and drilled to treat shops as pharmacies filled with drugs to cure or at least mitigate all illnesses and afflictions of our lives and lives in common. Shops and shopping acquire thereby a fully and truly eschatological dimension. Buying on impulse and getting rid of possessions no longer sufficiently attractive in order to put more attractive ones in their place are our most enthusing emotions. The fullness of consumer enjoyment means fullness of life....
For defective consumers, those contemporary have-nots, non-shopping is the jarring and festering stigma of a life unfulfilled – and of own nonentity and good-for-nothingness. Not just the absence of pleasure: absence of human dignity. Of life meaning. Ultimately, of humanity and any other ground for self-respect and respect of the others around.
Supermarkets may be temples of worship for the members of the congregation. For the anathemised, found wanting and banished by the Church of Consumers, they are the outposts of the enemy erected on the land of their exile. Those heavily guarded ramparts bar access to the goods which protect others from a similar fate: as George W. Bush would have to agree, they bar return (and for the youngsters who never yet sat on a pew, the access) to “normality”. Steel gratings and blinds, CCTV cameras, security guards at the entry and hidden inside only add to the atmosphere of a battlefield and on-going hostilities. Those armed and closely watched citadels of enemy-in-our-midst serve as a day in, day out reminder of the natives’ misery, low worth, humiliation. Defiant in their haughty and arrogant inaccessibility, they seem to shout: I dare you! But dare you what?
Here Bauman is drawing on ideas he's developed over his series of books from the past decade: Liquid Modernity, Consuming Life, Does Ethics Have a Chance in a World of Consumers? (which I wrote about here). Modern identity is fluid, unmoored, and the consumer society has hijacked it to serve its ends, making our sense of self and the meaning of our life contingent on consumer desire; cravings for novelty; the ability to want, get and discard the "right" things, and so on.
In Consuming Life he argues that "if one agrees with Carl Schmitt’s proposition that the ultimate, defining prerogative of sovereign is the right to exempt, then one must accept that the true carrier of sovereign power in the society of consumers is the commodity market; it is there, at the meeting place of sellers and buyers, that selecting and setting apart the damned from the saved, insiders from outsiders, the included from the excluded (or, more to the point, right-and-proper consumers from flawed ones) is daily performed." Thus it should not be surprising that feelings of social exclusion play themselves out as attacks on shops.
But why now? Why riots in London all of a sudden, if this sort of exclusion has been persistently present? Is it just random when one of the land mines Bauman sees littering consumer society gets stepped on? Chris Dillow asks this question and attributes it to information cascades, which allow would-be looters to confirm for themselves that their behavior is sufficiently correlated with others that they will collectively get away with it. But he adds, as "illuminating as the theory of information cascades can be, there is a problem with it. We cannot forecast when such cascades will emerge. We can only identify them in hindsight. They allow us to explain behaviour, but not predict it." This is kind of reminiscent of Badiou's theory of the event, or at least what I understand of it. It can't be predicted because it is a total disruption of how we understand the procession of ordinary occurrences. Would appreciate a link to anyone interpreting the London rioting in Badiou's terms.
Dillow's questions also reminded me of Andrew Potter's response to the Vancouver hockey rioting:
The point is that if you can get enough people to riot, then you all get away with it. The trick, then, is getting enough people willing to do it, in the same place and at the same time, to create a tipping point effect. And so when it comes to starting a riot, what the participants are faced with is essentially a coordination problem.
Potter thought that social media might simplify the coordination problem the way a big hockey game (or an egregious example of police brutality) can, but might also provide enough surveillance to discourage it. Facial recognition technology is being used on looters in London, as it was after the Vancouver incident. Social media potentially adds to the number of cameras pointed at everyone to protect the consumer citadels.
Anyway, I think one of the most interesting things Bauman writes about is his interpretation of Levinas's theory of the infinite responsibility to the other. As infinite responsibility is a pretty serious burden for anyone, one of society's purposes, Bauman argues, is to limit our sense of obligation, to give us rationalizations for watching out mainly for ourselves or some limited subset of society, or to give us a way of ranking our responsibilities to others. In a consumer society, the celebration of individualism and our “right” to convenience and novelties work to convince us that we have a duty to free ourselves from having to consider other people’s needs—and the market works to supply us the tools to avoid impinging human contact. It sells us ways to avoid having to deal with other people and the "hassle" they represent. If we can't afford those or if we become sick of strictly being that hassle to others, then looting works just as well as an expression of our own right to not give a shit about others. If purchasing power represents freedom, so then can looting.
Labels:
choice architecture,
convenience,
individualism,
politics,
surveillance
Convenience of Streaming Services (5 Aug 2011)
An article at the AV Club by Sam Adams looks at the implications of Netflix's streaming service and the growing popularity of Spotify, a music-streaming company. He begins with an observation that seems unassailable to me -- "Convenience and choice are the watchwords of the digital era, in which content must be instantly accessible and as quickly digested, lest consumers flit off to some more welcoming destination" -- but I was confused by the analysis that follows, which didn't really explain why consumers are so susceptible to novelty and what he calls the "convenience trap," the willingness to consume what's available as opposed to what is presumably good for you. Adams fears we may be "unconsciously downgrading anything that isn’t so ready at hand."
But what does that mean? Why does everything have to be graded? And does an unconscious grade have any meaning? If you can't be bothered to make your tastes conscious, then what difference does it make to you what you watch? And why should anyone else care? Adams is concerned that the great works may be lost to history if streaming services don't assimilate them into their libraries: "Spotify’s great, unless you want to listen to anything Hüsker Dü recorded before its major-label debut. Would you trade New Day Rising for the Black Eyed Peas catalogue?" This doesn't strike me as a serious question. If you badly want to hear New Day Rising, try this. If you care about music, you probably won't let Spotify dictate what you can or can't hear, and digital reproduction has made it fairly likely that digital copies of everything will survive and proliferate. (Our real archival concern should be with the survival of analog artifacts that have yet to be digitized -- even though digitization may lead to a not entirely representative version of a work surviving.) The people who have a lot invested in their entertainment choices will supplement streaming services with ready alternatives. The people who don't diversify their supply basically don't really care, and why should they? Because certain art is good for them, and they should be made to consume it through clever institutionalized market nudges?
Adams's implicit concern seems to be that the tasteless masses will be left to languish in their cultural ignorance because the streaming services they thoughtlessly adopt don't force more redeeming content on them. And he also seems to think that if you are not clever enough to make redemptive consumer quests for the great works, you will be too dim or uninterested to understand them: "If you’re not inclined to put forth the effort to get yourself in close proximity to a given artwork, will you be willing to expend the mental energy necessary to understand it?" Apparently if one lives next door to the Prado, Goya's works there become more or less indistinguishable from Hagar the Horrible comics.
Working hard to gain access to a work has nothing intrinsic to do with being willing or able to interpret it. Adams offers an S&M take on art appreciation, that art should dominate and master us while we subserviently mold ourselves to its masterful lessons: "the viewer—not, please, the consumer—is fundamentally subservient to a work of art, in which it is our responsibility, and often our pleasure, to come to the work rather than expecting it to come to us. After all, shouldn’t art be inconvenient, if not in the sense of being difficult to access, then because it forces us out of our comfort zones, requiring us to reckon with its way of understanding the world?" I am pretty sympathetic to this, but I don't think my attitude needs to be generalized. It's not the only way to engage with art. And though I may try harder to get something out of a show I have to travel far to see, that doesn't mean I necessarily cruise through a nearby show on autopilot.
The idea that difficulty is necessary to have a "real" art experience is similar to the idea that something more real happens when the art encounter is "spontaneous" -- being surprised by the beauty of a sunset, etc. It is always tempting to extrapolate a dogma out of such experiences when they are profoundly affecting, but that would be a mistake. I don't think there is a prescription for assuring edifying aesthetic moments. Instead, when people try to push some recipe for the aesthetic onto someone else, they are imposing an encapsulated version of a status hierarchy that favors them. Ultimately, whatever they are pushing now (no matter how universal the principles are presented to be) will be repudiated later in pursuit of some fresh form of distinction. Isn't this an extremely elitist question: "How much more likely are you to bail on, say, Apichatpong Weerasethakul’s Uncle Boonmee Who Can Recall His Past Lives, when with a few clicks of your remote you can be watching a favorite episode of Friday Night Lights?" This seems to mean: You dummies should be watching the hard stuff (like me) but instead you are weak and let the technology trick you into watching what is mere middlebrow entertainment. You're trapped in your own lazy tastes.
Adams points out that "we carry around unspoken assumptions about what’s long and what’s short, what’s easy and what’s hard, and when those assumptions calcify, we may no longer be aware they’re there." Yes, this is how ideology typically works, and it extends far beyond how we choose to entertain ourselves. Making ourselves aware of our unthinking assumptions about what is common sense is probably always a good and worthy practice. But we don't encourage people to join in that project when we imply that the reason it is necessary is so that they can conform to some other dogma about what cultural product is correct and appropriate. That replaces one politicized mystification with another. Yes, Netflix -- like many consumer-goods manufacturers -- would probably love it if we consumed simplistic mind candy as quickly and as often as possible; that's good business for them. And that incentive contributes to their trying to shape and promulgate a certain ideology about what is fun to do. Their payment structure contributes to a materialization of that ideology. Convenience almost always serves an agenda of accelerated consumption, which is passed off as maximized happiness or efficiency. (You've consumed more, so you are better off!) But implying that people need to consume the "right" things instead of the convenient things seems to substitute an elitist ideology for a consumerist one, and may trigger reactionary retrenchment among the consumerists one may be trying to rescue with screenings of Bela Tarr films and copies of Metal Circus.
In 2007 I made the argument that subscription services "almost make the idea of having selective musical taste superfluous. Not there is anything wrong with that; musical taste's centrality to identity seems a peculiar quirk. Nonetheless, taste in commercial music comes down to what music you are willing to pay for specifically. If you are paying to have it all, you effectively have no taste." That is, in a consumer society we have this sense that you have to put your money where your mouth is to "prove" your taste. The idea that you need to suffer to acquire access to "real" art in order to appreciate it has a similar inflection to it -- that art needs to be scarce to have an aura of significance, which derives from people earning/paying for the privilege to consume it. But it seems more interesting to break out of the idea that scarcity imposes some mystical meaning on things to see what they might mean beyond that.
But what does that mean? Why does everything have to be graded? And does an unconscious grade have any meaning? If you can't be bothered to make your tastes conscious, then what difference does it make to you what you watch? And why should anyone else care? Adams is concerned that the great works may be lost to history if streaming services don't assimilate them into their libraries: "Spotify’s great, unless you want to listen to anything Hüsker Dü recorded before its major-label debut. Would you trade New Day Rising for the Black Eyed Peas catalogue?" This doesn't strike me as a serious question. If you badly want to hear New Day Rising, try this. If you care about music, you probably won't let Spotify dictate what you can or can't hear, and digital reproduction has made it fairly likely that digital copies of everything will survive and proliferate. (Our real archival concern should be with the survival of analog artifacts that have yet to be digitized -- even though digitization may lead to a not entirely representative version of a work surviving.) The people who have a lot invested in their entertainment choices will supplement streaming services with ready alternatives. The people who don't diversify their supply basically don't care, and why should they? Because certain art is good for them, and they should be made to consume it through clever institutionalized market nudges?
Adams's implicit concern seems to be that the tasteless masses will be left to languish in their cultural ignorance because the streaming services they thoughtlessly adopt don't force more redeeming content on them. And he also seems to think that if you are not clever enough to make redemptive consumer quests for the great works, you will be too dim or uninterested to understand them: "If you’re not inclined to put forth the effort to get yourself in close proximity to a given artwork, will you be willing to expend the mental energy necessary to understand it?" Apparently if one lives next door to the Prado, Goya's works there become more or less indistinguishable from Hagar the Horrible comics.
Working hard to gain access to a work has nothing intrinsic to do with being willing or able to interpret it. Adams offers an S&M take on art appreciation, in which art should dominate and master us while we subserviently mold ourselves to its lessons: "the viewer—not, please, the consumer—is fundamentally subservient to a work of art, in which it is our responsibility, and often our pleasure, to come to the work rather than expecting it to come to us. After all, shouldn’t art be inconvenient, if not in the sense of being difficult to access, then because it forces us out of our comfort zones, requiring us to reckon with its way of understanding the world?" I am pretty sympathetic to this, but I don't think my attitude needs to be generalized. It's not the only way to engage with art. And though I may try harder to get something out of a show I have to travel far to see, that doesn't mean I necessarily cruise through a nearby show on autopilot.
The idea that difficulty is necessary to have a "real" art experience is similar to the idea that something more real happens when the art encounter is "spontaneous" -- being surprised by the beauty of a sunset, etc. It is always tempting to extrapolate a dogma out of such experiences when they are profoundly affecting, but that would be a mistake. I don't think there is a prescription for assuring edifying aesthetic moments. Instead, when people try to push some recipe for the aesthetic onto someone else, they are imposing an encapsulated version of a status hierarchy that favors them. Ultimately, whatever they are pushing now (no matter how universal the principles are presented to be) will be repudiated later in pursuit of some fresh form of distinction. Isn't this an extremely elitist question: "How much more likely are you to bail on, say, Apichatpong Weerasethakul’s Uncle Boonmee Who Can Recall His Past Lives, when with a few clicks of your remote you can be watching a favorite episode of Friday Night Lights?" This seems to mean: You dummies should be watching the hard stuff (like me) but instead you are weak and let the technology trick you into watching what is mere middlebrow entertainment. You're trapped in your own lazy tastes.
Adams points out that "we carry around unspoken assumptions about what’s long and what’s short, what’s easy and what’s hard, and when those assumptions calcify, we may no longer be aware they’re there." Yes, this is how ideology typically works, and it extends far beyond how we choose to entertain ourselves. Making ourselves aware of our unthinking assumptions about what is common sense is probably always a good and worthy practice. But we don't encourage people to join in that project when we imply that the reason it is necessary is so that they can conform to some other dogma about what cultural product is correct and appropriate. That replaces one politicized mystification with another. Yes, Netflix -- like many consumer-goods manufacturers -- would probably love it if we consumed simplistic mind candy as quickly and as often as possible; that's good business for them. And that incentive contributes to their trying to shape and promulgate a certain ideology about what is fun to do. Their pay structure contributes to a materialization of that ideology. Convenience almost always serves an agenda of accelerated consumption, which is passed off as maximized happiness or efficiency. (You've consumed more, so you are better off!) But implying that people need to consume the "right" things instead of the convenient things seems to substitute an elitist ideology for a consumerist one, and may trigger reactionary retrenchment among the consumerists one may be trying to rescue with screenings of Béla Tarr films and copies of Metal Circus.
In 2007 I made the argument that subscription services "almost make the idea of having selective musical taste superfluous. Not that there is anything wrong with that; musical taste's centrality to identity seems a peculiar quirk. Nonetheless, taste in commercial music comes down to what music you are willing to pay for specifically. If you are paying to have it all, you effectively have no taste." That is, in a consumer society we have this sense that you have to put your money where your mouth is to "prove" your taste. The idea that you need to suffer to acquire access to "real" art in order to appreciate it has a similar inflection -- that art needs to be scarce to have an aura of significance, which derives from people earning or paying for the privilege to consume it. But it seems more interesting to break out of the idea that scarcity imposes some mystical meaning on things, and to see what things might mean beyond that.
Marshall McLuhan Centennial (21 July 2011)
To mark the 100th anniversary of the birth of Canadian media guru Marshall McLuhan, Megan Garber has an extensive post about his ideas at the Nieman Journalism Lab site, pointing out, somewhat cryptically, that "McLuhan’s theories seem epic and urgent and obvious all at the same time. And McLuhan himself — the teacher, the thinker, the darling of the media he both measured and mocked — seems both more relevant, and less so, than ever before." I think that means that we take McLuhan's useful insights more or less for granted even as they shape the contours of the debate about the impact of mediatization. McLuhan certainly wasn't afraid to make sweeping, unsubstantiated generalizations, which makes his account of history occasionally "epic" but almost unfalsifiable as well. So sometimes it seems as if McLuhan is just relabeling phenomena (this is a "hot" medium, this is a "cold" one) without performing much analysis, translating things into jargon without necessarily developing arguments.
Garber notes a recent essay by Paul Ford about the media's role in imposing narratives on the flux of events and regularizing time, and points out that "If McLuhan is to be believed, the much-discussed and often-assumed human need for narrative — or, at least, our need for narrative that has explicit beginnings and endings — may be contingent rather than implicit." That is, the norms of our reading, or rather of our media consumption generally, are shaped by existing levels of technology and by how that technology is assimilated socially. We don't come hardwired with a love of stories, as literary humanists sometimes insist. Narrative conventions are part of what society is always in the process of negotiating -- they are political, ideological, like just about every other kind of relation. McLuhan believed that new media forms would retribalize humanity, undoing some of the specific sorts of freedoms market society (which he links specifically to books and literacy) guaranteed and introducing different ways to construe freedom. The danger, as Garber implies, is that we will get swallowed by real time, which old media broke into manageable increments but which new media has redissolved. This opens up possibilities of deliberate disorientation and unsustainable acceleration of consumption.
Anyway, I recently read McLuhan's Understanding Media, and this is what I took away from it. The general gist is that print media support individualism and economistic rationality: "If Western literate man undergoes much dissociation of inner sensibility from his use of the alphabet, he also wins his personal freedom to dissociate himself from clan and family" (88). Literacy, in McLuhan's view, makes capitalist-style consumer markets possible: "Nonliterate societies are quite lacking in the psychic resources to create and sustain the enormous structures of statistical information that we call markets and prices.... The extreme abstraction and detachment represented by our pricing system is quite unthinkable and unusable amidst populations for whom the exciting drama of price haggling occurs with every transaction" (137). This ties in to the idea that humans must learn to be rational in an economic sense -- that such calculation is not inherent but socially constructed. Capitalist society (and its media) equips us with this form of reason during the process of subjectivation.
But the atomized, anonymized individuals of the literate world are prone to anomie, to being "massified." Subsequent media -- more immersive, real-time, accelerated -- are by contrast returning culture dialectically to a more "tribal" orientation, the "global village." We collectively try to defeat time by pursuing the instantaneousness of new media; this speed, this accelerated transience, begins to undo economism in favor of some new collectivity. "Fragmented, literate and visual individualism is not possible in an electrically patterned and imploded society" (51). So it's obvious why the P2P types and the technoutopian futurists are attracted to McLuhan, who more or less established their rhetorical mode. But McLuhan occasionally issues some warnings about the mediated future as well. This, for example, seems like a prescient critique of the attention economy and recommendation engines:
Once we have surrendered our senses and nervous systems to the private manipulation of those who would try to benefit from taking a lease on our eyes and ears and nerves, we don't really have any rights left. (68)
And later he writes, "The avid desire of mankind to prostitute itself stands up against the chaos of revolution" (189). In other words, technology will be commercialized rather than become subversive.
McLuhan claims that "the effect of electric technology had at first been anxiety. Now it appears to create boredom" (26). That is, it exacerbates the paradoxes of choice, encourages us to suspend decision making for as long as possible, since switching among a newly vast array of alternatives appears easy. But such suspension, such switching may have hidden cognitive costs, may contribute to ego depletion. He points out how technology tends to accelerate exchange, noting that, for example, "by coordinating and accelerating human meetings and goings-on, clocks increase the sheer quantity of human exchange." This seems to be a structural fit with capitalism's need to maximize exchange to maximize opportunities to realize profit. Photographs, too, create a world of "accelerated transience" (196).
He also notes that certain technologies seek to make self-service labor possible, eliminating service requirements and prompting us to take on more responsibility for ourselves as a form of progress (36). That is, technology institutes convenience as a desirable value that trumps other values; ease and efficiency make collectivity appear progressively more annoying, a social ill to be eradicated in the name of individualist freedom, the only freedom that counts.
McLuhan anticipates the rise of immaterial labor, as "commodities themselves assume more and more the character of information" -- they become signifiers, bearers of design distinctions and lifestyle accents. "As electric information levels rise, almost any kind of material will serve any kind of need or function, forcing the intellectual more and more into the role of social command and into the service of production." Hence the rise of the "creative class" and the importance of social production, building brands and meanings and distributing them authoritatively. Manufacturing becomes a pretense for information, where the real profit margins are:
At the end of the mechanical age people still imagined that press and radio and even TV were merely forms of information paid for by the makers and users of "hardware," like cars and soap and gasoline. As automation takes hold, it becomes obvious that information is the crucial commodity, and that solid products are merely incidental to information movement. The early stages by which information itself became the basic economic commodity of the electric age were obscured by the ways in which advertising and entertainment put people off the track. Advertisers pay for space and time in paper and magazine, on radio and TV; that is, they buy a piece of the reader, listener, or viewer as definitely as if they hired our homes for a public meeting. They would gladly pay the reader, listener, or viewer directly for his time and attention if they knew how to do so. The only way so far devised is to put on a free show. Movies in America have not developed advertising intervals simply because the movie itself is the greatest of all forms of advertisement for consumer goods.
McLuhan insists that "the product matters less as the audience participation increases" -- that is because that participation is the product, the manufactured good, the pretense. "Any acceptable ad is a vigorous dramatization of communal experience," McLuhan claims (228); by this I think he might mean that ads plunge us into visceral experience of what Baudrillard calls the "code" of consumerism. McLuhan asserts that ads draw us into neotribal experiences of collectivity; I think this claim is undermined by the rise of personalization and design ideology. We collectively participate in the idea of customizing our consumer goods, but finding a unique angle on this common culture is the main avenue for hipster distinction. We craft our own niche for ourselves and become anxious to isolate ourselves from others within the various constituencies brands create for themselves. Belonging to the communities facilitated by media products fosters a simultaneous urge to escape their embrace, to make one's participation singular. That is to say, media participation is as competitive as it is collaborative.
In the last chapter, McLuhan says this of the future of work:
The future of work consists of earning a living in the automation age. This is a familiar pattern in electric technology in general. It ends the old dichotomies between culture and technology, between art and commerce, and between work and leisure. Whereas in the mechanical age of fragmentation leisure had been the absence of work, or mere idleness, the reverse is true in the electric age. As the age of information demands the simultaneous use of all our faculties, we discover that we are most at leisure when we are most intensely involved, very much as with the artists in all ages.
This sounds a lot like the autonomist idea of the general intellect, which kicks in after automation becomes standard in industry. McLuhan's way of putting it: "Many people, in consequence, have begun to look on the whole of society as a single unified machine for creating wealth.... With electricity as energizer and synchronizer, all aspects of production, consumption, and organization become incidental to communications." He suggests that the only profession of the future will be teacher. We will all be teaching each other new ways to please and divert ourselves, new ways to want more things. Learning itself becomes "the principal form of production and consumption" (351). That sounds like a good thing, but one must factor in the ramifications of widespread, institutionalized narcissism, which leads us to become experts in one very particular subject: ourselves. When the alleged structural unemployment subsides, this is the sort of service economy we will be left with -- the full flowering of communicative capitalism. We are consigned by automation to industrialized, mass-produced individuality that we must never stop blathering about.
The End of the Consumer Society (20 July 2011)
When the financial crisis began in earnest, lots of articles began appearing about the "new frugality" and the inevitable change in values that would occur in the absence of easy debt financing for consumer spending sprees. Financial analysts like David Rosenberg were pushing the argument that we would experience "secular changes in attitudes towards credit, savings, discretionary spending and homeownership" (original link broken; I cited it here) that would prevent a return to a consumer-driven economy. Apparently that's exactly what's happening, judging by this column from Sunday's NYT by David Leonhardt: "Consumer spending will not soon return to the growth rates of the 1980s and ’90s," he avers. "They depended on income people didn’t have." The evidence:

The Federal Reserve Bank of New York recently published a jarring report on what it calls discretionary service spending, a category that excludes housing, food and health care and includes restaurant meals, entertainment, education and even insurance. Going back decades, such spending had never fallen more than 3 percent per capita in a recession. In this slump, it is down almost 7 percent, and still has not really begun to recover.

Jared Bernstein is somewhat skeptical of this, pointing out that consumer spending's share of GDP hasn't declined and that the actual structural economic change is in residential investment. Though some economic commentators have been expecting an imminent housing recovery, Scott Sumner makes the case that housing seems unlikely to pick up because the rate of household formation has slowed dramatically, which he attributes to immigration crackdowns and the fact that more "20-somethings who can’t get jobs are living with their parents."
This concatenation of persistent joblessness, frugality, economic stagnation, and grown-up children living with their parents is reminiscent of what happened during Japan's "lost decade," and it probably won't be long until we begin reading more about the American equivalent of hikikomori. I have been generally skeptical that consumerist attitudes would change regardless of income, because I see consumerism as a reflection of using goods to rearticulate status hierarchies, not as materialism as such. Goods are a communicative system; the new frugality alters the meanings of some of the terms but doesn't impoverish the language altogether. Here's what I wrote before:
But has the specter of slackerdom now been cast across the land? Has compulsive frugality moved us beyond competitive conspicuous consumption and the corrosive values of consumerism, despite those values' deep embeddedness in the discourses that structure our society? In the face of the enormous bulk of advertising and the engrained notion that status must be indicated through savvy product choices, have we really started to turn our backs on consumerism and adopt a post-materialistic attitude, as so many cultural critics have long urged? Maybe those people living paycheck to paycheck who are running out of money at the end of the month are learning to see that it's actually best that way. They are enjoying the really important things that deprivation can reacquaint them with -- togetherness, family, nature, and so on. Likewise, underemployment is a chance to enjoy the riches of leisure, if you can block out the nagging insecurities of precarity from your mind.
At the Economist's American politics blog, Will Wilkinson recently championed the post-materialist way, citing this survey as proof that prosperity engenders a shift in values toward autonomy and self-expression once economic survival is guaranteed. Presumably being able to show how creative you are on your own terms becomes more important than showing how much stuff you have after working hard for someone else. Wilkinson's counterpart at the blog, Matt Steinglass, concurred, writing, "What I'm trying to say here is that it seems to me that people may just be sick of buying new stuff. Or at least of buying the kinds of new stuff that the consumer economy of recent decades has been based on producing." Steinglass suggests a recovery will be driven by collectively demanded goods -- infrastructure and the like -- but of course that would need to be administered by the state, which is politically infeasible, given today's GOP, which doesn't care about economic recovery or rotting infrastructure but only about its rentier clients.
Has there been a general shift in values, though? Do people want less stuff and are thus willing to work less? Do we choose unemployment over drudgery and better appliances? Are we all eager to "take our share of the economic surplus in leisure," as it's sometimes technocratically expressed? (I wonder if this is a reason new households aren't forming. Opting out of parenting, say, is a frugal lifestyle choice.) Reading values from macroeconomic data seems like slippery hermeneutics. (The mere fact of a drop in consumer spending doesn't necessarily mean a drop in the desire to spend, unless you assume revealed preference is the only reality that matters.) Wilkinson makes the case that for creative-class types, being an economic free agent isn't so terrible once you choose autonomy over material goods. Rather than make as much as you can, you can be a "threshold earner," make what you need for your minimalist lifestyle, and then segue into "medium chill" mode, to use David Roberts's coinage: "This is me," Wilkinson admits. "I don't want to maximise income. I want to maximise autonomy and time for unremunerative but satisfying creative work." To an extent, that is me also, and it's indicative of my relative privilege and my inherited social and cultural capital and whatnot -- it's a reflection of having a safety net in place (in the form of supportive relatives, inherited income, social capital, etc.) and the confidence that one could find more work whenever necessary. But, as Wilkinson notes, "whatever our level of education, if unemployment benefits and odd jobs add up to enough to keep us above a socially acceptable material threshold, we will not be in a hurry to accept any available employment, no matter how unpleasant or unsuitable." That sounds a little like work refusal and is perhaps why the safety net is under such systematic assault. This is precisely the sort of value shift capitalism can't afford. 
If capitalists lose their leverage over labor, from whom will they extract surplus value? If you can't force people to work for you and enrich yourself from it, what's the sense of being a capitalist? (Ideally the capitalists would see it that way as well, surrender to the multitude, and help usher in the great socialist millennium.)
Consumerist values have always been deployed to militate against work refusal, but they don't work as an effective carrot when there isn't enough money in circulation, or enough wages, to permit people to play that game. (Section three of this article by Joshua Clover on the financial crisis offers an overview of how and why this can happen). You can try to get people to strive for alternate forms of currency (attention, status) but if that doesn't work, that's when the stick of precarity comes into play. It seems like the end of consumerism because the stick is out and the carrot has vanished. Leonhardt, et al., are arguing that the carrot can't come ever back, which seems to imply precarity forever. If that's the case, the natural ideological move would be to sell precarity as liberation, just as consumer choice was once sold, while fighting to assure that liberation never occurs in practice, the safety net is perennially endangered, and autonomy remains instead merely a plausible dream form most people.
The Federal Reserve Bank of New York recently published a jarring report on what it calls discretionary service spending, a category that excludes housing, food and health care and includes restaurant meals, entertainment, education and even insurance. Going back decades, such spending had never fallen more than 3 percent per capita in a recession. In this slump, it is down almost 7 percent, and still has not really begun to recover.
Jared Bernstein is somewhat skeptical of this, pointing out that consumer spending's share of GDP hasn't declined and that the actual structural economic change is in residential investment. Though some economic commentators have been expecting an imminent housing recovery, Scott Sumner makes the case that housing seems unlikely to pick up because the rate of household formation has slowed dramatically, which he attributes to immigration crackdowns and the fact that more "20-somethings who can’t get jobs are living with their parents."
This concatenation of persistent joblessness, frugality, economic stagnation, and grown-up children living with their parents is reminiscent of what happened during Japan's "lost decade," and it probably won't be long until we begin reading more about the American equivalent of hikikomori. I have been generally skeptical that consumerist attitudes would change regardless of income, because I see consumerism as a reflection of using goods to rearticulate status hierarchies, not as materialism as such. Goods are a communicative system; the new frugality alters the meanings of some of the terms but doesn't impoverish the language altogether. Here's what I wrote before:
Consumerist society has for too long emphasized possessions as the route to social recognition, not collaboration. The tangibility of objects seems to substantiate the argument -- the inarguable presence of more stuff seems to testify to a richer life, and marketing gives all that stuff rich meanings and fully developed fantasies we can readily enter into vicariously. And our ability to soberly question consumerism's role in our lives is hampered not only by our hedonism (the familiar and common-sense-seeming logic of "more is better") but by the persuasion industry's relentless collective efforts to invalidate ways of life that are not reliant on consuming products. Lifestyle magazines and the styles sections in newspapers help by making frugality into a trend that is marked by buying certain products and shopping at certain stores. The underlying message: We can spend less but remain consumers. So we don't need to fear.

And here's another post I wrote in April 2009 about the "gleefully frugal":
Taking its cue from the press, the ad business will try to sell us anticonsumer goods, goods that paradoxically promise to fit in to our new recession-minded lifestyle. This not only helps ad firms continue to sell ads in a downmarket, but also helps ads maintain their prominence in the sum of our daily thinking. Ads preserve a lever in our minds, so they can reorient us to luxury when the time comes.
But has the specter of slackerdom now been cast across the land? Has compulsive frugality moved us beyond competitive conspicuous consumption and the corrosive values of consumerism, despite those values' deep embeddedness in the discourses that structure our society? In the face of the enormous bulk of advertising and the engrained notion that status must be indicated through savvy product choices, have we really started to turn our backs on consumerism and adopt a post-materialistic attitude, as so many cultural critics have long urged? Maybe those people living paycheck to paycheck who are running out of money at the end of the month are learning to see that it's actually best that way. They are enjoying the really important things that deprivation can reacquaint them with -- togetherness, family, nature, and so on. Likewise, underemployment is a chance to enjoy the riches of leisure, if you can block out the nagging insecurities of precarity from your mind.
At the Economist's American politics blog, Will Wilkinson recently championed the post-materialist way, citing this survey as proof that prosperity engenders a shift in values toward autonomy and self-expression once economic survival is guaranteed. Presumably being able to show how creative you are on your own terms becomes more important than showing how much stuff you have after working hard for someone else. Wilkinson's counterpart at the blog, Matt Steinglass, concurs, writing, "What I'm trying to say here is that it seems to me that people may just be sick of buying new stuff. Or at least of buying the kinds of new stuff that the consumer economy of recent decades has been based on producing." Steinglass suggests a recovery will be driven by collectively demanded goods -- infrastructure and the like -- but of course that would need to be administered by the state, which is politically infeasible, given today's GOP, which doesn't care about economic recovery or rotting infrastructure but only its rentier clients.
Has there been a general shift in values, though? Do people want less stuff and are thus willing to work less? Do we choose unemployment over drudgery and better appliances? Are we all eager to "take our share of the economic surplus in leisure," as it's sometimes technocratically expressed? (I wonder if this is a reason new households aren't forming. Opting out of parenting, say, is a frugal lifestyle choice.) Reading values from macroeconomic data seems like slippery hermeneutics. (The mere fact of a drop in consumer spending doesn't necessarily mean a drop in the desire to spend, unless you assume revealed preference is the only reality that matters.) Wilkinson makes the case that for creative-class types, being an economic free agent isn't so terrible once you choose autonomy over material goods. Rather than make as much as you can, you can be a "threshold earner," make what you need for your minimalist lifestyle, and then segue into "medium chill" mode, to use David Roberts's coinage: "This is me," Wilkinson admits. "I don't want to maximise income. I want to maximise autonomy and time for unremunerative but satisfying creative work." To an extent, that is me also, and it's indicative of my relative privilege and my inherited social and cultural capital and whatnot -- it's a reflection of having a safety net in place (in the form of supportive relatives, inherited income, social capital, etc.) and the confidence that one could find more work whenever necessary. But, as Wilkinson notes, "whatever our level of education, if unemployment benefits and odd jobs add up to enough to keep us above a socially acceptable material threshold, we will not be in a hurry to accept any available employment, no matter how unpleasant or unsuitable." That sounds a little like work refusal and is perhaps why the safety net is under such systematic assault. This is precisely the sort of value shift capitalism can't afford. 
If capitalists lose their leverage over labor, from whom will they extract surplus value? If you can't force people to work for you and enrich yourself from it, what's the sense of being a capitalist? (Ideally the capitalists would see it that way as well, surrender to the multitude, and help usher in the great socialist millennium.)
Consumerist values have always been deployed to militate against work refusal, but they don't work as an effective carrot when there isn't enough money in circulation, or enough wages, to permit people to play that game. (Section three of this article by Joshua Clover on the financial crisis offers an overview of how and why this can happen.) You can try to get people to strive for alternate forms of currency (attention, status), but if that doesn't work, that's when the stick of precarity comes into play. It seems like the end of consumerism because the stick is out and the carrot has vanished. Leonhardt, et al., are arguing that the carrot can't ever come back, which seems to imply precarity forever. If that's the case, the natural ideological move would be to sell precarity as liberation, just as consumer choice was once sold, while fighting to ensure that liberation never occurs in practice, the safety net is perennially endangered, and autonomy remains merely a plausible dream for most people.
Labels: austerity, new elites, new frugality, precarity, unemployment
Google and goon squads (15 July 2011)
This December 2010 post by Peter Frase, addressing how capitalism might cope with technology's diminishing the need for labor inputs, has deservedly been put into broader circulation by Matt Yglesias and Metafilter. Frase sets up a thought experiment based on the Star Trek fantasy of a world in which productive labor has been rendered unnecessary, energy supplies are inexhaustible, and all humans apparently share in universal prosperity. Given these conditions, Frase wonders "how would it be possible to maintain a system based on money, profit, and class power?" Would capitalist relations continue to organize society even in the absence of the scarcities that legitimize those relations? If so, how? (Also, are we headed to this sort of society, given the persistence of unemployment and the arguably structural problems with Western economies that economist Michael Spence discusses here?)
Frase imagines that such a society would lean heavily on intellectual property law, presumably enforced by a draconian, all-encompassing surveillance state. It's not too hard to imagine Google facilitating this under the Orwellian auspices of "Don't be evil," especially after reading this article by Evgeny Morozov. "History is rife with examples of how benign and humanistic ideals can yield rather insidious outcomes—especially when backed by unchecked power and messianic rhetoric," he notes, citing Siva Vaidhyanathan's argument from The Googlization of Everything that "the triumph of neoliberalism has made the 'notion of gentle, creative state involvement to guide processes toward the public good ... impossible to imagine, let alone propose.' " As manufacturing increasingly becomes a matter of information rather than manpower, Google's control of the information economy will potentially afford it the opportunity to impose a social structure. We would all essentially work for Google, whether (to draw on Frase's categories of post-productive labor) we are producing, sorting, and circulating content to augment its social value -- immaterial labor, by Lazzarato's definition, which Hardt expands to affective labor; I've written a bunch of posts about this sort of thing -- or whether we are muscle for intellectual-property enforcement (lawyers and "guard labor," to use the term Frase adopts from this paper).
Both immaterial labor and lateral surveillance seem to be expanding under the auspices of commercial social media and, as Frase notes, gamification, establishing the infrastructure and the mores to prevent informationalization from leading to an expansion of the commons, as P2P enthusiasts hope. Frase links to Yochai Benkler's Wealth of Networks, which sounds an optimistic note about the increased role of sharing and cooperation in production. Benkler's analysis resembles in some ways the Marxist theories regarding the "general intellect" that have evolved out of this cryptic section of the Grundrisse. (My effort at decoding it here.) Hardt and Negri extrapolate from the productive cooperation of the "general intellect" -- the development of which capitalists theoretically must foment to sustain productivity -- something they call the Multitude, an emerging political force that transcends state power and instantiates some sort of spontaneously self-organizing communism made of networks and flows. But it seems as though Web 2.0 companies are developing precisely to pre-empt such possibilities, to enclose the emerging commons and fuse them to structures that emphasize competition and individualism in the midst of enhanced sociality, that foreground status hierarchies rather than dissolve them, that articulate class distinctions rather than undermine them, and so on. Social media foster new forms of "artificial" scarcity (in attention, fame, relevance, identity, etc.) even as they ease inequalities in access to cultural goods. We can all download all the music and movies we want and remix them to our hearts' content, but this doesn't touch the inequalities that form the basis of class.
And reproducing class, guaranteeing that pre-existing inequalities in wealth and power can be reproduced and carried forward even in the absence of more-traditional methods of labor exploitation, is capitalism's primary raison d'etre (not increasing productivity or freedom or the "wealth of nations").
That's an implicit point of Frase's thought experiment, I think -- to suggest that no amount of prosperity or labor reduction will get rid of the class system and the exploitation it engenders structurally. It's not a set of social relations designed to promote equality, but its opposite. It creates a dynamic set of values that protect privilege in the face of abundance, in the face of technological improvements, in the face of developments that threaten to invalidate the aristocratic pretenses to inborn and inaccessible superiority.
Frase wonders where the money will come from to sustain the society of the future if zero-marginal-product workers have no right to expect to earn anything (according to neoclassical economic models) in a post-productive economy.
Thus it seems that the main problem confronting the society of anti-Star Trek is the problem of effective demand: that is, how to ensure that people are able to earn enough money to be able to pay the licensing fees on which private profit depends. Of course, this isn’t so different from the problem that confronted industrial capitalism, but it becomes more severe as human labor is increasingly squeezed out of the system, and human beings become superfluous as elements of production, even as they remain necessary as consumers.

He wonders if capitalist ideology would be flexible enough to permit the guaranteed wage system this dilemma seems to require -- people get issued some token amount of money to keep the wheels spinning -- and if this nonetheless implies stagnation, the end of capitalist growth (and possibly capitalism itself). The issue seems to hinge on the difference between that minimal wage paid out (which stultifies its recipients, locks them in class position) and the creation of economic value that continues to accrue to capitalists. The value creators -- the minions of the general intellect -- need some nominal amount of money circulating among themselves to lubricate the gears of the social factory, but enough real value must be extracted from that factory to sustain the class divide -- to forestall redistributive effects. (My postulate is that capitalists will not create or sustain enterprises that redistribute wealth, only ones that concentrate it.) That value probably can't continue to be denominated in the same currency as the wages. Perhaps this is why more people are becoming content to work for attention, especially in the sectors most transformed by information technology, the ones subsumed by code. Google has indeed rolled out "badges" to reward users for consuming and processing news stories through its interface, as Rob Walker notes here.
In the dystopian Google-run world of the future, workers will have attention rankings and goon-squad thug power to oppress one another and promote general insecurity; meanwhile real power and privilege will adhere to the corporation, its big shareholders, and those politicians it patronizes to protect itself.
Feedback Loops and Self-Consciousness (7 July 2011)
I tend to view reflexivity as a burden, the cost one pays for the broader freedom to shape one's own destiny that modern life has brought to people in wealthy countries. Modernity has brought mediation and mobility and a certain amount of anonymity that lets us become what we want to be, but determining what that is requires a paradoxical sort of self-knowledge -- taking active steps to become what one is supposed to inherently be. This condition is what sociologists like Giddens call ontological insecurity. I'm ceaselessly arguing that technological developments exacerbate this condition while pretending to ameliorate it, mainly because capitalism works better with insecure subjects.
So I'm pretty skeptical of the "quantified self" movement and other efforts to increase the amount of self-knowledge we are burdened with at any given moment. These seem to fundamentally split us, imposing mind/body problems onto us technologically. And they also seem to become self-surveillance, with the data collected on oneself being made available to outside parties for purposes of social control.
I am not persuaded to think otherwise by this Wired article by Thomas Goetz praising the magic power of feedback loops, a commonplace idea that is treated here as if it were some disruptive innovation. Yes, when people are given information about themselves in real time, they will generally change their behavior. In other words, self-monitoring changes the self. The observer effect holds for self-awareness. But I'm not sure the resulting changes can be regarded as automatically beneficial; that seems like naive positivism to me. And I couldn't get past my sense that the article existed ultimately to hype a bunch of tech companies and their great gifts to the world, smart meters for the self: "The feedback loop is an age-old strategy revitalized by state-of-the-art technology. As such, it is perhaps the most promising tool for behavioral change to have come along in decades." At points, Goetz's rhetoric is breathless, as when he discusses David Rose, founder of a company that makes devices that get users to take medicine.
Borrowing a concept from cognitive psychology called pre-attentive processing, Rose aims for a sweet spot between these extremes, where the information is delivered unobtrusively but noticeably. The best sort of delivery device “isn’t cognitively loading at all,” he says. “It uses colors, patterns, angles, speed—visual cues that don’t distract us but remind us.” This creates what Rose calls “enchantment.” Enchanted objects, he says, don’t register as gadgets or even as technology at all, but rather as friendly tools that beguile us into action. In short, they’re magical.
Yes, very magical when technology can program us unobtrusively. Take away the benevolent aim of these particular devices and what's left is design as propaganda. How enchanting.
Goetz buys the argument that feedback loops cater to humans' innate striving and are an expression of evolution at work rather than the extension of a regime of quantification and data generation.
Evolution itself, after all, is a feedback loop, albeit one so elongated as to be imperceptible by an individual. Feedback loops are how we learn, whether we call it trial and error or course correction. In so many areas of life, we succeed when we have some sense of where we stand and some evaluation of our progress. Indeed, we tend to crave this sort of information; it’s something we viscerally want to know, good or bad. As Stanford’s Bandura put it, “People are proactive, aspiring organisms.” Feedback taps into those aspirations.
All these propositions seem ideological to me: that learning is a matter of self-monitoring, that success must be measured to be valid, that humans inherently crave confirmation of individual status, that feedback taps pre-existing aspirations rather than inculcating them. These propositions support the overriding idea that self-regulation must be put in service of facilitating competition -- the capitalist way, and the essence of the form of subjectivity assumed by neoliberalism. The meaning of our existence is to be calculated on life's great balance sheet, with feedback loops allowing us to perform the requisite accounting duties. At the same time, feedback implicitly makes us personally responsible in real time for the performance being measured. The gadgets that give us real-time feedback are part of the neoliberal imperative to shift risk on to the individual, making concrete the idea that you alone are responsible for how society is failing you. It's right there in the numbers that you need to try harder.
All of this is to say that feedback loops are mechanisms of social control that are all the more effective as they masquerade as self-regulation; they are not liberating forces bequeathed by magic technology firms to help us improve ourselves according to some transcendent goal for ourselves that we devise.
Life As a Stock Photo (5 July 2011)
I reblogged this bad stock photo on tumblr and was content to leave it at that, but I continued to think about it.
As you can see, it's a picture of fake patrons at a bookstore pretending to shop. The Economist's blog picked it up to illustrate a post about Borders' bankruptcy and the loss of book stores as quasi-public space. I assume the photographer used fake people because you would possibly need to get permission clearances from genuine shoppers if the photo was not documentary? (Somebody out there who knows how this works, please let me know. Comment or email at horning at popmatters.com)
As you can see, it's a picture of fake patrons at a bookstore pretending to shop. The Economist's blog picked it up to illustrate a post about Borders' bankruptcy and the loss of book stores as quasi-public space. I assume the photographer used fake people because you would possibly need to get permission clearances from genuine shoppers if the photo was not documentary? (Somebody out there who knows how this works, please let me know. Comment or email at horning at popmatters.com)
One of my many suspicions is that stock photos such as these will be easily replaceable with far more genuine-looking photos from social media pages. This seems like an SEO-generated page, but it gets at another question I have:
Stock photography companies have lost some of their market share in the digital age, but Flickr is not the biggest cause. The creation of digital cameras has resulted in more people taking and sharing pictures as opposed to the old days of film when there were only a select few in the photography business. Also, non-commercial licenses mean that businesses cannot use those free photos.
So digital cameras make it easy for aspiring photographers to try to enter the business of selling photos; indeed Getty has a partnership with Flickr to try to help amateurs commercialize what they share (and drive labor prices down).
But wouldn't it be just as easy for publications to search a Facebook database for a photo and pay them a nominal fee for the rights to it (and drive the photographer's wages to zero)? Is there anything in Facebook's terms of service that explicitly prevents the company from doing that, given that it asserts ownership rights over the stuff people upload there? Or is there a non-commercial licensing regime that protects you from having your photos appear in contexts where you don't want them? Must users rely on Facebook's good will to keep that from happening?
I already felt that anything I shared on Facebook ended up recontextualized in ways I couldn't control or understand, and this made me feel inordinately and perpetually defensive. When he quit Facebook, Henry Farrell of Crooked Timber noted that "one of the things I like about the Internets is that I can present myself in different ways. This isn’t the result of a lack of integrity – you need to present different ‘selves’ if you want to engage in different kinds of dialogue." That makes perfect sense to me; I think my lack of context control was making me opt out of many potential dialogues altogether. I also feared that all the decontextualized communication was collapsing the boundaries that need to be in place for friendship to thrive. If you start seeing what friends do in their alternate contexts, they can seem less likable, or deceptive, or just not as focused on me as I narcissistically prefer them to be and can pretend they are without any evidence to the contrary. Facebook lets you see your friends consistently at their most preening, posturing, needy, and self-exploiting -- not generally their best sides, but in a Zuckerberg world, their only true side with "integrity."
It would be terrible if Facebook resold images you shared without telling you. But it would be even worse if they offered to cut you in, I think. Social media makes it seem natural and validating that one could consider selling one's snapshots to marketers and publishers so that they could be used for some commercial purpose. Social media invites you to turn your own life into a series of stock photos.
Labels:
facebook,
forced sharing,
immaterial labor,
photography,
surveillance
Vagaries of attention (1 July 2011)
There's an important distinction between attention and recognition, though I think we easily confuse them in speech and in practice. We seek attention when we want recognition, some sense of our worth or integrity to others. Attention is a necessary prerequisite for recognition, but doesn't always lead to a feeling of having been recognized. My main fear about social media is that it is becoming harder to translate attention into recognition without their aid. Increasingly, attention that isn't in some way mediated seems inert, if not unsettling and creepy.
I have this feeling that people are going to become more and more wary of direct face-to-face attention because it will seem like it's wasted on them if it's not mediated, not captured somehow in social networks where it has measurable value. I imagine this playing out as a kind of fear of intimacy as it was once experienced -- private unsharable moments that will seem creepier and creepier because no one else can bear witness to their significance, translate them into social distinction. Recognition within private unmediated spaces will be unsought after, the "real you" won't be there but elsewhere, in the networks.
I have an essay up at the New Inquiry about artist Laurel Nakadate, whose work is, I think, about this emerging condition -- about becoming increasingly unavailable to attention in the moment, wholly ensconced by self-consciousness. Receiving attention in real time can't confirm anything about how you want to feel about yourself; it becomes a portal to a deeper loneliness -- the way out seems to be to mediate the experience, watch it later, transmute it into something else. In short, we are losing the ability to feel recognized in the moment, which strands us further and further away from fully inhabiting our bodies in the present. We are always elsewhere, in the cloud.
Facebook Updates and Disinformation (30 June 2011)
Sociologist Nathan Jurgenson has an interesting post about Facebook and his skepticism about proclamations of the end of privacy and anonymity. He deploys the postmodernist/poststructuralist insight that each piece of information shared raises more questions about what hasn't been said, and thus strategic sharing can create different realms of personal privacy and public mystery.
We know that knowledge, including what we post on social media, indeed follows the logic of the fan dance: we always enact a game of reveal and conceal, never showing too much, else we have given it all away. It is better to entice by strategically concealing the right “bits” at the right time. For every status update there is much that is not posted. And we know this. What is hidden entices us.
I think this is missing the point. I feel like I need to use all caps to stress this: LOTS OF PEOPLE DON'T WANT ATTENTION. They don't want to be enticing. Privacy is not about hiding the truth. It's about being able to avoid the spotlight.
The people who are freaked out by Facebook are the ones who aren’t trying to create an air of mystery about themselves. They are people who don’t want additional attention and don’t want to be snooped on, and don’t want to raise more questions and interest about themselves every time they are compelled to share something or inadvertently share something online. Something as simple as RSVPing for a Facebook event can set off a chain reaction of unwanted curiosity and accidental insult if one’s not careful. But social media mores force us to make such RSVPing a public matter, because it benefits the event thrower to pad out the expected crowd, etc. Sharing usually doesn't serve a personal agenda, even one of self-promotion. Often sharing is default exposure that helps someone else sell your attention and presence (to advertisers, etc.)
The fact that every piece of information is incomplete is precisely why people feel overexposed, because it means that everything that gets shared (often against their will) invites more scrutiny into their lives. This is why they feel like they have lost their privacy. Not because perfect information about them is out there, but because the teasing bits of information circulating seem to orient the surveillance apparatus on them. And that surveillance apparatus is distributed so widely, it feels inescapable that speculative information will be produced about you and attached to your identity online for anyone to find. That is the problem, not oversharing. The end of anonymity is not about people knowing accurate things about you; it’s about enough people who know you being in the micro-gossip business to make you feel unfairly scrutinized and libeled.
Jurgenson points out correctly that “‘Publicity’ on social media needs to be understood fundamentally as an act rife also with its conceptual opposite: creativity and concealment.” It also needs to be understood that of course people who are comfortable with sharing are not exposing their authentic character—even if there were such a thing as an authentic self. The point is that they enjoy constructing that pseudo-celebrity self through social media and feel recognized when they are gossiped about and circulated. But the rest of us are being forced to play their game, on their terms, at an inherent disadvantage.
I don’t want to have to send out reams and reams of disinformation online to “protect” my privacy (which is one of the reasons I am not on Facebook anymore. I thought it was stupid that I only logged in to it to play defense). I don’t want to share only to bury things that have come to embarrass me or prompted responses I don’t like. I don’t believe I have the time or inclination to try to imagine what new creative interpretations and lacunae I can generate with my sharing so as to convey the right impression, cultivate the right sort of fascination with me. I don’t want to be “fascinating.” I don’t want to be “seductive” in the Baudrillardian sense and “create magical and enchanted interest” (to use Jurgenson’s phrase). Others may revel in that fantasy, but I don’t want to have to adopt their code if I can help it.
But it may not be tenable for me to avoid it for much longer. For instance, events I might want to go to will be publicized only through Facebook, and I will end up missing out on everything if I’m not trackable there. Everyone I know will be in the social-media circus tent, and I will have to join them.
Social media confronts us with how little control we have over our public identity, which is put into play and reinterpreted and tossed around while we watch -- while all the distortions and gossip get fed back to us by the automated feedback channels. Some people find this thrilling. Others find it terrible. It’s always been true that we don’t control how we are seen, but at least we could control how much we had to know about it. It’s harder now to be aloof, to be less aware of our inevitable performativity. We are forced instead to fight for the integrity of our manufactured personal brand.
Defining Neoliberalism (24 June 2011)
Forgive me if this post becomes excessively pedantic (a warning I should probably affix to all of them), but I mainly want to try to clarify some stuff for myself here. I tend to use a mishmash of terms -- neoliberalism, post-Fordism, affective labor, immaterial labor, general intellect, etc. -- without as much consistency as I'd like, and I want to do better. (Or perhaps I am trying to bore whatever readership I have into nonexistence, if that hasn't already happened.)
Neoliberalism is a term I didn't know in the 1990s, when I first started reading leftist political theory -- instead the totalizing terms in vogue as far as I could tell were late capitalism, or globalization, or the New Economy, or consumer society. In the past decade, though, neoliberalism has emerged as the go-to term on the left to describe the way capitalism (particularly "post-Fordist" capitalism; that is, "postindustrial" relations of production that are dominated not by factory-organized manufacturing but by services, brands, etc.) has been administered politically more or less since the 1970s. It's potentially confusing to those used to associating anything with the word liberal in it with the left, as neoliberalism is "liberal" in the sense of protecting the "free" functioning of markets, not necessarily extending the liberty of people or guaranteeing them equal opportunities.
It seems that "neoliberalism" gains its currency from Foucault's use of it in a series of lectures for courses he gave in the late 1970s, which only recently have been translated into English and published. That seems as good a place as any to get to the bottom of it. In his February 14, 1979, lecture (collected in The Birth of Biopolitics -- not a helpful title, as it dumps another confusing term into the mix), Foucault develops the point that neoliberalism is not a matter of limiting government intervention in society; it is a matter of intervening heavily to protect a particular vision of how markets should work in order to be "fair." Foucault sums up this position this way: "One must govern for the market, rather than because of the market."
The market as we know it is not some natural phenomenon; it is produced through the state's approach to governing. Neoliberal governance regulates markets to make competitiveness their essential feature.
Neoliberal governmental intervention is no less dense, frequent, active, and continuous than in any other system. But what is important is to see what the point of application of these governmental interventions is now. Since this is a liberal regime, it is understood that government must not intervene on effects of the market. Nor must neoliberalism, or neoliberal government, correct the destructive effects of the market on society, and it is this that differentiates it from, let's say, welfare or suchlike policies that we have seen [from the twenties to the sixties]. Government must not form a counterpoint or a screen, as it were, between society and economic processes. It has to intervene on society as such, in its fabric and depth. Basically, it has to intervene on society so that competitive mechanisms can play a regulatory role at every moment and every point in society and by intervening in this way its objective will become possible, that is to say, a general regulation of society by the market.
This seems important to the question of why politicians don't care about achieving full employment. So-called welfare policies impede the ideological imperative of letting (constructed) markets legislate everything -- of legitimizing markets so conceived. This in turn legitimates the state. (Jodi Dean highlights this thesis in her summary of these lectures: "The economy produces legitimacy for the state that is its guarantor.")
When neoliberal politicians make demagogic claims about getting government out of people's lives, what they really mean is that they want to implement a social policy of saddling the individual with bearing as much risk as possible so that markets can have pristine price signals. "To the same extent that governmental intervention must be light at the level of economic processes themselves, so must it be heavy when it is a matter of this set of technical, scientific, legal, geographic, let's say, broadly, social factors which now increasingly become the object of governmental intervention," Foucault explains. Controlling social factors "involves an individualization of social policy and individualization through social policy, instead of collectivization and socialization by and in social policy. In short, it does not involve providing individuals with a social cover for risks, but according everyone a sort of economic space within which they can take on and confront risks." Government actively shapes society to posit individualism through the risks we all personally must bear: we know ourselves as discrete individuals because we intimately know how vulnerable we are; then it becomes incumbent upon us to make the best of it, take an entrepreneurial attitude toward that risk in the ostensibly level economic playing field. Another way of putting that: government attempts to craft the subjectivity of its subjects along entrepreneurial lines through various institutional interventions. The way the state wants markets to function, Foucault suggests, determines how we function -- how we are able to conceive of ourselves, our problems and how they can be solved. I don't know, maybe that is sort of self-evident.
Moreover, inequality is not an accident of this system but its deliberate creation. In the following passage (which gives a good flavor of the cumbersomeness of these lectures), Foucault summarizes the logic of the "ordoliberals" -- German neoliberals of the 1930s who Foucault represents as sort of the anti-Frankfurt school -- in rejecting social-democratic principles or even socialized consumption.
In particular, relative equalization, the evening out of access to consumer goods cannot in any case be an objective. It cannot be an objective in a system where economic regulation, that is to say, the price mechanism, is not obtained through phenomena of equalization but through a game of differentiations which is characteristic of every mechanism of competition and which is established through fluctuations that only perform their function and only produce their regulatory effects on condition that they are left to work, and left to work through differences. In broad terms, for regulations to take effect there must be those who work and those who don't, there must be big salaries and small salaries and also prices must rise and fall. Consequently, a social policy with the objective of even a relative equalization, even a relative evening out, can only be anti-economic. Social policy cannot have equality as its objective. On the contrary, it must let inequality function and, I no longer recall who it was, I think it was Röpke, who said that people complain of inequality, but what does it mean? "Inequality," he said, "is the same for all." This formula may seem enigmatic, but it can be understood when we consider that for the ordoliberals the economic game, along with the unequal effects it entails, is a kind of general regulator of society that nearly everyone has to accept and abide by.
That pretty much sums up the neoliberal attitude: Inequality is the same for all, since the market doesn't discriminate -- it produces its inequalities with the same logic that applies to all the players. All the players, in turn, must adopt "neoliberal subjectivity" -- a kind of entrepreneurial desperation that looks for any possible way to mitigate personal risk and gain an individual economic edge that may be necessary for survival.
The main thing for me is this: Neoliberal society is not consumer society. Neoliberalism doesn't posit us all as passive, conformist consumers in thrall to mass culture and doomed to consume the surplus of mass-produced junk. Instead, as Foucault puts it,
The society regulated by reference to the market that the neo-liberals are thinking about is a society in which the regulatory principle should not be so much the exchange of commodities as the mechanisms of competition.... This means that what is sought is not a society subject to the commodity effect, but a society subject to the dynamic of competition. Not a supermarket society, but an enterprise society. The homo economicus sought after is not the man of exchange or man the consumer; he is the man of enterprise and production.
Neoliberalism makes consuming into a personal, private enterprise. We are obliged to view it as everybody's business, because it is essentially our own.
Anyway I'm reading an essay (pdf) by Thomas Lemke on these lectures now; may have more to add later.
UPDATE: This passage from Lemke's essay restates what I see as the crucial point of Foucault's analysis, that homo economicus -- the human governed entirely by incentives, human nature as posited by Freakonomics -- is fostered by state power: "neoliberalism admittedly ties the rationality of the government to the rational action of individuals; however, its point of reference is no longer some pre-given human nature, but an artificially created form of behavior. Neo-liberalism no longer locates the rational principle for regulating and limiting the action of government in a natural freedom that we should all respect, but instead it posits an artificially arranged liberty: in the entrepreneurial and competitive behavior of economic-rational individuals." Neoliberalism creates its own notion of freedom -- freedom of competition -- and disseminates it; subjects internalize it and regulate themselves in accordance with it. It is a state-sanctioned principle around which one can form an identity seemingly in accordance with the society one must live in. Transforming oneself into a personal brand can then appear to be an autonomous rational choice of the individual given the "reality" of the world.
Labels:
neoliberalism
Jobless recoveries and productivity surges (22 June 2011)
Does the business community care about achieving full employment? Why would it? With a slack labor market, companies can drive workers harder, treat them poorly, and cut their benefits as ruthlessly as if they were Greek citizens, and the employees have no recourse. That's the essence of the recent Mother Jones article by Monika Bauerlein and Clara Jeffery on the "great speedup" -- how companies are using improving productivity (a.k.a. working people much harder) to trim their payrolls and improve profits.
In all the chatter about our "jobless recovery," how often does someone explain the simple feat by which this is actually accomplished? US productivity increased twice as fast in 2009 as it had in 2008, and twice as fast again in 2010: workforce down, output up, and voilà! No wonder corporate profits are up 22 percent since 2007, according to a new report by the Economic Policy Institute. To repeat: Up. Twenty-two. Percent.
Why aren't we outraged? The authors suggest the American ideology of hard work and bootstrapping makes us particularly susceptible to drawing our self-esteem from overwork (maybe we all need to be reading Lafargue), though weak labor-protection laws also play a part in helping corporations get away with limiting vacation time, sick pay, parental leave and so on. They point out that "productivity" is more or less a euphemism for extracting more labor from workers in shorter time periods, glossing over the reality of bosses sweating employees by evoking some magical technological fix. Sometimes we are lulled into thinking that "productivity improvements" mean people work just as hard as ever but more stuff is made, thanks to magic machines. Machines often permit workers to do more faster, but some innovations -- communication technology, for instance -- serve to conceal the extension of the working day and the intensiveness of workplace discipline. And in all cases, technological accelerants have a corresponding effect on workers, who must keep up with the machines and may experience increased stress as the managerial pressure mounts. The authors argue that we should organize and fight for more respite from this pressure in the form of increased vacation, work-sharing programs, and the like. Implicit is a sanction of the idea of a productivity freeze, anathema to the traditional economic view that quality of life improves with productivity. Does that calculus shift when productivity yields persistent unemployment and corporate looting of labor's share of the gains from productivity? Perhaps capitalists believe fundamentally that labor doesn't deserve a share -- that seems to be the de facto position of the Republican party.
At his blog, Jared Bernstein has more on the slack labor market, with charts intended to demonstrate that "the diminished ability to bargain for their fair share of productivity growth is a major factor in the productivity/income split. You may think I’m talking unions here, but I’m not. I’m talking high unemployment." Lane Kenworthy, in a recent post about jobless recoveries, argues that data from recent recessions reveal that the "pattern of the 2000-07 business cycle may indicate a fundamental shift in employer practices, with greater reluctance to hire and eagerness to fire," citing this 2010 paper (pdf) by Robert Gordon about the apparent demise of Okun's Law, which associates growth with falling unemployment. Gordon claims that the data show that "the concept of a procyclical 'productivity shock' and 'technology shock' is no longer relevant, except in reference to particular major changes in the relative price of oil or other commodities" and surveys some of the other explanations for "structural labor‐market change." This one was particularly comforting:
Firms can reduce employment and hours with impunity if they no longer value the human capital embodied in their experienced workers and have confidence that via the internet they can find replacement employees with equivalent skills, and an ability to learn rapidly the necessary specific human capital to function well on the job.
In yesterday's post I was trying to make the case that internships are an exact reflection of this growing confidence. When jobs become a matter of harnessing the general intellect of the multitude, the individual nodes of the rhizome are interchangeable. Specific skills have become less important than general malleability, so there is no need to keep the same people around or show loyalty to employees to build organizational capital. More and more of us become, to use the term Gordon adopts, "disposable workers." The internet is exacerbating this process not only by making it easier to recruit fresh meat, but also by allowing capital to subsume more and more of everyday life, rendering more of production social (that is, a by-product of subjectivity formation and sociality in general). This polarizes the labor market further into superstars with irreplaceable skills in producing affects (the celebrities whose lives make the other commodities they associate with valuable, and the entertainers who more dependably stimulate us) and the rest of us proles who make up the wisdom of crowds.
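For what it's worth, the Okun's Law relationship that Gordon says is breaking down can be sketched in a few lines. The 3 percent trend-growth figure and 0.5 coefficient below are just the textbook ballpark numbers, not parameters from Gordon's paper:

```python
# Textbook Okun's-Law rule of thumb (illustrative parameters only):
# each point of real GDP growth above trend shaves roughly half a point
# off the unemployment rate, and vice versa.
def okun_unemployment_change(gdp_growth, trend_growth=3.0, coefficient=0.5):
    """Predicted change in the unemployment rate, in percentage points."""
    return coefficient * (trend_growth - gdp_growth)

# Growth at trend: unemployment predicted to hold steady.
print(okun_unemployment_change(3.0))  # 0.0
# Growth a point below trend: unemployment predicted to rise half a point.
print(okun_unemployment_change(2.0))  # 0.5
```

A "jobless recovery" is precisely the case this arithmetic misses -- positive growth alongside flat or rising unemployment -- which is why Gordon treats the old relationship as having come apart.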
But the overarching point of all these articles is that economic growth is not benefiting society as a whole but a smaller class of capitalists and rentiers -- and this appears to be by design, if you buy the story outlined here by Robert Reich (short video). The top 1% of the income distribution controls government, forcing it to pursue policies of austerity rather than those designed to improve employment, because the top 1%, shortsightedly or not, doesn't care about unemployed people, and in fact, can exploit them more ruthlessly the more desperate they are. And politicians don't care about them because constituents are easily demagogued into agreeing with the paymaster's political program. There is no apparent possible consensus, because the 1%ers behave as though improving society for all saps their wealth in zero-sum fashion. Meanwhile, the quality of life for the middle class erodes, government services are cut indiscriminately, and we slouch toward becoming a banana republic.
Labels:
fiscal policy,
new elites,
politics,
unemployment
Thoughts on Ross Perlin's 'Intern Nation' (21 June 2011)
Many of the reviews of Ross Perlin's recent book Intern Nation seem to be marked by a relief that someone else has taken the trouble to acknowledge (i.e. done all they can realistically be expected to do to solve) an obvious problem -- that interns are like strike-breakers before the fact, except they work not for compromisingly low wages so much as for the privilege of sucking up to bosses and building all-important white-collar social capital (here are two reviews that buck the trend). Employers in certain fields, ones that can establish a nominally plausible but essentially simoniac relationship with universities, can take advantage of loopholes in the U.S. labor protection laws to eliminate full-time positions and hire students as underpaid temps to do the work.
Once, this was restricted mainly to the "glamor industries" -- a situation well-condemned in a Baffler article from 1999 by Jim Frederick (sadly not online, Google books preview here). In the past decade, internships have spread throughout the postindustrial sectors of the economy, so that a period of corvée labor in service to the managerial overlords in corporate America is now simply programmed into the college experience for most privileged kids. Many intern programs are a racket geared not toward democratizing economic opportunity but conserving privilege through nepotistic mechanisms, and the ones that aren't about ensuring that the children of elites meet suitable mentors are typically schemes for offloading as much menial work as possible onto unpaid laborers who have no recourse to legal protections.
Not that anyone should have been surprised by this situation. If you have ever worked in offices or even watched TV shows about working in offices, how can you be surprised that internships aren't especially educational? Or that companies are eager to gain leverage over workers however they can? Does anyone think businesses would do anything but exploit interns as long as they can get away with it? And does anyone think labor-law enforcement is anywhere close to being a priority for the series of neoliberal administrations the U.S. has endured since Reagan? Reviewers least of all should have been surprised, as most of the publications use interns unapologetically and some of the reviewers admit to having been interns themselves. Presumably they see their own experience as an exception to the general rule of exploitation and the conservation of privilege to those with connections. Internships in general might clearly seem exploitative, eviscerating hard-earned rights for workers, but one's own internship never seems so bad: it gives a necessary leg up, lets you meet some really nice, really useful people.
But if you find internships problematic enough to hail a book like this, then you should be concerned about the collective action problem that perpetuates the system -- the individualistic ethos of interns that prompts them to participate in an obviously corrupt system so that they can get the edge on their competitors in the labor market. Such an attitude is the outcome of an extended effort by employers and universities to normalize the abuse of internships as pseudo-educational training programs (in his book, Perlin effectively traces this), which in turn is part of the larger sweep of neoliberal reform, which is intended to accustom individual workers to a dog-eat-dog capitalist world and get them to shoulder more and more social risk personally. Under neoliberalism, individuals must live as though there is no social safety net, as though they are perpetually vulnerable to bad fortune and must thus accrue as much unfair advantage over others as they can as a form of private insurance. Internships are merely among the first steps in this dismal education process.
Perhaps the intended audience for the book is college students who might consider becoming interns themselves. Such an audience would be appropriate, because the most straightforward way to derail internship abuses would be to incite a general refusal among college-age workers to accept employment on those terms. Given the competitive realities of the labor market, however, such a turn of events seems extremely unlikely. Instead, internships acclimate those entering the workforce to the realities of at-will employment, which requires that employees perpetually exceed expectations and assume personal responsibility for developing the necessary strategies for success within the firm. To train interns in a set of procedures agreed upon in advance would undermine the whole purpose of internships for most enterprises; internships serve as an elaborate test to see if the interns can invent ways to make themselves useful. If you have to tell them how to be useful, they are not being trained in the realities of the workplace, which necessarily puts a premium on self-starting and solving problems for oneself in the course of the working day (which contributes to overall efficiency and productivity). Thus rote obedience is often less important for interns than being the sort of worker who can intuit ways to be useful through careful observation and open-ended obsequiousness. With the overriding imperative of making a good impression guiding them, successful interns master ways of sussing out the office power-brokers, sidling up to bosses, getting themselves heard in meetings despite their novitiate status, and so on. They create their own opportunities and are paid in goodwill (much like bloggers).
Perlin recognizes the place of internships in the "new spirit of capitalism" (to use Boltanski and Chiapello's phrase as he does), arguing that they represent the evolution of training programs to suit post-Fordist work conditions: "What structured training programs were to the bureaucratic firms of the mid-20th century, internships may well be to the new network capitalism of firms dealing with intangible goods." Modern production in many sectors has less to do with learning industry-specific skills that require focused apprenticeship. Instead, much of production has become social, as Tiziana Terranova explains in this talk. That is to say, production is less a matter of skilled labor than of what is sometimes called immaterial labor -- the ability to collaborate, cooperate and innovate, often in consumption processes, to enrich the symbolic value of commodities. Perlin notes that this serves to justify office internships as de facto training, no matter how menial the intern's actual jobs seem on the surface. "To an economist, the logic goes like this: if white-collar firms are increasingly organized around their intangible human or organizational capital—whether it be their brand or their 'culture of innovation'— then this is what interns have to learn, just as much as any specific skill set."
That sounds a lot like what post-autonomist theory has to say about post-Fordist labor conditions. In Grammar of the Multitude, Paolo Virno describes this shift in labor away from skills and toward congeniality and what he calls "virtuosity" -- work as ambitiously performative. Often this is a matter of innovating your own work processes, writing your own job description (as the management gurus might put it), finding opportunities to schmooze and impress higher-ups with useful ideas. Virno:
So it is that internships habituate novice employees to the conditions of the contemporary workplace, where job-specific skills are less relevant than the willingness to cooperate totally with co-workers and managers and make a show of one's perpetual availability and malleability. Much of career training for the postindustrial labor market involves learning to be a weasel, to be one of the cheese-hunting rats of the megaselling 1990s management tract Who Moved My Cheese? Interns learn to become eager self-exploiters. They accept that employers owe no such entitlements as loyalty to employees (superior employees earn favors; all workers shouldn't unfairly be granted rights uniformly); instead the constant demand for self-reinvention is inscribed as an opportunity for self-development, as an investment in one's own "human capital," though ultimately for the company's benefit.
Or to put it in Marxist terms, interns are taught to assimilate to the "general intellect," the wellspring of social productivity that Marx prophesized in the Grundrisse's "Fragment on Machines," and deploy it for capital's advantage. This manifests as an eagerness to show what one can do, to show what one is made of, to show how eager one is -- to show, in short, the quality of one's subjectivity in total and how well that total being can suits the company's needs. "Labor as subjectivity," as Virno, citing Marx, puts it. Interns are given the opportunity to prove that they are well-socialized and have the habitus of the go-getting office worker (or the chance to absorb that habitus quickly). Interns thus develop an entrepreneurial attitude toward their own capabilities and their everyday lives, scouring it all for opportunities to exploit for an employer's sake. Individuals not fortunate enough to inherit it are responsible for equipping themselves with "human capital."
But does this constitute real training? Perlin thinks it doesn't, which would make internships illegal. But enforcing that law wouldn't change the underlying conditions that require that workers adopt such a habitus, which means that interns' legality is something of a sideshow. Whether you are an intern or a mandatory freelancer, the situation is pretty much the same -- you have to constantly hustle. As Perlin notes, the need to supply oneself with human capital (rather than have it be a part of what society supplies to all its members out of an egalitarianism of opportunity) exemplifies how a broadening precarity affects the middle class, rendering its isolated members as responsible for outfitting themselves for social survival as the poor have long been. When internships fall to the nonelites, it still reflects this basic structure, that the intern should be willing to work for nothing because it is an investment in their future. Likewise, secondary education is entirely instrumental, an investment in one's future earning potential rather than an enrichment of one's human capabilities. Though skeptical of the human-capital thesis, Perlin tends to accept this view tacitly, seeing education basically as a mechanism for generating a wage premium.
Perlin's book concludes with a somewhat wide-eyed "Intern Bill of Rights" -- a series of propositions for creating a "common standard by which to evaluate and improve internships." The list seems to suggest that much of the problem with internships is semantic -- the word itself sugarcoats the reality of exploitation with educational pretenses. But why attempt to stabilize a definition of the word? It would seem more constructive to sully the term completely, along with the other euphemisms of the internship racket, like "human capital" and "situated learning," that serve only as tools of opportunistic obfuscation. Under the conditions Perlin advocates in this list, interns are part-time workers, plain and simple, and should be labeled as such. Those in positions that fail to meet those standards he elucidates are best understood as illegals, as scabs, regardless of how prestigious or entitled their upbringing has been. Fighting internships means fighting special favors, denying the personal touches, the exceptionalism of elite colleges -- it means confronting privilege head on and exposing it with contempt. But what we tend to do instead is take advantage of new tools to measure our prestige and figure out ways to leverage it into enduring privileges for ourselves and our friends.
Once, this was restricted mainly to the "glamor industries" -- a situation well-condemned in a Baffler article from 1999 by Jim Frederick (sadly not online, Google books preview here). In the past decade, internships have spread throughout the postindustrial sectors of the economy, so that a period of corvée labor in service to the managerial overlords in corporate America is now simply programmed into the college experience for most privileged kids. Many intern programs are a racket geared not toward democratizing economic opportunity but conserving privilege through nepotistic mechanisms, and the ones that aren't about ensuring that the children of elites meet suitable mentors are typically schemes for offloading as much menial work as possible onto unpaid laborers who have no recourse to legal protections.
Not that anyone should have been surprised by this situation. If you have ever worked in offices or even watched TV shows about working in offices, how can you be surprised that internships aren't especially educational? Or that companies are eager to gain leverage over workers however they can? Does anyone think businesses would do anything but exploit interns as long as they can get away with it? And does anyone think labor-law enforcement is anywhere close to being a priority for the series of neoliberal administrations the U.S. has endured since Reagan? Reviewers least of all should have been surprised, as most of the publications use interns unapolegetically and some of the reviewers admit to having been interns themselves. Presumably they see their own experience as an exception to the general rule of exploitation and the conservation of privilege to those with connections. Internships in general might clearly seem exploitative, eviscerating hard-earned rights for workers, but one's own internship always isn't so bad and gives a necessary leg up, lets you meet some really nice, really useful people.
But if you find internships problematic enough to hail a book like this, then you should be concerned about the collective action problem that perpetuates the system -- the individualistic ethos of interns that prompts them to participate in an obviously corrupt system so that they can get the edge on their competitors in the labor market. Such an attitude is the outcome of an extended effort by employers and universities to normalize the abuse of internships as pseudo-educational training programs (in his book, Perlin effectively traces this), which in turn is part of the larger sweep of neoliberal reform, which is intended to accustom individual workers to a dog-eat-dog capitalist world and get them to shoulder more and more social risk personally. Under neoliberalism, individuals need to live as though there is no social safety net, that they are perpetually vulnerable to bad fortune and thus accrue as much unfair advantage over others as they can as a form of private insurance. Internships are merely among the first steps in this dismal education process.
Perhaps the intended audience for the book is college students who might consider becoming an intern themselves. Such an audience would be appropriate, because the most straight-forward way to derail internship abuses would be to incite a general refusal among college-age workers to accept employment on those terms. Given the competitive realities of the labor market, however, such a turn of events seems extremely unlikely. Instead, internships acclimate those entering the workforce to the realities of at-will employment, which requires that employees perpetually exceed expectations and assume personal responsibility for developing the necessary strategies for success within the firm. To train interns in a set of procedures agreed upon in advance would undermine the whole purpose of internships for most enterprises; internships serve as a elaborate test to see if the interns can invent ways to make themselves useful. If you have to tell them how to be useful, they are not being trained in the realities of the workplace, which necessarily puts a premium on self-starting and solving problems for oneself in the course of the working day (which contributes to overall efficiency and productivity). Thus rote obedience is often less important for interns than being the sort of worker who can intuit ways to be useful through careful observation and open-ended obsequiousness. With the overriding imperative of making a good impression guiding them, successful interns master ways of sussing out the office power-brokers, sidling up to bosses, getting themselves heard in meetings despite their novitiate status, and so on. They create their own opportunities and are paid in goodwill (much like bloggers).
Perlin recognizes the place of internships in the "new spirit of capitalism" (to use Boltanski and Chiapello's phrase as he does), arguing that they represent the evolution of training programs to suit post-Fordist work conditions: "What structured training programs were to the bureaucratic firms of the mid-20th century, internships may well be to the new network capitalism of firms dealing with intangible goods." Modern production in many sectors has less to do with learning industry-specific skills that require focused apprenticeship. Instead, much of production has become social, as Tiziana Terranova explains in this talk. That is to say, production is less a matter of skilled labor than of what is sometimes called immaterial labor -- the ability to collaborate, cooperate and innovate, often in consumption processes, to enrich the symbolic value of commodities. Perlin notes that this serves to justify office internships as de facto training, no matter how menial the intern's actual jobs seem on the surface. "To an economist, the logic goes like this: if white-collar firms are increasingly organized around their intangible human or organizational capital—whether it be their brand or their 'culture of innovation'— then this is what interns have to learn, just as much as any specific skill set."
That sounds a lot like what post-autonomist theory has to say about post-Fordist labor conditions. In Grammar of the Multitude, Paolo Virno describes this shift in labor from away from skills and toward congeniality and what he calls "virtuosity" -- work as ambitiously performative. Often this is a matter of innovating your own work processes, writing your own job description (as the management gurus might put it), finding opportunities to schmooze and impress higher-ups with useful ideas. Virno:
the tasks of a worker or of a clerk no longer involve the completion of a single particular assignment, but the changing and intensifying of social cooperation.... From the beginning, one resource of capitalistic enterprise has been the so-called “misappropriation of workers’ know how.” That is to say: when workers found a way to execute their labor with less effort, taking an extra break, etc., the corporate hierarchy took advantage of this minimal victory, knowing it was happening, in order to modify the organization of labor. In my opinion, a significant change takes place when the task of the worker or of the clerk to some extent consists in actually finding, in discovering expedients, “tricks,” solutions which ameliorate the organization of labor. In the latter case, workers’ knowledge is not used on the sly but it is requested explicitly; that is to say, it becomes one of the stipulated working assignments. The same change takes place, in fact, with regards to cooperation: it is not the same thing if workers are coordinated de facto by the engineer or if they are asked to invent and produce new cooperative procedures. Instead of remaining in the background, the act of cooperating, linguistic integration, comes to the very foreground.This is essentially what internships are about. As Perlin notes, "the whole point of internships" is that the untrained worker must "come in under the radar, on the cheap, just a bright kid, and then suddenly prove yourself, become somebody worth having." Internships are precisely about teaching interns that they must train themselves and prove their fitness through quick adaptability. The point of most contemporary internships is not what sort of work gets assigned to interns and what specific training they receive from mentors but rather how quickly they learn to create productive work for themselves that their employers can then profit from. 
Many interns are tacitly expected to figure out how to go beyond what they have been explicitly been directed to do and pitch their own ideas for what they can do.
So it is that internships habituate novice employees to the conditions of the contemporary workplace, where job-specific skills are less relevant than the willingness to cooperate totally with co-workers and managers and make a show of one's perpetual availability and malleability. Much of career training for the postindustrial labor market involves learning to be a weasel, to be one of the cheese-hunting rats of the megaselling 1990s management tract Who Moved My Cheese? Interns learn to become eager self-exploiters. They accept that employers owe no such entitlements as loyalty to employees (superior employees earn favors; all workers shouldn't unfairly be granted rights uniformly); instead the constant demand for self-reinvention is inscribed as an opportunity for self-development, as an investment in one's own "human capital," though ultimately for the company's benefit.
Or to put it in Marxist terms, interns are taught to assimilate to the "general intellect," the wellspring of social productivity that Marx prophesied in the Grundrisse's "Fragment on Machines," and deploy it for capital's advantage. This manifests as an eagerness to show what one can do, to show what one is made of, to show how eager one is -- to show, in short, the quality of one's subjectivity in total and how well that total being can suit the company's needs. "Labor as subjectivity," as Virno, citing Marx, puts it. Interns are given the opportunity to prove that they are well-socialized and have the habitus of the go-getting office worker (or the chance to absorb that habitus quickly). Interns thus develop an entrepreneurial attitude toward their own capabilities and their everyday lives, scouring it all for opportunities to exploit for an employer's sake. Individuals not fortunate enough to inherit such capital are left responsible for equipping themselves with it.
But does this constitute real training? Perlin thinks it doesn't, which would make internships illegal. But enforcing that law wouldn't change the underlying conditions that require that workers adopt such a habitus, which means that the legality of internships is something of a sideshow. Whether you are an intern or a mandatory freelancer, the situation is pretty much the same -- you have to constantly hustle. As Perlin notes, the need to supply oneself with human capital (rather than have it be part of what society supplies to all its members out of an egalitarianism of opportunity) exemplifies how a broadening precarity affects the middle class, rendering its isolated members as responsible for outfitting themselves for social survival as the poor have long been. When internships fall to nonelites, the arrangement still reflects this basic structure: the intern should be willing to work for nothing because it is an investment in the future. Likewise, secondary education becomes entirely instrumental, an investment in one's future earning potential rather than an enrichment of one's human capabilities. Though skeptical of the human-capital thesis, Perlin tends to accept this view tacitly, seeing education basically as a mechanism for generating a wage premium.
Perlin's book concludes with a somewhat wide-eyed "Intern Bill of Rights" -- a series of propositions for creating a "common standard by which to evaluate and improve internships." The list seems to suggest that much of the problem with internships is semantic -- the word itself sugarcoats the reality of exploitation with educational pretenses. But why attempt to stabilize a definition of the word? It would seem more constructive to sully the term completely, along with the other euphemisms of the internship racket, like "human capital" and "situated learning," that serve only as tools of opportunistic obfuscation. Under the conditions Perlin advocates in this list, interns are part-time workers, plain and simple, and should be labeled as such. Those in positions that fail to meet the standards he elucidates are best understood as illegals, as scabs, regardless of how prestigious or entitled their upbringing has been. Fighting internships means fighting special favors, denying the personal touches, the exceptionalism of elite colleges -- it means confronting privilege head on and exposing it with contempt. But what we tend to do instead is take advantage of new tools to measure our prestige and figure out ways to leverage it into enduring privileges for ourselves and our friends.