
Thursday, February 2, 2012

Authenticity Issues and the New Intimacies (31 Jan 2012)

Tom Slee recently began posting about MIT sociologist Sherry Turkle's latest book, Alone Together. Turkle, in some ways, is the chief theorist of digital dualism; her books The Second Self (1984) and Life on the Screen (1995) helped set the terms for talking about virtual selves in cyberspace as projections of some real self that exists outside it and is deleteriously affected by these interactions. Those books are more than a little dated, but in a way that makes their arguments more striking. Just substitute Facebook for MUD in Life on the Screen; after all, what is Facebook if not a MUD in which you create and play the character of yourself?

Turkle's basic point was that computers change the people who use them (they are not neutral tools). Users begin to transfer programming metaphors to their interactions with people and psychological metaphors to the behavior of machines, and so on. This leakage between our conceptions of humans and nonhuman objects threatens, for Turkle, the integrity of the category of the human; reading her books sometimes feels like reading the anti-Donna Haraway. (I won't even try to relate Turkle to OOO.) Here's a typical declaration, from the introduction to the 20th anniversary edition of The Second Self:
we stand on the boundary between the physical and virtual. And increasingly, we stand on the boundary between worlds we understand through transparent algorithm and worlds we understand by manipulating opaque simulation.

That boundary seems very important to Turkle; her main concern often seems to be holding on to a firm definition of the "real" and bemoaning the encroachment of simulations on the preserves of genuine human experience. This deeply conservative standpoint stems from her theoretical grounding in Freud. In Alone Together she declares (in a quote Slee also highlights), "I am a psychoanalytically trained psychologist. Both by temperament and profession, I place high value on relationships of intimacy and authenticity." But what can be the basis for determining what is authentic? It can often seem arbitrary, even in Turkle's own anecdotes. (Her psychoanalytical background is surely what makes Turkle so interested in stories, the aspect of Alone Together that Slee focuses on in his post. The tendency of her examples to undermine themselves or dissolve into ambiguity is part of what he finds compelling about them.)

It often appears that Turkle is working with a preconceived, somewhat ahistorical notion of what identity and subjectivity must consist of, which leads her to take a condemnatory tone toward what her research unearths about human-machine hybridity. This tone tends to curtail the analysis -- to serve as its own conclusion. She can come across as a "What about the children?!" concern troll, deploying all sorts of rhetorical questions to try to persuade us of the psychological harm of technology's current drift. ("In all of this, there is a nagging question: Does virtual intimacy degrade our experience of the other kind and, indeed, of all encounters, of any kind?" "Are we comfortable with virtual environments that propose themselves not as places for recreation but as new worlds to live in?" "Why am I talking to a robot? and Why do I want this robot to like me?")

Turkle sometimes seems to worry that "real" identity is being thwarted by online sociality, which fosters some sort of inauthentic identity. But I think the concern with authenticity is an expression of nostalgia for a period when it was easier to believe that one had total control over the development of one's personality and that identity came from within. Networked sociality has made that much harder to sustain, and the ideological anchors for identity have also begun to change with the times (hence the legitimization of the data self). Authenticity is a pressing personal issue now for many not because it has been suddenly lost (it's always already irrecoverable) but because it has become one of the terms in the accounting system for a different form of mediated selfhood. "Authenticity" is another metric in the attention economy, measuring how believable one is to oneself in the process of broadcasting oneself. I'd expect that soon "authenticity" will be a literal metric, comparing the data trail one produces at one point in time with some earlier point to detect the degree of drift. (I know I should probably spin that out into an analysis of Lana Del Rey, but I'm thinking I'll just let that ship sail without me.)
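To make that speculation concrete, here is a purely hypothetical sketch of such a drift metric in Python. The activity categories, the snapshot counts, and the cosine measure are all invented for illustration; no platform is known to compute anything like this.

import math

def drift(earlier, current):
    # Cosine distance between two activity snapshots:
    # 0 = identical behavioral mix, values near 1 = near-total drift.
    keys = set(earlier) | set(current)
    dot = sum(earlier.get(k, 0) * current.get(k, 0) for k in keys)
    norm_e = math.sqrt(sum(v * v for v in earlier.values()))
    norm_c = math.sqrt(sum(v * v for v in current.values()))
    return 1 - dot / (norm_e * norm_c) if norm_e and norm_c else 0.0

january = {"posts": 30, "likes": 120, "checkins": 4}  # made-up counts
june = {"posts": 5, "likes": 300, "checkins": 25}
print(round(drift(january, june), 3))  # the higher, the more "inauthentic"

On such a scorecard, the joke writes itself: changing one's habits too quickly would register as a failure of self-consistency.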

In Alone Together, Turkle fuses a section about sociable robots with a section about social media usage to argue, in essence, this: social media accustom us to instrumentalized friendship, and once we are used to that, we are open to crypto-relationships with robots (the "new intimacies"), since those offer nothing more than instrumental value. Because we don't want the "drama" of reciprocal real-time sociality anyway, there is basically no difference from our point of view between relating to another person and relating to a robot. Both are merely mirrors for ourselves anyway. To a narcissist, every other person is always already a robot.

My favorite of Turkle's anecdotal subjects is "Brad," who talks about quitting Facebook and is highly articulate about the suffocating, stultifying reflexivity that social media induce. "Online life is about premeditation," he tells Turkle. This is also true of any concern for authenticity -- it involves a sort of deciding in advance what sort of spontaneous behavior to indulge in. We try to judge ourselves in terms of some ideal that is not supposed to be an ideal at all but one's natural, revealed self. But there is nothing natural about checking in with yourself on how natural you are being. Direct experience of one's authentic self is impossible once it's conceived as something that can be known in the abstract -- as something fungible, malleable, deployable -- rather than as a process, a mode of action. Assessing one's authenticity, therefore, is impossible too. It either makes no sense to ask (everything I do, I'm doing, and it is thus authentic to me by definition) or involves paradoxes of reflexivity and observer bias (every time I try to see myself doing authentic things, my self-consciousness changes my behavior). Nevertheless, social media set themselves up as (or, to be fair, are taken as) forums for authentic self-assessment -- one can't judge the authenticity of the unmediated self in real time, but one can certainly evaluate the authenticity of one's online profile or the impression others seem to have of it. That is the narcissistic trap social media set for us.

But it is a trap also to imagine one can have some sort of direct experience of others, as if you could see the "real" person outside social media. We can't access the other's consciousness; it is always an objective performance from the outside. Nobody can ever show you their "real" self.

Slee brings up one of Turkle's anecdotes that gets at a different way of viewing things outside of authenticity:
Visiting Japan in the early 1990s, Turkle heard tales of adult children who, too distant and too busy to visit their aging and infirm parents, hired actors to visit in their stead, playing the part of the adult child. What's more, the parents appreciated and enjoyed the gesture. It's slightly shocking to western sensibilities, but once we hear a little more context it becomes more understandable.
First, the actors are not (in all cases, at least) a deception: the parents recognize them for what they are. Yet the parents "enjoyed the company and played the game". In Japan, being elderly is a role, being a child is a role, and parental visits have a strong dose of ritual to them: the recital of scripts by each party. While the child may not be able to act out their role, at least this way the parent gets to enact theirs, and so to reinforce their identity as an elderly, respected person.
Traditional rituals of social interaction allow people a certain measure of ontological security with regard to their place in society and within familial networks. That relatively secure identity still had to be performed to be felt, but the performance was explicitly understood as a performance. The reality of the identity was guaranteed by the rootedness of traditions. Such role-playing doesn't fit with the ideology of individual existential freedom and the glories of unrestricted personal consumer choice and living like a stranger among strangers in urban settings and so forth. And while I certainly wouldn't want to be saddled with that sort of ritualized social life and have an identity assigned to me even more on the basis of the circumstances of my birth, I do wonder what it would be like to feel intrinsically that the basis of my identity was secure and that my "authenticity" could never be sullied by missteps of taste.

In social media there is a material basis for an alternative to ingrained tradition in anchoring identity; a networked self could have a solidity that lets the performative nature of identity operate beyond questions of genuineness or authenticity. From a resister's perspective, this all looks as odd and mechanical as the idea of sending actors to love your parents for you. But adopters can take solace in sending out their "Profile" (to use Nathan Jurgenson's term for aggregate online presence) to perform their cemented identity within various social networks. Once you accept that Facebook's data collection roots you, you are "free" to be absent from social rituals but be present nonetheless. Welcome to the new intimacy.

What's dangerous about this is not that it has ruined some previous form of intimacy that was especially precious. The problem is that we believe we construct this social-media identity autonomously and that it is therefore our responsibility, our fault if it's limited. The social-media companies have largely succeeded in persuading users of their platforms' neutrality. What we fail to see is that these new identities are no less contingent and dictated to us than the ones circumscribed by tradition; only now the constraints are imposed by for-profit companies in explicit service of gain.

Monday, January 30, 2012

Everyone's a modernist (26 Jan 2012)

Irving Howe's 1967 Commentary essay "The Culture of Modernism" (here's a gated link for all you Commentary subscribers in the audience) is the sort of thing I usually don't have much time for: a lot of fretting about nomenclature (what is modernism?), a preoccupation with literature qua literature, some contempt for the contemporary generation's aesthetic shortcomings masquerading as concern for the future of humanism, and so on — the Great Critics doing Criticism. But I found it interesting that much of what Howe argues modernists were striving for is what internet culture, in the eyes of its boosters anyway, has achieved. Howe writes, "Modernism keeps approaching — sometimes even penetrating — the limits of solipsism, the view expressed by the German poet Gottfried Benn when he writes that 'there is no outer reality, there is only human consciousness, constantly building, modifying, rebuilding new worlds out of its own creativity.' " That sounds a lot like a paean to virtuality, to humans freed from biological constraints to exist as pure (digital) expression. When critics say online sociality is solipsistic, they don't recognize that we must "penetrate" solipsism to reach some sort of apotheosis of intersubjectivity. The modernists paved the way, responding to cultural sterility (their "end of history") with unremitting commitment to innovation for its own sake. Howe cites Lukács (though it may as well have been Schumpeter), claiming that modernists are "committed to ceaseless change, turmoil and re-creation." It actually sounds a bit like neoliberal economics.

Later, Howe declares that:
In modernist literature, one finds a bitter impatience with the whole apparatus of cognition and the limiting assumption of rationality. Mind comes to be seen as an enemy of vital human powers. Culture becomes disenchanted with itself, sick over its endless refinements. There is a hunger to break past the bourgeois proprieties and self-containment of culture, toward a form of absolute personal speech, a literature deprived of ceremony and stripped to revelation.
That sort of sounds like a status update or a tweet, or a Tumblr reblog — all of which espouse expediency as a kind of sincerity. The accelerated nature of online discourse, in social media especially, lays a privileged claim to the real. The participation in the group mind of social networks allows one to move beyond the limits of individual rationality (and the outdated depth psychology that depended on it); the abolishment of privacy online permits us to discard "bourgeois proprieties."

So maybe when you sign up for Facebook, you automatically become Samuel Beckett. Social media make modernists of us all. They democratize the "genius" of modernism and make its "terrible freedom" and the smashing of the humbug of bourgeois order everyone's prerogative. We can all document the self in a spirit of uncompromising full disclosure to deal with the "problem of belief" and the crisis of authenticity in the absence of transcendental truths, and radically innovate with language and form. That is, we can build our personal brands on Facebook and tweet all day in LOLspeak.

Basically, what aggrieved the modernists in Howe's view — the crisis of identity and truth, the ceaseless striving for real expression — is what we now tend to celebrate as fun and freedom. Much as management consultants represent precarious work conditions as liberating free agency, the modernist crises of the subject become fun opportunities for self-expression, as some of the postmodernists insisted. Howe seems to conclude that the modernists were a bunch of nihilists who end up tormented by their achievements: "The lean youth has grown heavy; he chokes with the approval of the world he had dismissed; he cannot find the pure air of neglect." That is, in their search for the genuine, modernists sought the "right to be forgotten" but failed. They ended up being liked too much. It will be different for us. We have forfeited that right in advance and tally up the likes to keep score in the grand game that selfhood has become. In our world, we celebrate the quantified self. To have measured out one's life with coffee spoons is an unmitigated triumph.

Friday, August 19, 2011

Marshall McLuhan Centennial (21 July 2011)

To mark the 100th anniversary of the birth of Canadian media guru Marshall McLuhan, Megan Garber has an extensive post about his ideas at the Nieman Journalism Lab site, pointing out, somewhat cryptically, that "McLuhan’s theories seem epic and urgent and obvious all at the same time. And McLuhan himself — the teacher, the thinker, the darling of the media he both measured and mocked — seems both more relevant, and less so, than ever before." I think that means we take McLuhan's useful insights more or less for granted even as they shape the contours of the debate about the impact of mediatization. McLuhan certainly wasn't afraid to make sweeping, unsubstantiated generalizations, which definitely makes his account of history occasionally "epic," but almost unfalsifiable as well. So sometimes it seems like McLuhan is just relabeling phenomena (this is a "hot" medium, this is a "cold" one) without performing much analysis, translating things into jargon without necessarily developing arguments.

Garber notes a recent essay by Paul Ford about the media's role in imposing narratives on the flux of events and regularizing time, and points out that "If McLuhan is to be believed, the much-discussed and often-assumed human need for narrative — or, at least, our need for narrative that has explicit beginnings and endings — may be contingent rather than implicit." That is, the norms of our reading, or rather of our media consumption generally, are shaped by existing levels of technology and how that technology is assimilated socially. We don't come hardwired with a love of stories, as literary humanists sometimes insist. Narrative conventions are part of what society is always in the process of negotiating -- they are political, ideological, like just about every other kind of relation. McLuhan believed that new media forms would retribalize humanity, undoing some of the specific sorts of freedoms market society (which he links specifically to books and literacy) guaranteed and introducing different ways to construe freedom. The danger, as Garber implies, is that we will get swallowed by real time, which old media broke into manageable increments but which new media have redissolved. This opens up possibilities of deliberate disorientation and unsustainable acceleration of consumption.

Anyway, I recently read McLuhan's Understanding Media, and this is what I took away from it. The general gist is that print media support individualism and economistic rationality: "If Western literate man undergoes much dissociation of inner sensibility from his use of the alphabet, he also wins his personal freedom to dissociate himself from clan and family" (88). Literacy, in McLuhan's view, makes capitalist-style consumer markets possible: "Nonliterate societies are quite lacking in the psychic resources to create and sustain the enormous structures of statistical information that we call markets and prices.... The extreme abstraction and detachment represented by our pricing system is quite unthinkable and unusable amidst populations for whom the exciting drama of price haggling occurs with every transaction" (137). This ties in to the idea that humans must learn to be rational in an economic sense -- that such calculation is not inherent but socially constructed. Capitalist society (and its media) equips us with this form of reason during the process of subjectivation.

But the atomized, anonymized individuals of the literate world are prone to anomie, to being "massified." Subsequent media (more immersive, real-time, accelerated), by contrast, are returning culture dialectically to a more "tribal" orientation -- the "global village." We collectively try to defeat time by pursuing the instantaneousness of new media; this speed, this accelerated transience, begins to undo economism in favor of some new collectivity. "Fragmented, literate and visual individualism is not possible in an electrically patterned and imploded society" (51). So it's obvious why the P2P types and the technoutopian futurists are attracted to McLuhan, who more or less established their rhetorical mode. But McLuhan occasionally issues some warnings about the mediated future as well. This, for example, seems like a prescient critique of the attention economy and recommendation engines:

Once we have surrendered our senses and nervous systems to the private manipulation of those who would try to benefit from taking a lease on our eyes and ears and nerves, we don't really have any rights left. (68)

And later he writes, "The avid desire of mankind to prostitute itself stands up against the chaos of revolution" (189). In other words, technology will be commercialized rather than become subversive.

McLuhan claims that "the effect of electric technology had at first been anxiety. Now it appears to create boredom" (26). That is, it exacerbates the paradoxes of choice, encourages us to suspend decision making for as long as possible, since switching among a newly vast array of alternatives appears easy. But such suspension, such switching may have hidden cognitive costs, may contribute to ego depletion. He points out how technology tends to accelerate exchange, noting that, for example, "by coordinating and accelerating human meetings and goings-on, clocks increase the sheer quantity of human exchange." This seems to be a structural fit with capitalism's need to maximize exchange to maximize opportunities to realize profit. Photographs, too, create a world of "accelerated transience" (196).

He also notes that certain technologies seek to make self-service labor possible, eliminating service requirements and prompting us to take on more responsibility for ourselves as a form of progress (36). That is, technology institutes convenience as a desirable value that trumps other values; ease and efficiency make collectivity appear progressively more annoying, a social ill to be eradicated in the name of individualist freedom, the only freedom that counts.

McLuhan anticipates the rise of immaterial labor, as "commodities themselves assume more and more the character of information" -- they become signifiers, bearers of design distinctions and lifestyle accents. "As electric information levels rise, almost any kind of material will serve any kind of need or function, forcing the intellectual more and more into the role of social command and into the service of production." Hence the rise of the "creative class" and the importance of social production, building brands and meanings and distributing them authoritatively. Manufacturing becomes a pretense for information, where the real profit margins are:

At the end of the mechanical age people still imagined that press and radio and even TV were merely forms of information paid for by the makers and users of "hardware," like cars and soap and gasoline. As automation takes hold, it becomes obvious that information is the crucial commodity, and that solid products are merely incidental to information movement. The early stages by which information itself became the basic economic commodity of the electric age were obscured by the ways in which advertising and entertainment put people off the track. Advertisers pay for space and time in paper and magazine, on radio and TV; that is, they buy a piece of the reader, listener, or viewer as definitely as if they hired our homes for a public meeting. They would gladly pay the reader, listener, or viewer directly for his time and attention if they knew how to do so. The only way so far devised is to put on a free show. Movies in America have not developed advertising intervals simply because the movie itself is the greatest of all forms of advertisement for consumer goods.

McLuhan insists that "the product matters less as the audience participation increases" -- that is because that participation is the product, the manufactured good, the pretense. "Any acceptable ad is a vigorous dramatization of communal experience," McLuhan claims (228); by this I think he might mean that ads plunge us into a visceral experience of what Baudrillard calls the "code" of consumerism. McLuhan asserts that ads draw us into neotribal experiences of collectivity; I think this claim is undermined by the rise of personalization and design ideology. We collectively participate in the idea of customizing our consumer goods, but finding a unique angle on this common culture is the main avenue for hipster distinction. We craft a niche for ourselves and become anxious to isolate ourselves from others within the various constituencies brands create for themselves. Belonging to the communities facilitated by media products fosters a simultaneous urge to escape their embrace, to make one's participation singular. That is to say, media participation is as competitive as it is collaborative.

In the last chapter, McLuhan says this of the future of work:

The future of work consists of earning a living in the automation age. This is a familiar pattern in electric technology in general. It ends the old dichotomies between culture and technology, between art and commerce, and between work and leisure. Whereas in the mechanical age of fragmentation leisure had been the absence of work, or mere idleness, the reverse is true in the electric age. As the age of information demands the simultaneous use of all our faculties, we discover that we are most at leisure when we are most intensely involved, very much as with the artists in all ages.

This sounds a lot like the autonomist idea of the general intellect, which kicks in after automation becomes standard in industry. McLuhan's way of putting it: "Many people, in consequence, have begun to look on the whole of society as a single unified machine for creating wealth.... With electricity as energizer and synchronizer, all aspects of production, consumption, and organization become incidental to communications." He suggests that the only profession of the future will be teacher. We will all be teaching each other new ways to please and divert ourselves, new ways to want more things. Learning itself becomes "the principal form of production and consumption" (351). That sounds like a good thing, but one must factor in the ramifications of widespread, institutionalized narcissism, which leads us to become experts in one very particular subject: ourselves. When the alleged structural unemployment subsides, this is the sort of service economy we will be left with -- the full flowering of communicative capitalism. We are consigned by automation to an industrialized, mass-produced individuality that we must never stop blathering about.

Feedback Loops and Self-Consciousness (7 July 2011)

I tend to view reflexivity as a burden, the cost one pays for the broader freedom to shape one's own destiny that modern life has brought to people in wealthy countries. Modernity has brought mediation and mobility and a certain amount of anonymity that lets us become what we want to be, but determining what that is requires a paradoxical sort of self-knowledge -- taking active steps to become what one is supposed to inherently be. This condition is what sociologists like Giddens call ontological insecurity. I'm ceaselessly arguing that technological developments exacerbate this condition while pretending to ameliorate it, mainly because capitalism works better with insecure subjects.

So I'm pretty skeptical of the "quantified self" movement and other efforts to increase the amount of self-knowledge we are burdened with at any given moment. These seem to fundamentally split us, imposing mind/body problems on us technologically. And they also shade into self-surveillance, with the data collected on oneself made available to outside parties for purposes of social control.

I am not persuaded to think otherwise by this Wired article by Thomas Goetz praising the magic power of feedback loops, a banal commonplace that is treated here like it's some disruptive innovation. Yes, when people are given information about themselves in real time, they will generally change their behavior. In other words, self-monitoring changes the self. The observer effect holds for self-awareness. But I'm not sure the resulting changes can be regarded as automatically beneficial; that seems like naive positivism to me. And I couldn't get past my sense that the article existed ultimately to hype a bunch of tech companies and their great gifts to the world, smart meters for the self: "The feedback loop is an age-old strategy revitalized by state-of-the-art technology. As such, it is perhaps the most promising tool for behavioral change to have come along in decades." At points, Goetz's rhetoric is breathless, as when he discusses David Rose, founder of a company that makes devices designed to get users to take their medicine.

Borrowing a concept from cognitive psychology called pre-attentive processing, Rose aims for a sweet spot between these extremes, where the information is delivered unobtrusively but noticeably. The best sort of delivery device “isn’t cognitively loading at all,” he says. “It uses colors, patterns, angles, speed—visual cues that don’t distract us but remind us.” This creates what Rose calls “enchantment.” Enchanted objects, he says, don’t register as gadgets or even as technology at all, but rather as friendly tools that beguile us into action. In short, they’re magical.

Yes, very magical when technology can program us unobtrusively. Take away the benevolent aim of these particular devices and what's left is design as propaganda. How enchanting.

Goetz buys the argument that feedback loops cater to humans' innate striving and are an expression of evolution at work rather than the extension of a regime of quantification and data generation.

Evolution itself, after all, is a feedback loop, albeit one so elongated as to be imperceptible by an individual. Feedback loops are how we learn, whether we call it trial and error or course correction. In so many areas of life, we succeed when we have some sense of where we stand and some evaluation of our progress. Indeed, we tend to crave this sort of information; it’s something we viscerally want to know, good or bad. As Stanford’s Bandura put it, “People are proactive, aspiring organisms.” Feedback taps into those aspirations.

All these propositions seem ideological to me: that learning is a matter of self-monitoring, that success must be measured to be valid, that humans inherently crave confirmation of individual status, that feedback taps pre-existing aspirations rather than inculcating them. These propositions support the overriding idea that self-regulation must be put in service of facilitating competition -- the capitalist way, and the essence of the form of subjectivity assumed by neoliberalism. The meaning of our existence is to be calculated on life's great balance sheet, with feedback loops allowing us to perform the requisite accounting duties. At the same time, feedback implicitly makes us personally responsible in real time for the performance being measured. The gadgets that give us real-time feedback are part of the neoliberal imperative to shift risk onto the individual, making concrete the idea that you alone are responsible for how society is failing you. It's right there in the numbers that you need to try harder.

All of this is to say that feedback loops are mechanisms of social control that are all the more effective as they masquerade as self-regulation; they are not liberating forces bequeathed by magic technology firms to help us improve ourselves according to some transcendent goal for ourselves that we devise.

Vagaries of attention (1 July 2011)

There's an important distinction between attention and recognition, though I think we easily confuse them in speech and in practice. We seek attention when we want recognition, some sense of our worth or integrity to others. Attention is a necessary prerequisite for recognition, but it doesn't always lead to a feeling of having been recognized. My main fear about social media is that it is becoming harder to translate attention into recognition without their aid. Increasingly, attention that isn't in some way mediated seems inert, if not unsettling and creepy.

I have this feeling that people are going to become more and more wary of direct face-to-face attention because it will seem like it's wasted on them if it's not mediated, not captured somehow in social networks where it has measurable value. I imagine this playing out as a kind of fear of intimacy as it was once experienced -- private, unsharable moments will seem creepier and creepier because no one else can bear witness to their significance, translate them into social distinction. Recognition within private, unmediated spaces will go unsought; the "real you" won't be there but elsewhere, in the networks.

I have an essay up at the New Inquiry about artist Laurel Nakadate, whose work is, I think, about this emerging condition -- about becoming increasingly unavailable to attention in the moment, wholly ensconced by self-consciousness. Receiving attention in real time can't confirm anything about how you want to feel about yourself; it becomes a portal to a deeper loneliness -- the way out seems to be to mediate the experience, watch it later, transmute it into something else. In short, we are losing the ability to feel recognized in the moment, which strands us further and further away from fully inhabiting our bodies in the present. We are always elsewhere, in the cloud.

Facebook Updates and Disinformation (30 June 2011)

Sociologist Nathan Jurgenson has an interesting post about Facebook and his skepticism about proclamations of the end of privacy and anonymity. He deploys the postmodernist/poststructuralist insight that each piece of information shared raises more questions about what hasn't been said, and thus strategic sharing can create different realms of personal privacy and public mystery.
We know that knowledge, including what we post on social media, indeed follows the logic of the fan dance: we always enact a game of reveal and conceal, never showing too much else we have given it all away. It is better to entice by strategically concealing the right “bits” at the right time. For every status update there is much that is not posted. And we know this. What is hidden entices us.
I think this is missing the point. I feel like I need to use all caps to stress this: LOTS OF PEOPLE DON'T WANT ATTENTION. They don't want to be enticing. Privacy is not about hiding the truth. It's about being able to avoid the spotlight.

The people who are freaked out by Facebook are the ones who aren’t trying to create an air of mystery about themselves. They are people who don’t want additional attention, don’t want to be snooped on, and don’t want to raise more questions and interest about themselves every time they are compelled to share something or inadvertently share something online. Something as simple as RSVPing for a Facebook event can set off a chain reaction of unwanted curiosity and accidental insult if one’s not careful. But social media mores force us to make such RSVPing a public matter, because it benefits the event thrower to pad out the expected crowd, and so on. Sharing usually doesn't serve a personal agenda, even one of self-promotion. Often sharing is default exposure that helps someone else sell your attention and presence (to advertisers, etc.).

The fact that every piece of information is incomplete is precisely why people feel overexposed: it means that everything that gets shared (often against their will) invites more scrutiny into their lives. This is why they feel like they have lost their privacy. Not because perfect information about them is out there, but because the teasing bits of information circulating seem to orient the surveillance apparatus on them. And that surveillance apparatus is distributed so widely that it feels inescapable that speculative information will be produced about you and attached to your identity online for anyone to find. That is the problem, not oversharing. The end of anonymity is not about people knowing accurate things about you; it’s about enough people who know you being in the micro-gossip business to make you feel unfairly scrutinized and libeled.

Jurgenson points out correctly that “‘Publicity’ on social media needs to be understood fundamentally as an act rife also with its conceptual opposite: creativity and concealment.” It also needs to be understood that of course people who are comfortable with sharing are not exposing their authentic character—even if there were such a thing as an authentic self. The point is that they enjoy constructing that pseudo-celebrity self through social media and feel recognized when they are gossiped about and circulated. But the rest of us are being forced to play their game, on their terms, at an inherent disadvantage.

I don’t want to have to send out reams and reams of disinformation online to “protect” my privacy (which is one of the reasons I am not on Facebook anymore. I thought it was stupid that I only logged in to it to play defense). I don’t want to share only to bury things that have come to embarrass me or prompted responses I don’t like. I don’t believe I have the time or inclination to try to imagine what new creative interpretations and lacunae I can generate with my sharing so as to convey the right impression, cultivate the right sort of fascination with me. I don’t want to be “fascinating.” I don’t want to be “seductive” in the Baudrillardian sense and “create magical and enchanted interest” (to use Jurgenson’s phrase). Others may revel in that fantasy, but I don’t want to have to adopt their code if I can help it.

But it may not be tenable for me to avoid it for much longer. For instance, events I might want to go to will be publicized only through Facebook, and I will end up missing out on everything if I’m not trackable there. Everyone I know will be in the social-media circus tent, and I will have to join them.

Social media confront us with how little control we have over our public identity, which is put into play and reinterpreted and tossed around while we watch—while all the distortions and gossip get fed back to us by the automated feedback channels. Some people find this thrilling. Others find it terrible. It’s always been true that we don’t control how we are seen, but at least we could control how much we had to know about it. It’s harder now to be aloof, to be less aware of our inevitable performativity. We are forced instead to fight for the integrity of our manufactured personal brand.

Vancouver Riots (17 June 2011)

I think this photo tells us a lot about the riots in Vancouver after the local hockey team lost game 7 of the Stanley Cup finals at home.


(Image: Anthony Bolante/Reuters)

Here we have a photographer taking a picture of a bemused-looking videographer as a car burns picturesquely in the background. The destruction was devoid of political purpose, but this photo seems to convey it as a collective expression of the citizens' desire to put themselves into a recordable moment. The semi-iconic photo of a couple making out in the street as the world burns (authenticity questions -- addressed here) reinforces the impression that the riot was primarily a stage set for striking images of quirky individuals expressing their dynamism in the streets rather than challenging anything about the existing order. Rioting, we learn from these images, is mainly about getting good souvenirs of one's participation. When rioting, you should be sure to be fully yourself and make sure you take lots of pictures. (It's worth looking at these photos of the riots in Greece for a contrast. That is how "bad riots" go down in distant, less privileged places, as opposed to the party-out-of-control atmosphere here in the first world.)

That seems like a pretty good way of ultimately neutralizing the political potential of riots, not merely because it reduces the collective aspect to pretense rather than end result (solidarity is not forged but dissipated) but because it embeds street protest in the heart of a self-surveillance ideology. Writing for Maclean's, Andrew Potter makes the point that "Any proper discussion of the riot and why it occurred has to start with the recognition that rioting, especially for young men, is a huge amount of fun." Rioters just need to know that a bunch of other rioters will be around -- they need a known occasion and an accepted focal point for where to start, flash-mob style -- and then they are off: "Particular events, like Stanley Cup Game Sevens, become natural social focal points for “reliable riots” — or reliable opportunities to riot.... Once a city becomes a known focal point for rioting, then a bunch of people show up to just to riot (indeed, they will even travel great distances to do so), precisely because they know that a bunch of other people are also going to be showing up to riot."

The best way to fight that, he notes, is to use images and video of the riot, some of which can be culled from social media, to prosecute enough people for their behavior to make subsequent rioters think twice. Potter writes: "The Vancouver police are currently gathering videos and images of the rioters and crowdsourcing their identities. They won’t catch everyone, but they will probably identify enough people that it will serve as a huge deterrent to future riots."

I think there is something generally applicable about this apparent contradiction: we enjoy the perks of self-surveillance and exhibitionism as emblems of our spontaneity enough to forget how they may eventually be used against us.

Fashion and intellectual property (9 June 2011)

Some parting thoughts on fast fashion and social media: A section that was cut from that essay I wrote for n+1 about fast fashion had to do with "open innovation," or the laissez-faire attitude toward design piracy advocated by some consultants. Here's how my essay read at one point:
By skirting the borderlines of stylistic piracy, Forever 21 exemplifies so-called open innovation, cherished by consultants who argue that an enhanced and highly exploitable creativity is unleashed under a less rigorous intellectual-property regime. A 2005 tract from the Norman Lear Center (pdf), a think tank, approvingly described how “the past is constantly being plundered for ‘new’ ideas. Stylistic elements are routinely appropriated from the most unlikely places -- Polynesian islands, urban street corners, stock-car races, bowling alleys -- and transformed into new trends. In fashion, nearly every design element is available to anyone for the taking. Any fashion design, one might say, is ‘ready to share.’ ”
Of course, share is the operative word for another burgeoning business. Facebook and other social-media companies have a similarly parasitic business model that depends on appropriating freely shared material and repurposing it as data for marketers. Just as Forever 21 pushes the boundaries of intellectual property, Facebook continually oversteps established norms of privacy, opting users into data-divulging mechanisms by default and backpedalling only when confronted with public outcry.
There are a lot of quotable passages from that "Ready to Share" paper about fostering an "ecology of creativity" and fomenting a "churning tide of innovation" that leads to hyperefficiency and so on, all of which double as useful rationalizations of enclosed communication channels like Facebook. My title for the essay was more or less inspired by this line: "We believe that the styles of creative bricolage exemplified by fashion and new digital environments embody a new grand narrative for creativity, born of ancient tradition." I, on the other hand, believe that the new "grand narrative for creativity" is the current ideological alibi for subsuming everyday life and sociality to capital. Social media purport to enable creativity when they are just appropriating it.

I was reminded of this lost material by this post by GMU economics PhD candidate Eli Dourado, who refers to IP restrictions in fashion as a "tragedy of the anticommons," following legal scholar Michael Heller (jstor). This is what happens when "multiple owners are each endowed with the right to exclude others from a scarce resource, and no one has an effective privilege of use." Dourado points out that fashion "is entirely about signaling. Inframarginally, signaling generates information and serves a useful social function, but at the margin, it’s better if fewer resources go into signaling. For instance, if you impose a tax on the signal that causes everyone to signal half as much, information is preserved and the status of every individual remains the same, but fewer resources are consumed." In other words (as I understand it), the fashion industry seeks to promote inefficiency in signaling, with lots of redundancy and confusion and overdetermination and interpretation problems, generating "information" that no longer serves a "useful social function" but instead drains resources from other uses and binds them to endless acts of decoding. The solution, Dourado suggests, is to stop rewarding fashion innovation by halting IP protection in the industry. I don't think that will work, as the utter lack of intellectual property rights has done nothing to stem the flow of "sharing" in social networks, though admittedly sharers don't perceive their self-fashioning explicitly as innovation (yet).
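A toy numerical illustration of that signaling argument (my own construction, not Dourado's or Heller's formal model): if status is purely ordinal, a tax that halves every outlay leaves the ranking -- and thus the "information" -- intact while halving the resources burned.

spend = {"A": 100, "B": 60, "C": 30}  # hypothetical signaling outlays

def status_ranking(outlays):
    # Status is relative: only who outspends whom carries information.
    return sorted(outlays, key=outlays.get, reverse=True)

taxed = {name: s / 2 for name, s in spend.items()}  # the tax halves signals

assert status_ranking(spend) == status_ranking(taxed)  # same information
print(sum(spend.values()), "->", sum(taxed.values()))  # 190 -> 95.0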

Also cut from my n+1 essay was a bunch of material from sociologist Gilles Lipovetsky's book The Empire of Fashion, which I think lays the groundwork for an argument that connects the dissemination of fashion with emergent neoliberal subjectivity. He basically argues that fashion accustoms us to accept constant change as freedom of expression rather than insecurity, as providing opportunities for creativity rather than conformism. Of course, it is neither one nor the other of these things, but both simultaneously. One of capitalism's great psychological coups is that it allows us to be creative conformists.

Anyway, this is what I originally had drawing on Lipovetsky:
"The consciousness of being an individual with a particular destiny, the will to express a unique identity, the cultural celebration of personal identity were 'productive forces,' the very driving forces of the mutability of fashion,” Lipovetsky argues in The Empire of Fashion. Tracing the development of couture, Lipovetsky claims that "what formerly appeared as signs of class and social hierarchy had a tendency to become increasingly, although not exclusively, psychological signs, expressions of a soul or personality." This allowed fashion to corner the market on giving ordinary people opportunities for, in the words of haute couturier Marc Bohan, "the renewal of their psychological makeup." The promised transformational potential makes fashion, as Lipovetsky notes, "the first major mechanism for the consistent social production of personality" -- that is, our first reflexive sense of self, set in terms of those constantly shifting social meanings, an identity not foisted upon us by birth and tradition but one for which we must hold ourselves personally responsible.
As fashion strays from its role in expressing established hierarchies, it becomes a form of institutionalized insecurity, laundered by the personal expression and individualism it appears to authorize. It yokes us all to the zeitgeist, eradicating the orienting effects of tradition and leaving us all more vulnerable to existential doubt. What Lipovetsky tends to call a “right to personalization” is at once also an ontological burden, the emergence of a permanent identity crisis.
If we don’t have the right to a self simply by virtue of existing, then how do we justify our conviction that we are somebody? How do we prove it to the world? Lipovetsky argues that people respond to fashion’s destabilization of the self by embracing fashion more thoroughly. Having “generalized the spirit of curiosity, democratized tastes and the passion for novelty at all levels of existence and in all social ranks,” he argues, “the fashion economy has engendered a social agent in its own image: the fashion person who has no deep attachments, a mobile individual with a fluctuating personality and tastes.”
Lipovetsky’s “fashion person” is a precursor for the social-media enabled personal brand. A cursory glance at Facebook reveals all sorts of “fashion people” harbored there. We have to watch ourselves become ourselves in order to be ourselves, over and over again. This futile process crystallizes in the irrepressible ideal of youth, the time when all that reflexivity seemed like second nature, was authenticity itself. As Lipovetsky notes: "The exaltation of the youthful look is ... inseparable from the modern democratic individualist age whose logic it carries to its narcissistic conclusion. All individuals are in fact urged to work on their own personal images, to adopt, to keep fit, to recycle themselves. The cult of youth and the cult of the body go hand in hand; they require the same constant self-scrutiny, the same narcissistic self-surveillance, the same need for information, and the same adaptation to novelty."
For me, that is a pretty succinct description of what social media are for, preserving the illusion of youth in a space that doesn't countenance the past or the future but only the now.

Subgrouping in Social Networks (27 May 2011)

In this SAI item about "the end of the social network era," Jason Schwartz claims that people want to continue sharing as much as possible automatically; they just want to share it only with select friends:
When people talk about privacy concerns around check-ins, it's not that they don't want ANYONE to know where they are, they don't want EVERYONE to know where they are. The problem is the lack of tools to create the dynamic, intimate groups of people they would be comfortable sharing with. Once that problem is solved, this space will see a huge uptick from mainstream users.
Facebook has quite a few tools for managing groups of friends. These fail because they rely on the user to manually curate these groups. Users won’t do the manual work necessary to make a Social Circle work, just like they won't be selective with whom they friend on a check-in service.

It's no surprise that people don't seem to bother with Facebook's group controls very much. It runs counter to the initial attractiveness of social networking -- that everyone from your life ends up there -- to then turn around and use it as an awkward means of negotiating the various levels of intimacy that social networks compress. The novelty of using social networks is in that flattening, perhaps. Using Facebook is a temporary escape from the negotiations and anxieties over who is in and out: the interface presents itself as a way to control it all and permit us to consume sociality on strictly individualistic terms. But once you start having to make decisions about who belongs where and who gets to see what, you get into the trickier turf of reciprocity -- every update restricted to a certain audience then becomes an implied insult to those on the outside. Restricting messages to a private audience is no big deal in real life; it happens all the time, an ephemeral moment of choice that isn't archived forever online as a provable potential slight.

Schwartz then claims that technology must now solve the problem of people being too lazy to form smaller groups within the larger group of contacts they assemble in online networks: some clever startup must figure out a way to balance our desire for as many tokens of attention as possible from the widest set of contacts with our somewhat vestigial privacy concerns. But it seems to me that certain properties of online space militate against forming "intimate groups," no matter how dynamically or automatically a service can constitute them. (Incidentally, nothing bothers me more than when Gmail tells me who I should add to an email. Stop telling me who to talk to! Stop trying to get me to run my private email like I am a corporate middle manager negotiating different workgroups!) Social networks serve as archives and scoreboards more than private drawing rooms; the time-space of exclusivity runs against the medium's accumulative nature. That is to say, social media are by definition for broadcasting, not for the sort of communication that sustains intimacy.

I can't be alone in regarding virtually everything on social media as purely informational or promotional, akin to Christmas-card-letter copy that has no emotional valence whatsoever. Unless it is a direct and private message, communication through these networks assimilates me to an implied mass audience as I consume it. Evoking a feeling of belonging is not part of the circuit. I get the sense that all social media discourse merely serves to reinforce the significance of what is absent, what is deliberately held back. And that material must be kept offline to retain its aura, as everyone seems to know by now that anything mediated online is always already as good as broadcast. The "social circle" is going to be defined specifically by face-to-face encounters and the communication that takes place during them. If you are not getting facetime, you are not in the social circle, no matter how much of someone's social media you are privy to.


Celebritization and Resistance (26 May 2011)

As I inevitably tend to view social media through the lens of my personal experience, I tend to degender it and talk about it in the asexual terms of corporate branding, with adults pursuing their attentional accumulation and personal affirmations like canny entrepreneurs, deploying a kind of abstracted informational commodity in tweets and updates and links and so on. But clearly the imperative of self-exploitation strikes us all unevenly, and it's conditioned by gender and race and class and social capital and innumerable other things that would be impossible to totalize. Because they quantify attention, social media can make it seem as though attention is an abstract commodity, like money, uniform and commensurate, but that masks the fact that the different forms of attention available have different consequences for subjectivity. It's one thing to have, say, your blog post on Spinoza tweeted; quite another to have your picture reblogged on Fuck Yeah Self-Shooters. And it is not as though the sort of attention we can seek is some entirely autonomous choice. What is available, or what we know is possible or attainable, is highly conditioned by where we are already situated.

If there is any commonality among the various snares of the attention economy, it may be in an underlying reflexivity, an awareness of identity as an alienated thing rather than a lived-in spontaneity. Reflexivity introduces absence into any sense of presence, an awareness of audiences not directly accounted for which nonetheless must be played to. Still, the different ways in which we are enticed to seek attention, the different audiences we are brought to suspect are in our reach, require different forms of resistance, different precommitments perhaps. So while I feel myself being sucked into a personal-branding spiral, women face a temptation toward sexualized modes of attention-seeking that I don't experience.

So the online objectification and fetishization of young women, the generalized idea of their availability, drives a disproportionate slice of the online attention economy, authenticating a certain kind of attention's value and rooting vicariousness in a subject position that's simultaneously powerful and vulnerable. I imagine it as an alluring opportunity to become money, as fungible and fecund and purposeful as money can be, though it is hard to anticipate what it must be like to be spent. Here's how danah boyd put it in an essay that looks at the attention economy, "celebritization" and its consequences mainly for teenage girls.
Celebrity becomes a correlate to a perfect life -- money, designer clothes, and adulthood. What being a ‘celebrity’ means is discarded; fame is an end to itself with the assumption that fame equals all things awesome despite all the copious examples to the contrary. So teens only hold on to the positive aspects, hoping for the benefits of becoming famous and ignoring the consequences.

boyd is writing in response to this Rolling Stone profile by Sabrina Rubin Erdely of a teenager, Kiki Kannibal, who used MySpace to achieve notoriety but as boyd puts it, "lacked the resources to handle the onslaught and never made it big enough to recoup the ground she lost to weather the fame." Because Kiki uses her sexuality for attention but is not sanctioned through a big-media-supported alibi, she is "under attack."

Rubin Erdely explains why Kiki Kannibal doesn't simply go offline:

She can't go offline. One reason is practical: Kiki has a business to run. But the other reason is more existential: If she were to go offline, her link to the world would disappear. This is a girl with 12,000 Twitter followers whose actual life is empty of real relationships. She's trapped in suburban isolation; outside the bubble of her family, her most meaningful interactions are electronic. In real life, she's lost.

The implication seems to be that her pursuit of celebrity has precluded her ability to generate local ties of affection. Learning how to market oneself online is a different skill from learning how to maintain friendships. boyd suggests that girls get trapped in such dynamics because "fame is a toxic substance," noting that "when the attention is good, it’s really good and it feels really good. And when the attention fades, people can feel lonely and anxious, desperate for more, even if it’s negative attention." I wonder if that isn't pathologizing the desire for celebrity too much, pinning it on personal psychology rather than the structure of the new media form that amplifies these tendencies into potentially destructive compulsions. Social media seem to systematically efface the possibility that the desire for fame might not be universal; they do a poor job of allowing people to calibrate their exposure, which is always theoretically infinite despite whatever temporary barrier privacy settings erect. (Anyone within the network can liberate the private information, which in any event belongs to the social media company, not the user.) In other words, I agree with boyd when she writes, "it’s high time that we start reflecting on the societal values that are getting magnified by them."

Maybe this is a bit hyperbolic, but this Vanity Fair story by Amy Fine Collins about prostitution and human trafficking seems the grim logical extension of what the Rolling Stone story describes. The self-coercion boyd mentions with regard to celebritization is arguably on a continuum (albeit a very, very long one) with the horrors described in the Vanity Fair piece, which would make certain Internet users into the equivalent of johns. A lawyer Fine Collins interviewed for the VF story says, “Johns don’t understand what they’re contributing to. It never occurs to them that the woman who is smiling is being abused. They really don’t know what’s going on—and they don’t care." Internet use sometimes seems to invite that kind of indifference, if the manifold examples of trolling, bullying, voyeurism, and generalized disinhibition are any testimony. (The Rolling Stone article is about these tendencies as much as it is about the drive for microcelebrity.) The governing ideology seems to be that all the information online has been voluntarily provided and is thus ultimately fair game, as if there were no gray areas, no shifts in context, no possible regrets on the part of the volunteer that should check a consumer's impulse to enjoy it on whatever level one chooses. Social media's immediacy can make it seem as though this sort of heedless entertainment is an entitlement; the underlying implication is that women are naive if they think otherwise. The consumption of microcelebrity is not governed by the norms of friendship, despite social media's liberal usage of its terminology.

Retracting the Concept of the Individual (17 May 2011)

There seems to be an object lesson in this O'Reilly article by Jen Webb about Openmargin, an iPad application in development that will force users to share their marginalia in one giant shared space. Webb asks the company co-founder, "Is there an option to notate only for personal use (i.e. notes for a class)?" And his response strikes me as deeply troubling:
Joep Kuijper: There are no personal groups. All the readers of one book are the group. Or even more specific: the readers around one sentence are a group. This also means you're not in a dialogue with friends, but with peers you've probably never met before. We think this is the interesting thing about Openmargin. It's an implicit network where the relationships are based on the specifics in a text. And your relationships develop and grow along with your reading habits.

It seems like he deliberately misses the point, or his response has been edited to make him miss the point. Instead the ideology of compulsive sharing is reinscribed. What possible reason could you have for not sharing all your thoughts so random strangers (and Openmargin) can profit from them? That would be unthinkable selfishness, and he won't even entertain the possibility. And the specificity of the context of a particular book (or even sentence, as he notes) makes the sharing, the mandatory dialogue, seem all the more necessary. Would a note function be useful for one's impromptu, half-formed reactions to a text? Maybe, but every note you write on Openmargin must be ready for the world; those are the terms on which they are willing to provide the server space and the facilitating programming. They have learned from Facebook: "community" is their product, apparently, more than the service itself. Community was so thoroughly eviscerated by capitalism that now capitalism has come to rehabilitate it on its own terms. This is just one example: capitalism is solving the problem of alienated isolation it created by supplying more capitalism. Which makes one wonder whether the contradictions capitalism generates will ever be enough to undermine it.

Thursday, August 18, 2011

Rent to Pwn (22 April 2011)

I am all for people buying less stuff and sharing more of their stuff with people. I think it is great when people figure out how to give stuff they are getting rid of to people who want to use it. But I am fairly skeptical of "collaborative consumption," which wants to monetize these impulses in new tech startup companies. A current Fast Company article by Danielle Sacks covers this emerging sector -- what Silicon Valley investor types apparently call "underused asset utilization" ventures -- which includes car sharing and other co-op-like schemes, peer-to-peer rental services, and eBayish resale sites. There is so much scary dystopian potential in what Sacks reports that I almost choked on the omelet I was eating while reading it (at which point it occurred to me that if I coughed it up, maybe I could re-sell it to an underfed peer).

"Underused asset utilization" strikes me as a far more honest term than "sharing," and it certainly sheds a new light on what Facebook is doing: getting us to conceive of ourselves, our social lives and our identity as "underused" assets to exploit. (My web-surfing history is really too valuable to keep to myself -- and really, how can I monetize my friends? What else are they good for? Are they even my friends if I can't profit from them?) The governing principe with all these ventures is to use the deeper mediation of people's personal lives (through smartphones and social media) to get them to behave even more like little capitalist firms. Consider the way Sharable founder Neal Gorenflo talks about his own life:

One afternoon, after a jog through the parking lot of his Brussels hotel, he quit his job. Since then, Gorenflo has deconstructed every aspect of his personal and working life, "removing all the things that don't add value and concentrating on the things that deliver value." Andrea made the cut -- she's now his wife.

Is there anything more to life than adding value? Not from this point of view. "Sharing" might evoke images of potlatch and competitive altruism, but in the world of underused asset utilization, sharing has nothing to do with gifts and everything to do with efficiency, with bringing the rationalized use of capital resources to every single nook and cranny of one's life, even if one had no intention of being an entrepreneur or a capitalist. The backers of these ventures imagine that the purpose of the social web and "connections" is merely to make self-entrepreneurializing easier, near automatic. So when you are about to "underuse" some "asset" in your everyday life, your smartphone can intervene to rent it to someone else (and let some Silicon Valley venture capitalists get their cut). "I'm looking at virtually every resource and finding ways to extract additional value or productivity from it, from food to gardens to skill sharing," says one investor. That sounds awesome, a real recipe for joy. (It reminds me of Marx's "Accumulate, accumulate! That is Moses and the prophets!")

Gorenflo tells Sacks, "Business has spent centuries making buying really easy. We're just at the beginning of making sharing easy." This had me confused at first, because I don't really get how firms can be capitalist if they are not seeking profits through selling things. This confusion could prompt a person to think that the "sharing economy" is some sort of postcapitalist potentiality -- as when Rachel Botsman, the co-author of a book about "collaborative consumption," declares, "This could be as big as the Industrial Revolution in the way we think about ownership." But these companies are not out to usher in the end of private-property rights. Botsman says that she realized, "I just can't help companies sell more stuff," but that doesn't mean she's looking to decrease the volume of capitalist exchanges or slow the velocity of commerce. It means selling services rather than stuff, brokering the exchange of existing goods between parties, like a pawn-shop owner. The for-profit sharing companies are still operating like capitalist companies; only now they are "platforms" that root exploitation deeper into the lives of the independent contractors (i.e., the customers they "serve") whom they use to generate work product.

The part of the article that struck me as most depressing, though, is the development of private "reputation companies" that hope to create a "trust rating" similar to a credit rating, based on one's online behavior, the "data exhaust" one's mediatized activity generates.
The challenge that worries everyone in the sharing world, of course, is trust. It's one thing to believe that a knitter on Etsy will mail you that crocheted beret. It's another to let a stranger sleep in your home or borrow your second-most-expensive asset, your car. "Sharing of the kind we're talking about really only works when there's reputation involved," says Freestyle's Felser.... Almost all (including AirBnB) require profiles for both parties and feature a community ratings system. But these ratings would carry far more weight if they traveled with you across the web, so that your eBay reputation helped inform your standing on AirBnB. Startups like TrustCloud would like to become the portable reputation system for the web. The company is building an algorithm to collect (if you choose to opt in) your online "data exhaust" -- the trail you leave as you engage with others on Facebook, LinkedIn, Twitter, commentary-filled sites like TripAdvisor, and beyond -- and calculate your reliability, consistency, and responsiveness. The result would be a contextual badge you'd carry to any website, a trust rating similar to the credit rating you have in the offline world.
By all means, let's automate trust. Who doesn't love the arbitrariness of credit ratings and the way they rob one of a sense of autonomy? Yes! Let's extend that principle more generally, and let computers assign a number to the quality of our ethical character overall, based on how much garbage we look at online! That is a beautiful idea. Who wouldn't "choose to opt in"? (As if opting out won't come to signify having something to hide if, god forbid, these systems catch on -- you already can't really opt out of Facebook and belong socially in the accepted way in the U.S.) Let's let an algorithm score our cultural capital and make that number known so people can more efficiently judge whether it is worth their time to interact with us. Splendid! How efficient will my "sharing" and appropriating become then, when I know who the losers are after a glance at a spreadsheet?
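For concreteness, here is roughly what such a portable score might reduce to. This is a toy sketch: every signal name, weight, and formula below is my own invention, since the article says nothing about how TrustCloud would actually compute anything.

# Toy sketch of a "portable trust score." All signals, weights, and the
# formula are hypothetical; the article only says the algorithm combines
# "reliability, consistency, and responsiveness" from cross-site data exhaust.
from dataclasses import dataclass

@dataclass
class PlatformSignals:
    platform: str          # e.g. "eBay", "AirBnB"
    reliability: float     # 0.0-1.0, say, share of transactions rated positive
    consistency: float     # 0.0-1.0, say, regularity of activity over time
    responsiveness: float  # 0.0-1.0, say, how quickly you answer messages

WEIGHTS = {"reliability": 0.5, "consistency": 0.3, "responsiveness": 0.2}

def trust_score(history):
    """Collapse per-platform signals into one portable 0-100 number."""
    if not history:
        return 0.0
    per_platform = [sum(w * getattr(s, name) for name, w in WEIGHTS.items())
                    for s in history]
    return round(100 * sum(per_platform) / len(per_platform), 1)

score = trust_score([
    PlatformSignals("eBay", reliability=0.97, consistency=0.8, responsiveness=0.6),
    PlatformSignals("AirBnB", reliability=0.9, consistency=0.5, responsiveness=0.9),
])
print(score)  # one opaque number (about 81) to carry to every website

Note how little the number can carry: three arbitrary weights and whatever proxies the company happens to log, rounded into one figure that then stands in for your "ethical character" everywhere you go.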

I guess this is why Amazon is constantly pestering me to "rate my transactions." They can't leave me alone and let me have my book; they want me to volunteer my feelings to grease the wheels of their distribution and rent-extracting mechanisms. I don't want to help build reputations; I want to reserve the trust that my minuscule contribution to this world can build for something better. I don't want to be part of a panopticon that is deemed socially necessary to keep people from cheating one another under a generalized Hobbesian regime of hypercapitalism in which every single gesture and every single thing I think and do is theoretically for sale and is thus a theoretical cheat. Offshoring "reputation management" to private companies seems like a terrible way to build general trust in a society, as does assigning people a numeric rating. These are indications that trust doesn't generally exist and shouldn't be expected of one another. It's a neighborhood-watch society where people are kept in line only through fear, not sympathy.

One of the positive aspects of markets (and possibly the whole point of trust-building between parties) is that they can facilitate anonymous exchange and foster privacy -- granting a modicum of autonomy to getting and spending and owning. What you buy and collect is nobody else's business. But to the sharing czars, it's necessarily everybody's business -- that's where their profit opportunity lies: in having an itemized list of your stuff. Or even your opinions. In a world where value is primarily created socially at the level of affects and signifiers and brands and so on, privacy makes you into an "underused asset." And Silicon Valley cannot let that stand.

Twitter As Scoreboard, As Anomie (7 April 2011)

The Browser, an aggregator that pretty much dictates my online reading at this point, linked to this essay by Keith Lee about Twitter as gamification in action:
The Twitter system encourages following, re-tweeting and the like because it functions as a scoring system for the ego of the user. So it’s really no surprise that Twitter is generally viewed as a sort of “Happysphere” – it’s innate to the functioning of the system. Users encourage other users as part of the positive feedback loop in order to increase their own engagement statistics and push up their number of Re-Tweets, followers, etc. So despite that nothing is actually being accomplished, it generates a feeling of accomplishment.
I agree with that in principle, but my feelings are complicated by the fact that I found the link to this article on Twitter (and then promptly retweeted it). On Twitter, one can find useful information because users are incentivized to provide it, but playing on those incentives means reinforcing petty ego-scorekeeping and recessive reflexivity among those users. So is it a net win or a loss for society?
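Lee's "positive feedback loop" is simple enough to render as a toy model. The sketch below is entirely my own construction (the users, probabilities, and round counts are arbitrary); it is only meant to show how reciprocal attention inflates everyone's scores while producing nothing.

# Toy model of the reciprocity loop Lee describes: each round, users with
# better visible stats attract more reciprocal follow-backs, so the counters
# compound without any content being produced. All numbers are assumptions.
import random

random.seed(1)
followers = {user: 10 for user in "ABCDE"}  # five users, identical starts

for _ in range(200):
    for user in followers:
        # the bigger your count, the likelier another reciprocal follow-back
        if random.random() < followers[user] / (followers[user] + 100):
            followers[user] += 1

print(followers)  # identical starts, yet the counts drift apart and compound

Every counter goes up every round, generating the "feeling of accomplishment" Lee mentions, even though nothing has been accomplished.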

Gamification reminds me of the "hedonic treadmill," a trope in critiques of consumerism: the point is that consumers never actually achieve satisfaction through consumerism even though that is often what ideology (or "common sense") leads us to expect; instead we perpetually chase novelty and status, which turns ordinary goods into positional goods -- bearers not of utility but distinction or symbolic meaning, emblems of the scarcity pursued when sustenance is taken for granted. As Baudrillard's early work argues repeatedly, consumerism as a system turns our market behavior into an inescapable language that more or less speaks us as we speak it. It prompts us to leave use value behind and adopt an attitude of perpetual scarcity -- we don't have enough status, or we don't have enough attention to give, or we don't receive enough attention, or we don't have enough time, etc. The scoreboard mentality (which I wrote about in this 2004 column about competitive thrift-store shopping, and again in this response to an article about "shopping hackers") sustains this feeling of scarcity, inverts it, allows us to harness it as motivation.

It seems like gamification can quickly become a disguise for anomie, a ruse to make the hedonic treadmill livable. In a recent post, Will Davies cited this passage from Durkheim's Suicide, which offers a grim definition of anomie:
Inextinguishable thirst is constantly renewed torture. It has been claimed, indeed, that human activity naturally aspires beyond assignable limits and sets itself unattainable goals. But how can such an undetermined state be any more reconciled with the conditions of mental life than with the demands of physical life? All man's pleasure in acting, moving and exerting himself implies the sense that his efforts are not in vain and that by walking he has advanced. However, one does not advance when one walks toward no goal, or -- which is the same thing -- when his goal is infinity. Since the distance between us and it is always the same, whatever road we take, we might as well have made the motions without progress from the spot. Even our glances behind and our feeling of pride at the distance covered can cause only deceptive satisfaction, since the remaining distance is not proportionately reduced. To pursue a goal which is by definition unattainable is to condemn oneself to a state of perpetual unhappiness.
Using Twitter may condemn one to a state of perpetual unhappiness for similar reasons. If we inject ourselves into the infinite pool of information with the hubristic notion that we will help organize it all, we're doomed, as our contributions only add to the muddle, to the heap of notions to be processed. If we think we are going to keep up with all the feeds worth following, we're doomed too. And precisely because we know that Twitter is often useful, we're lured into diving into the whirlpool.

Deeper Into Commodification (29 March 2011)

Reihan Salam has some criticism of the ideas in a few of my recent posts here, in which he points out that as insecure and isolating as conditions may be under the new capitalism, they were worse before, in the Fordist era. He highlights particularly the "dramatic advances made by women in educational attainment and market production" under the neoliberal order. He suspects that I might be nostalgic for the old order of things (odd that a post at the National Review would seem to imply that I am too conservative!) and argues that Fordist conditions were a historical anomaly rather than a useful point of comparison.

I think I definitely misplaced my emphasis if I come across as advocating a return to 1950s social conditions. Obviously I have reservations about how social relations are being shaped by conditions today, but I don't think we can or should try to turn back the clock. I think Salam is right that in many ways what is happening with the "proliferation of small-units, e.g., small firms and cultural niches" is a re-emergence of some of capitalism's early dynamism, providing opportunities that many experience as wholly beneficial. But whereas Salam writes that he would argue "that the collectivity and security we’ve lost is being replaced by new forms of collectivity and security that are preferable in many important respects," I think those new forms are destabilizing and subtly coercive. I don't think the old forms are preferable; I guess I keep writing about these ideas because I believe that there are new forms that aren't being realized, that may be interdicted by the necessities of a capitalist organization of the economy. That is, the new forms of social relations offer certain freedoms at the cost of having to commodify oneself at a deeper level than wage slavery required, with more of everyday life subsumed into capitalism, made businesslike, subject to its procedures of rational calculation. Some of what stems from that is good: At times work is more harmonious with one's overall life, at times it feels good to have the market assess the quality and extent of one's sociality, at times the flexibility increasingly expected of us prompts us to be and feel more creative, at times our self-consciousness leads to a rewarding consideration of what other people are thinking, and not just what they are thinking of us.

But the negative aspects are inseparable from those positive aspects. Richard Sennett outlines some of those negative aspects in The Culture of New Capitalism, which I wrote about here. I think social media exacerbate both the positive and negative aspects, which is a pretty nebulous position, I know, but I try to disguise it by writing mainly about the negative things. Social media seem to promise a respite from the pressures of life under neoliberalism, but they end up serving mainly as a vector for those same pressures. That suggests a missed opportunity to have structured an emerging phenomenon differently.

Salam suggests that I am someone whom he "disagree[s] with as completely and comprehensively as one can disagree with another human," but it seems we probably agree that economist Albert Hirschman is very much worth reading. Salam links to his paper "Rival Interpretations of Market Society: Civilizing, Destructive, or Feeble?" (sadly a gated link) which I also highly recommend and which amply illustrates that there's nothing new in disagreeing about the predominant social and psychological effects of markets, whether they are civilizing, integrative, alienating, etc. Hirschman writes,

For capitalism to be both self-reinforcing and self-undermining is not any more "contradictory" than for a business firm to have income and outgo at the same time! Insofar as social cohesion is concerned, for example, the constant practice of commercial transactions generates feelings of trust, empathy for others, and similar doux feelings; but on the other hand, as Montesquieu already knew, such practice permeates all spheres of life with the element of calculation and of instrumental reason. Once this view is adopted, the moral basis of capitalist society will be seen as being constantly depleted and replenished at the same time.

That seems right to me. Social media represent new threats to and possibilities for the perceived moral basis of capitalism. I emphasize the threats because I tend to think that undermining the moral basis of capitalism might lead to a better economic system (as I think the downside of omnipresent instrumental reason outweighs the gains from doux commerce), though I am wary too of lapsing into a sort of "heighten the contradictions" approach. I think social media and personal branding and so on reveal what has been true of capitalism all along, but in muted, and possibly more tolerable, forms. If capitalist ideology assimilates social media, another opportunity for fundamental change will be lost and capitalistic subjectivity will be entrenched at an even more intimate level; with our social lives more thoroughly organized along capitalistic lines, there will be fewer places within everyday life from which it is possible to imagine an alternative.

More on the Future of Work (23 March 2011)

Following up on the themes of my previous post (and untold dozens of others -- maybe I should start using a tagging system?): the P2P Foundation blog linked to a write-up (pdf) of a round table symposium put on by the Aspen Institute on the future of work. Among the participants were analysts from McKinsey and Deloitte and executives from Microsoft, IBM and Infosys, along with a bunch of other Silicon Valley think-tank types. It delineates how business elites would like to put across increasing precarity for workers as "freedom" and details some of the ideological resistance they expect to encounter.

The participants identify the emergence of the "post-Sloanist" worker (that is, the worker who exceeds the constraints of 20th century "scientific management" and organization engineering):

Every worker will have to become a continuous learner, he said, and will likely hold multiple jobs over the course of his or her lifetime, if not multiple careers. Many workers will need to work at part-time jobs and perhaps hold down multiple jobs simultaneously, he added. The ability to multitask and deal with interruptions to work will become mandatory skills.

And eventually, ADD will cease to be regarded as a disorder and will instead be an institutionalized educational outcome. The report continues, "a great deal of work is likely to become less routine and more exception-based, especially in knowledge-based jobs." It will be reactive; workers will perform triage as information pours in rather than initiate work processes.

Lots of the round table's conclusions are similar to ideas I've been harping on in previous posts about how changes in how we regard work augment the mounting stressfulness of neoliberal subjectivity. To list some of the changes: work organization is becoming less directed and hierarchical and more a matter of "crowdsourcing" and post hoc capture; the fixed workplace is being dispersed into a "social factory"; firms are being supplanted by "platforms" (think Facebook); work and leisure are becoming indistinguishable; identity construction is merging with work; consumption skills are increasingly regarded as productive and innovative in their own right and are being incorporated into manufacturing processes; and what constitutes work skills is becoming more nebulous and hazily linked to education. Now, some of the participants suggested, the importance of discrete skills is being supplanted by the significance of "disposition" -- a rough analogue for what sociologist Pierre Bourdieu called habitus.

Employers must recognize that they are not just hiring a set of skills, they are hiring people based on their personal temperaments. “In a world of continual and rapid change, maybe the most important things are dispositions that allow you to embrace change,” said John Seely Brown, Independent Co-Chairman of the Deloitte Center for the Edge.... “You can’t teach dispositions,” said Brown. “You cultivate them.” Employers cannot simply communicate information to workers; they must provide a hospitable, immersive environment for workers to satisfy their dispositions and talents.

So we are enjoined to always be cultivating an identity that will justify our worth to employers, and this perpetual process of becoming our better selves is expected to constantly throw off value. The disposition becomes our job; producing ourselves becomes mandatory, and linked to our economic survival. As a result, being "yourself" has never been more stressful. "Workers will regard their work lives as an experience, a lifestyle and an identity—not just a paycheck," the report suggests, and that is supposed to be a good thing: For some in the creative-class vanguard, work seems to be increasingly unalienated (these people are entrepreneurs of their personal brand and reaping a livelihood from it) and indistinguishable from the process of just living life. And this may be the model of the workplace of the future generally. But this sort of "freedom" comes at the expense of other forms, mainly the freedom from having to capitalize on every aspect of one's selfhood.

To paraphrase from Baudrillard's The Consumer Society, which repeatedly makes the point that our needs are not autonomous and not our own invention, the need to engage in conspicuous self-fashioning has become systemic; it's no longer the product of what individuals actually want, if it ever was so. Identity has been made productive in and of itself, and thus has been subsumed, integrated into the capitalist system. Again, to recast Baudrillard, the strenuously defended right to a unique self actually betokens the loss of such lived uniqueness, its transition to an exchangeable commodity, a form of abstracted labor.

The shift away from specific job skills to dispositions raises many other issues as well: If a disposition for creativity and symbolic meaning-making is the new valuable skill set, can it be differentiated from the social and cultural capital that stems from class? Can a person be taught how to make distinction (in Bourdieu's sense of the word) without the proper class background to begin with, or would the fact of needing to learn it automatically disqualify them? How do you teach flexibility and a willingness to bend rules as well as follow them, depending on the situation? How do you teach an instinct for design and cultural trends? Can there be a school that teaches hipsterism? And how much hipsterism can the economy support? (Will Davies says not as much as many aspiring bohemians would like; he advises they drop aesthetic consumption for political organizing -- good luck with that!)

"Every Person for Themselves" Economy (18 March 2011)

Justin Fox, who doesn't blog enough anymore, wrote a good post about this Democracy article by Andrei Cherny, about the "Individual Age." That's Cherny's phrase for what used to be called "the new capitalism" and what left-leaning types (like me) tend to describe as "precarity" or "post-Fordist labor conditions." The security that once came from long-term employment with large firms and the safety net supplied jointly by employers and the state (and secured through labor-union activism) were discarded with the rise of neoliberalism, which preached deregulation, outsourcing, globalization, and total worker flexibility (remember Who Moved My Cheese?, the 1990s ode to employee servility?). Neoliberal (or post-Fordist or postindustrial) relations of production dispensed with employer-employee loyalty and made most workers free agents, drifting from job to job, perpetually insecure, perpetually having to sell their usefulness anew to employers always looking for reasons to get rid of them. Fox cites this 1997 Fast Company article, "Free Agent Nation" by Dan Pink, as an early reflection of these changes; Richard Sennett's The Culture of the New Capitalism and The Corrosion of Character are both about these developments as well, and more recently Tina Brown termed it "the gig economy."

Here's how Cherny explains this change:
The differences between the economy of the 19th century and that of the 21st are too many to list, but today, as in Jefferson’s time of independent farmers and shopkeepers, it is individuals, not large conglomerations, that propel the economy. The number of Americans working for themselves is growing rapidly and most Americans no longer work full-time for someone else. Yet it is not just the self-employed or entrepreneurs who are part of this new world. Every worker is grappling with the Individual Age. Lifetime employment with a single company is largely a thing of the past and the dependable health, retirement, and training benefits such a relationship held are fast disappearing. Americans born between 1957 and 1964 held an average of 11 jobs between the ages of 18 and 44. That trend is accelerating and is much more pronounced among those born in later decades. Untethered from large institutions, bouncing from one job to the next, today each individual is ultimately responsible for guiding their own career and economic future. Today, everyone is an entrepreneur; everyone is their own small business.
The way Cherny tells it, this "trend" is just sort of happening and no one is driving it or can do anything about it; it's just the world historical process playing itself out. This elides the actual concerted effort (ideological, political, economic, etc.) that has gone into making the neoliberal condition a reality, allowing for the needs of capital to dictate the living conditions of more and more people, subsuming more and more of everyday life.

Cherny hopes that government will change policy to support personal entrepreneurhood: "Health-care and retirement benefits should be made more personalized, portable, affordable, and universal." But as Fox points out, the emergence of the entrepreneurial self undermines the sort of solidarity necessary to lobby the government for this change to be implemented and for fairer tax treatment and the like. "To get us laws that reflect the new workplace reality, Free Agent Nation, by its very nature dispersed and allergic to large organizations, needs to develop a unified voice. Can it?" Fox asks. I'm pretty skeptical that it can; the destruction of such solidarity is almost explicitly the purpose of neoliberal reform, as the attack on public-sector unions' bargaining rights suggests. The goal is to get every person thinking and acting for themselves, which makes them most desperate and vulnerable -- oh, wait, I'm sorry, I meant free and flexible.

Pink, in the Fast Company article, paints a picture of precarity as knowledge workers making a savvy career choice to be independent, outside the corporate box and therefore "authentic." Various interviewees in the article gloat about their blissful work lives, like this serial small-time entrepreneur:
"I used to think that what I needed to do was balance my life, keep my personal and professional lives separate," she says. "But I discovered that the real secret is integration. I integrate my work into my life. I don't see my work as separate from my identity." The mask is gone. For this free agent, work is who she is.
As Pink stresses, in the neoliberal world, "work is personal" and there are no boundaries separating work and nonwork. This fits with the tendency for our consumption to increasingly serve a productive function in the economy as "immaterial" or "affective" labor, contributing to marketing innovations and enhancing brands' equity and devising new uses for goods, new consumer wants, new ways to get pleasure from spending. As more of our self-presentation can be captured digitally and tracked and amalgamated, particularly through social media and mobile communications, our identities themselves become productive factories of economic value.

Pink asserts this sort of thing makes work inseparable from fun. And maybe in a noncapitalist society the disappearance of work-life separation would mean freedom from alienation, pure harmonious praxis, meaningful and recognized work for one's livelihood. But under capitalism, this lack of boundaries yields limitless insecurity, and it leads us to entrepreneurialize ourselves, to see ourselves as little firms and work hard to augment our personal brand (which seems to me synonymous with neoliberal subjectivity). Rather than work becoming fun, fun becomes work, and hipsterism in its current guise is born. Our identity work becomes anxious, calculating, inescapably reflexive, the opposite of spontaneous. We become conscious that every little gesture must ideally be turned to account somehow to help us get ahead and protect our standard of living.

I think the rise of social media -- beyond merely mobilizing immaterial labor -- has had a lot to do with structuring, promulgating, and naturalizing the entrepreneurial subjectivity necessary for neoliberalism to function smoothly. They support the concept ideologically and institutionally where government doesn't, giving us a sense that we are capitalizing on ourselves and compensating for the collectivity and security we've lost. So the personal brand supplants the personality or the lifestyle -- terms that horrified earlier generations of social critics worried about commercialized identity. Now we network and self-promote and see this as the extent and purpose of social life. Of course! How else could it be? Why wouldn't we capitalize on our friend quotient, our "social graph" -- why shouldn't we measure ourselves in terms of what we can sell about ourselves? That's now how we discover what is "authentic" about us. It seems we are approaching a point where only what we can sell or hype about ourselves seems real to us -- only what is retweeted or liked on Facebook speaks to who we really are.

Inventing Attention (16 March 2011)

"Needing to Be Noticed: Understanding the Market in an Attention Economy," a paper (pdf) by Anthony Olcott of the Institute for the Study of Diplomacy, raises some interesting questions about the idea of an attention economy, some of which I would rephrase in a more quasi-Foucauldian vein. Olcott wants to know how to measure the return on investment for attention spent, so that individuals could determine how to invest attention rather than merely expend it. But I wonder if our awareness of attention as a quantity to spend is the problem, burdening us with a kind of reflexivity that cannibalizes our experience of being engrossed in an activity.

Some other, related questions: Is attention a currency, a good, or both simultaneously? Is our attention manufactured, produced, by the goods that demand it? In other words, does entertainment produce attention as a fungible quantity in our understanding as well as then absorb that quantity? More broadly, does the discourse about the so-called attention economy structure our attention as currency? Attention is not something we can ever accumulate or store, so to think of it as currency is misleading. It seems more like a product, something that marketers purchase from entertainment media. Attention is not ours to spend; it's instead a state of being that makes us sellable.

Olcott brings up the received wisdom that our sense of the scarcity of our attention is a product of the sudden information surfeit, which has made us aware of how little time we have for the information we want to consume, or -- the same thing -- the elasticity of our curiosity when information becomes cheap. Olcott has even attempted to quantify this: "the flow of information clamoring for attention has been increasing at somewhere between 30 and 60 percent per year for the past two decades, while our ability to absorb information has been growing at only about 5 percent per year (and that primarily through our growing tendency to multitask, or do several things superficially rather than one or two things deeply)."
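Taking the lower bounds of Olcott's figures at face value and assuming simple compounding (the exponential model is my assumption; he supplies only the annual rates), the gap is stark:

# Compounding Olcott's lower-bound figures: information growing 30% a year
# versus absorption capacity growing 5% a year, over his two-decade window.
years = 20
info = 1.30 ** years     # information: ~190x over two decades
absorb = 1.05 ** years   # absorption capacity: ~2.7x
print(f"information {info:.0f}x, absorption {absorb:.1f}x, gap {info/absorb:.0f}x")
# -> information 190x, absorption 2.7x, gap 72x

Even at the conservative end, the fraction of circulating information any one person can absorb shrinks roughly seventy-fold over twenty years: that is the arithmetic underneath the feeling of "overload."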

Our perceived flow of information is, of course, a matter of the effectiveness of the filters in place for us. Many of the filters earlier generations took for granted, the ones imposed by the absence of real-time communications and efficient transmission and storage, have now been eradicated by the advent of the internet and digital media. Analog media imposed stringent limits on how much information could be transmitted and stored. As Olcott puts it, "When technology made the threshold of entry into communication high, the amount of attention relative to the amount of information to which it could be paid was relatively large." That made the limits of our media-consumption time irrelevant. But now the analog limits are gone. We become aware of time as a problem, our attention as bounded, limited -- and then, in its palpable scarcity, we realize more fully its potential to be alienated as a commodity. That is, we become aware of its value, and we want that value to be convertible into other forms of value (as capitalism trains us to expect).

Relative to the automatic filtering imposed by those analog limits, the ones we are forced to impose on ourselves seem arbitrary. They require self-discipline; they seem theoretically optional, perpetually negotiable. The open-endedness makes us feel the information flow as "overload" -- it is never simply settled as what it is, and it requires continual decisionmaking from us, continual reaffirmation of the filters we've chosen. My RSS feed demands more from me than a newspaper, because I'm responsible at a meta level for what information it brings me; before, my decisionmaking would end with the decision to buy a paper. Now I have to tell myself I have enough, even as the culture tells me that in general too much is never enough, and "winning" is having more. As a result, I start to feel cheated by time because I can't amass more of it. I become alienated from it rather than inhabiting it, which makes me feel bored in the midst of too many options. The sense of overload is a failure of our focus rather than the fault of information itself or the various media. Calling it "attention" in the contemporary sense and economizing it doesn't repair focus so much as redefine it as a shorter span, as inherently fickle and ephemeral.

Brokering my own attention span is my attempt to reassert control. I will spend my attention wisely and get the most out of it by investing it in things that will "reward" it. But I fear that expecting to profit from paying attention is a mistake, a kind of category error. Attention seems to me binary -- it is engaged or it isn't; it isn't amenable to qualitative evaluation. If we start assessing the quality of our attention, we get pulled out of what we were paying attention to and pay attention to attention to some degree, becoming strategic with it, kicking off a reflexive spiral that leads only to further insecurity and disappointment. Attention is never profitable enough, never sufficient.

It seems to me that serendipity is a better attention-management strategy, a more appropriate way to deal with those times when we can no longer focus and become suddenly aware that we need to direct our attention somewhere.