God Really Does Hate Us All
Yesterday Full Stop reprinted my love letter to Slayer in memory of Jeff Hanneman. That sparked this amazing exchange.
My one beef with Derrida’s “From Restricted to General Economy: A Hegelianism Without Reserve” is that it’s not at all funny. This wouldn’t be a problem except that the essay, a reading of Georges Bataille’s critique of Hegel, hinges on Bataille’s laughter. Only by taking Hegel absolutely seriously, according to Derrida, does Bataille arrive at this laughter, and only by that process—unlike Bataille’s contemporaries, unlike Derrida’s contemporaries, and unlike so many today—does he do something other than reproduce the Hegelian dialectic. The burst of laughter from Bataille is the critical break, and I had hoped to hear something similar from Derrida.
It’s not really fair to call that a fault in this superb essay. Derrida’s method is to apply the same seriousness for which he praises Bataille to Bataille’s writing so that it, like any text, might unfold beyond the bounds of what it thought itself to be. Deconstruction is nothing more than absolute fidelity to the text. It is that fidelity that matters; the reproduction of style is ornamental, and my qualm is a stupid one.
I recalled that missing moment of mirth when reading Helena Fitzgerald’s take on the Twitter “follow a teen” phenomenon. Here, though, the issue isn’t as superficial as Derrida not being able to tell a joke. It is that follow a teen, a goofy lark spun out of the extended adolescence of people who spend a lot of time fucking around on Twitter, is being taken “seriously” in a way that is not intellectually serious. It is instead a performance of seriousness. It would be like rebutting Bataille’s parodies of reason with formal proofs that the sun, in fact, is not an anus. It is like pointing out that the man from Nantucket was actually from Concord, or telling Warhol he should get a refund from the guy who fucked up his prints. It’s stodgy and sanctimonious and avoids the challenge of the new.
(I should say that Dave Thorpe, the guy behind follow a teen, is a friend of mine, just so that doesn’t get cited as my secret motive here. Even before I met Dave I read his work and thought he was clever. I would think follow a teen is interesting even if I didn’t hang out with Dave. Now that I think about it, follow a teen was one of the things that first got me into Twitter. That was when I realized there was a wishing well into which millions of people were screaming and anyone could go for a listen.)
Follow a teen and Twitter are creepy, Fitzgerald writes. Yes, I agree, there is something really interesting about the heterogeneity of public and private fields projected and accessed in a single space on Twitter. That can produce effects that are “creepy,” if you want to be terminologically rigorous about it. But that is the essence of what is interesting about Twitter: other people have subjectivities different from my own! And I can access them in previously impossible modalities! I find this wonderful and mysterious. But I can see why it would be creepy, too.
“Following” has creepy connotations, like stalking or being a cult member. Like in Kevin Bacon’s serial killer show with literary pretensions, The Following. In fact, in one of the early episodes Bacon mentions something about the parallels between cults and social media. But even in a hamfest like The Following the writers knew that dog wouldn’t hunt and cut out further half-baked ruminations. The literary idealist confusion between language and material reality is seductive but fallow. A serious interlocutor of Twitter would examine the relationships between followers and followees, which are again heterogeneous and surprising, rather than a single word, to understand the multiple cultures built on Twitter. (When you “subscribe” to someone on Facebook, do you think that is a different experience because FB uses a different word or because it is a different network built to create and channel different types of relationships?)
Lol that anyone would write in praise of Facebook: “Twitter’s very language encourages voyeurism in a way that Facebook doesn’t. Where Facebook is all about ‘friendship’ and ‘connecting,’ Twitter is about following and talking about people publicly behind their backs (‘subtweeting’).” Yes: Facebook is not a giant one-way mirror. Facebook is all about friendship and connecting, not filtering your feed to ensure you see only the things that Facebook—not your “friends”—want you to see. This is a true and accurate representation of social networks in the year of our Lord 2013.
But I suppose that does answer my rhetorical question from a moment ago. As at least one person sees things, Facebook provides a service equivalent to that which I get from Weird Magazine, while Twitter only ever allows me to be the Zodiac Killer.
That came out meaner than I wanted it to sound. This isn’t supposed to be a hit piece on Helena Fitzgerald, to whom I am grateful for bringing more attention to Twitter and its bizarre dynamics. I apologize for getting ugly. There are writers who are orders of magnitude worse and I will save my bile for them. I merely disagree with Fitzgerald, which I will try to do more politely from here on out.
I think on some level Fitzgerald gets that follow a teen is an expression of intergenerational solidarity. That a 29-year-old like myself is interested in teens because he remembers that time with such intense false clarity, and that teen-ness was never overcome so much as folded into a later incarnation of that self. Follow a teen is a celebration of teens. This is teenfest, with the bowl of never-ending teens. Fitzgerald’s kicker, “Your Twitter feed isn’t complete until you follow a teen, and you aren’t entertaining until you start acting like one,” might be a diss at Thorpe et al. (I hope not, since it would hinge on using teen as a pejorative) but it also points to the respect of the follow-a-teeners for the discourse of the adolescent banal. Or maybe this is meant as praise for follow a teen’s entertainment value? Perhaps the distinction between teen and non-teen collapses as the act of observing a teen confers on the observer the epistemology of a teen???
(At this point in the writing I began to seriously question my reading of Fitzgerald’s essay. Maybe it isn’t a critique of follow a teen; maybe it is a critique of all teens, Thorpe included, in which case my whole argument will have been an utter failure. Ack!)
The fucked up part is that there was a great response from Tavi Gevinson in the form of a “follow an adult” counter ‘shtag (my style guide requires that I refer to hashtags as ‘shtags so no one will ever think they are cool). Now this is an intellectually serious response. Gevinson understood the original content—an agonistic fascination with the other, but an other across a wafer-thin temporal divide and with a Sophoclean fatality of becoming—and responded in kind. And was funny. She got the joke and made one back. This is what I had wanted from Derrida.
You see, follow a teen and follow an adult aren’t a “war,” as so many people have called it. It is a game. Twitter is a game of self-revelation (an RPG, if you’re into typology). Only fascists like Hegel or Schmitt or Bush-era neocons look at two elements pulling against each other and declare the necessity of a war. If this is a war, it’s one fought with water balloons, except both sides are smoking and muttering instead of picking up the balloons. No one takes it seriously even as a figurative war.
Gevinson made the perfect response to continue the game. And, to return for a moment to the idiom of Derrida, this is the only response that can stall the abyss that Twitter brings so close. We each hurl our words out into the aether and hope someone, impossibly, responds. And they do. That, I think, is the magic of Twitter—that it captures the miraculous structure of human isolation and connectedness in full sight of the abyss. Which is to say, it’s creepy as fuck.
But what do I know. I am Twitter user weedguy420boner. You probably shouldn’t take anything I say too seriously. (If anyone wants to write a reply, please use “weedguy” as my first name and “420boner” as my surname).
While contrasting Walker Evans’s photographs of the Great Depression with attempts to capture the effects of the 2008 crash—the former unmistakably stamped with the signifiers of poverty, the latter wrapped in a “well they don’t look that poor” hermeneutics of commodity fetishism—George Packer brings up a parallel “confusion over nomenclature—is this a depression or a recession, or some qualified version of either? should it be capitalized or not?” I had a similar thought while flipping to his New Yorker article “The New Depression Journalism.” Would this be about the economic disaster or the mental illness?
That equivocation provides a way to understand both the representational and political differences from the 1930s, when, as Packer chronicles, writers and photographers turned their attention to workers and their spirits toward communism. That engagement with the victims of economic catastrophe and a unifying mission of rectification stands in contrast to the bulk of the last half decade of literary production. In focusing on the supervillains of the crash we have stayed within the logic of celebrity culture. Packer’s thesis here is entirely correct: this depression has re-entrenched individualism, both for the criminals whose floggings we gleefully attend (if possible—the underprosecution of the responsible parties remains scandalous) and for those ordinary people who will try to weather a depression without the benefit of strong unions, work programs, or even a belief that government exists to ensure the basic welfare of its citizens.
It is a lonely depression, one you can’t see in the pictures. (For an effective visual rhetoric we have to turn away from the people to architecture in decay; urban landscape photography already expects a permanent homeless population so the presence of very poor people wouldn’t demonstrate anything historically specific. The buildings, though—we never thought we would see them like this.) The depression exists as a social fact but not a public one, not something we share, reflect on, rally around or against. This is shocking, isn’t it? That one of the defining collective experiences of a generation drives us not to band together but to see ourselves, tragically or heroically, more alone? The linguistic slippage is illuminating: the optics and experience of this depression are more like the mental illness than the inter-war historical period.
The affective notes of depression register throughout Packer’s literary tour. Being unemployed leads to “numb terror,” “a pervasive sense of instability, of wasted talent and unspent energy, and the unemployed can’t help blaming themselves.” While the Great Depression created more material devastation, “the new depression seems to have produced less hope.” The most eviscerating summary of what it means to be an unemployed American comes from Barbara Garson’s Down the Up Escalator: How the 99 Percent Live in the Great Recession:
“The companies that wrote us off as workers now write us off as consumers. If you’re not a worker, not a consumer, and you don’t earn significant income from investments, then you don’t have much of a place in capitalist society…. It can’t be pleasant to live in a country whose elite have no regular use for you.”
This is an astounding analysis. Capitalism is spitting out Americans like sunflower husks sucked clean of seed and salt. Neither worker nor consumer, they—ordinary Americans—are trash.
Garson captures the clearest image of what it means to be a person in a capitalist society. The competition between management and labor over wages and benefits is not the fundamental problem. It is that capitalism is anti-human. Capitalism’s success destroys the capacity of humans to exist. Capitalism is not the competition between groups of humans generating ideas about how best to solve material problems. That predates capitalism by thousands of years and, if humans survive capitalism, will exist long after it. Capitalism is a competition to see who can produce more capital. It is a successful meme because its reproduction is central to its operation.
It is not, however, a coordinated system or community concerned with overall optimization. Local actors—corporations and individuals—pursue the reproduction of their capital without attending to the maximization of capital across all actors. Thus a fractional increase in the capital of the wealthiest individuals is good, in terms of satisfying that actor’s contract with capital, even if it comes at the expense of the massive loss of wealth across all other actors (for a full citation, read any newspaper between 2009 and the present). If capitalism constituted a community it wouldn’t work this way. We would be concerned with total capital, not local maximization. And while it is easy to separate the idea of “building a better mousetrap” from capitalism—cavemen were doing this; monkeys do it—it is impossible for capitalism to agree that some actors should curb their profit seeking for the sake of allowing others to benefit.
It is difficult to imagine a lonelier way to see the world than that offered by capitalism, or a more lonesome outlook than what it leaves for those it has abandoned. Creating jobs in a populace ravaged by such a virus is not enough. Give a man a job, and he will eat (but not go to the doctor) until he gets laid off again. Enmesh him in a support group with other men and women and maybe you can transform his private depression into something less than invincible.
The similarity between clinical depression and the collective mental state of America’s workers—which is largely the sum of millions of individual cases of depression—makes me think that in order to accomplish something more than a series of “I’m too tired to care” policy compromises, we first need to address how a lot of people feel like shit. We need to treat people like people, not like machines for performing labor. (If you agree with that statement—*Jeff Foxworthy voice*—you might be an anti-capitalist.)
People have mental/emotional lives that can get fucked up, and chronic unemployment is a really good way to do that. Finding a job is hard; finding one while staving off depression is really fucking hard. We should address this part of the problem the same way we would any other case of mental illness. First, recognize and destigmatize the situation. “Having been poor is no shame, but being ashamed of it is,” as Benjamin Franklin wrote, and that should apply to both the economic and psychological valences of depression. Second, the state needs to provide medical support. It just does. This is what a state is for. If you are broke and uninsured and depressed, paying for trips to a doctor and prescriptions is impossible. This step, then, is a policy recommendation: if you, think tankers and congresspersons, really want people to do all the entrepreneurial innovation that is said to spring up during times of unemployment, give them the hand up to feel like themselves. A prescription for Xanax is cheaper than unemployment insurance.
Lastly, the most difficult step: social support. The former recommendation was a “liberal” request for the state to do its job. This is a “conservative” request for people to help themselves. But this should not be just the individual striking out to find a support group. It should be us pulling others into our existing groups. It doesn’t have to be any kind of group in particular, just something that offers the type of experience we get from being together. Happy hour crews, bike parties, bowling leagues, soccer teams, churches, punk bands. These all work.
Garson mentions some unemployment support groups, and only from one of their members is there a hint of anything revolutionary. I don’t think such groups will lead to revolution, nor do I think they need to be based around shared unemployment or politics. The important thing is simply that you feel like part of a group. This is something that humans do; we are gregarious animals. And, contrary to what many conservatives have said about our little platoons, this is exactly what capitalism has stripped from us over the last two hundred-odd years. They say capitalism pushes us into groups, the corporation being the largest specimen but our day-to-day team of coworkers the most important. This is true in a way: capitalism forces people next to each other. But in only comprehending them as labor machines, it will tear them apart just as quickly. Capital does not see a human group; it sees only the reproduction of capital, and the degradation of what is perversely called “social capital” is an accepted externality of that process. Like all environmental degradations, it slowly increases the cost of doing business, but there are always slightly less battered groups to despoil. Capital will keep moving toward them.
To recover from 2008 does not mean merely returning employment statistics to where they were. To fix the symptom without addressing either the structural damage or the cause would be prologue to a relapse. It means returning America—and elsewhere, though Packer’s focus is on books about the American experience—to a place where people live in groups that are not organized as expedients for capital. The result, hopefully, would include both mental health and economic benefits—the best way I know to find a job is through my circle of friends—but the real benefits would come in reconstituting the only way of life that can resist the liquidation of the human psyche by capitalism.
Understanding the present disaster means understanding what has happened to people. Capitalism cannot understand that cost, and so it periodically and indifferently induces a crisis to be repaired at the expense of social capital. We, however, can comprehend that loss by comparing our situation to more radicalizing catastrophes, as many of the authors in Packer’s survey do. We can see that our political process has hardly bothered to notice the precipitous drop in the value of ordinary people. And, what I think is simple enough to be scoffed at but warranted enough to be worth saying, we can help each other out of this depression. The question that points us back to Walker Evans and company is whether we do it for the benefit of capital or ourselves.
Tim Wise’s concise piece on white privilege and the Boston bombing has been making the rounds in my social media circles. Good thing, too: while I am loath to say anything is “required reading,” this deserves it. Today, however, when some of my coworkers were making a similar observation—that “we only call it terrorism when brown people do it”—I felt that such an account of white terrorism could be improved.
What is the defining feature of terrorism that makes it different from other types of violence? My understanding is basically that found on Wikipedia:
“Terrorism is the systematic use of terror, often violent, especially as a means of coercion…. Common definitions of terrorism refer only to those violent acts which are intended to create fear (terror); are perpetrated for a religious, political, or ideological goal; and deliberately target or disregard the safety of non-combatants (civilians).”
The distinguishing feature isn’t the color of one’s skin (obviously, right) but that it is systematic and carried out as a means to an ideological end. The latter almost implies the former: it is hard to be systematic without an organization, and it’s hard to sustain an organization if it is not dedicated to a meaningful goal. (It’s worth noting that the figure of the Joker in the Batman universe explores such a possibility. The Maryland snipers of several years ago would fit as well). By this definition of terrorism, which I use to separate terrorism from violence, someone like the Sandy Hook shooter is not a terrorist. They are mad, which is to say there is no sense to their violence. This is the cost of living in a fallen world. There is senseless violence.
This has had the historical consequence, however, of giving many people with brown skin a reason to be terrorists. The simplest reason for terrorism would be that you are a citizen of an occupied or colonized nation. You do not have the military capability to win a war outright, but you can wear down the occupier’s political will to colonize. Anti-colonialism is a psychological war for which terrorism is the most rational response, and history shows that it works pretty well.
To be clear, I am not suggesting that non-white people are terrorists because of something about their race, or that all non-white people are terrorists. I am saying that colonialism creates terrorism, and that in recent centuries colonialism has most notably been exercised by white people upon non-white people.
Of course, as Wise documents, there are many white terrorists, but I think he provides an incomplete list. By my definition of terrorism, the U.S. military is a terrorist organization, and I do not think that is an accident. They are merely using the cheapest method of subduing a resistant population. They would be remiss not to take advantage of the potential cost-savings.
I would say that the disparate white citizens who have taken it upon themselves to bully and attack people of color are not terrorists, except that they actively connect with each other to reinforce their sense of purpose. Social media is a good way to connect with fellow racists. So is Fox News. Once you participate in a community that shares a vision for how to terrorize a population and act on that vision, you are a terrorist. White terrorism is so ubiquitous that we don’t even see it; we skip right to Timothy McVeigh, like it only counts when the victims are white, too.
Not all spectacular violence is terrorism and not all terrorism is spectacular. Clarifying what terrorism means can prevent all kinds of problems.
So I know Wreck-It Ralph came out a while ago and it is customary to cover culture stuff when it is fresh. Whatever. I know I’m not the only one who watches movies months after release. You know who else doesn’t catch the midnight showing of The Hobbit? The people this article is about: youngish parents.
It has become de rigueur for children’s movies to break something off for the grownups in the audience. Usually this takes the form of snappy references to Star Wars or The Stone Roses or Iran Contra or other stuff that only old people remember. That’s nice and all—better than the psychotic schmaltz you could be getting—but Wreck-It Ralph, like the best children’s movies, offers more than that Family Guy Lite patter. It has a large and coherent anchor point for parents to see themselves in a relationship with the child and her world. Ralph is a clumsy, conflicted, naive galoot willing to make any sacrifice for a little girl. That’s me in real life.
To be clear, Ralph isn’t a parent. He’s just a guy. That sets the parent-child dynamic of Wreck-It Ralph apart from other films in its category. Finding Nemo, Brave, The Lion King, Kung Fu Panda—these all have actual parents in them. But they are stories about being parents, and with the exception of Brave the parental figures present unconditional love but no personal change. Wreck-It Ralph is a story about becoming a parent. Ralph starts out as a guy who lives in a dump and is reviled by the fancy (insomuch as they don’t sleep in trash) folks next door. Now this I can relate to. I was that guy with the recycling bin overflowing with 32s of High Life, throwing TVs out a second-story window just to see them break, and using Bud Light boxes to cover myself when I fell asleep on the porch. I had a stupid repetitive job that no one celebrated. I wanted to be welcomed into adult society but only on my own terms.
Ralph strikes out to earn their respect by winning a medal in one of the other video games. He sort of succeeds in a Halo-esque shooter, but as a result triggers a series of mishaps that land him in Sugar Rush, a cutesy racing game. There he befriends Vanellope von Schweetz, a punky outcast. While Ralph’s wrecking skills made him the bad guy in his own game, they are perfect for helping Vanellope. Together they break into King Candy’s car factory to make a custom racer and when it turns out Vanellope has no idea how to drive, Ralph pounds out a course for her to practice on.
This is an image of fatherhood I can identify with. I’m not good at making stuff, my own life being the most extreme example. I can’t make an ideal world for my daughter, as the patriarchs of Aladdin or The Little Mermaid—whose daughters run off on them, by the way—try to do. That idea is logically incoherent: there is no ideal world for a person that pre-exists their participation in creating it. But here’s what Ralph and I can do: hurl our fumbling selves against the world. Smash it up real good. Make a clearing for her to become who she is.
The most insidious enemy is King Candy, who convinces Ralph that it is really in Vanellope’s best interests not to race; that by doing so she will embarrass herself in front of a global audience and have her optimistic defiance crushed with a finality that mere exclusion never could. Ralph buys this line (it makes a certain amount of sense in the rules of video game world) and wrecks her car to protect her from herself. It’s a hard scene to watch.
But guess what: King Candy was full of shit. King Candy is actually the worst guy in all of video game history, a villain whose selfishness destroyed his native program and who has arrogated the throne of Sugar Rush by tampering with the game’s memory. What Vanellope thought was best for her really was best for her. And not just for her: when she crosses the finish line she breaks King Candy’s hack and restores all sorts of goodness to the realm. I believe any viewer can share the pleasure of this most deliciously just moment not in spite of it being a figuration of the revolutionary destruction of patriarchy, but because it is a figuration of the revolutionary destruction of patriarchy.
The lesson of Wreck-It Ralph is very different from that of its peers. Finding Nemo and company give you a “truth is in the middle” resolution where both parties learn to be a little more empathetic. Dad, don’t be so uptight. Kid, don’t be a little shit. That’s timeless stuff and worth repeating. But what if, contra Kurt Cobain, you think the figure of the father is hopelessly flawed and maybe being a dad is better?
“Dad,” at least as Cobain uses it, isn’t quite right— I’m not trying to go full ice cream for breakfast and bedtime is never. I understand that kids need authority to feel secure and to correct them when they do wrong. I’ve got no problem with that. There are plenty of questions where I think my moral intuitions are correct enough that I would be happy to impose them on my (or anyone else’s) children. Hell, I want to impose them on the world in general. At their core is the observation that the evil in the world comes more from grown men than little girls. Wreck-It Ralph agrees and offers a solution so simple even a dude who eats pizza out of the trash can do it. Smash those fuckers and their bullshit empire.
Maybe if I kept up with staff changes at SNL I would know the cause of their detour into reactionary humor, but as a casual consumer all I see are the effects: sketches where the premise requires you to laugh at minorities. I’m not talking about merely unfunny sketches, like “what if Quvenzhané Wallis was the Pope,” which could be eliminated from the schedule by tweeting the premise and seeing it sink unnoticed into the fathomless sediment of Bad Jokes on Twitter, but ones that trade on putting down some less powerful group. A couple weeks ago there was one where the premise was “what if a machine acted like those lazy black Starbucks employees.” And this week there was “She’s Got a Dick,” a mock movie trailer for a romantic comedy in which it is discovered, get ready to laugh at this one, that the chick is a transwoman.
(The title of the sketch used in the show does not include “Dick,” by the way. It’s dick with some letters replaced by symbols, which is maybe the cue to adult viewers that this sketch operates on the intellectual level of thirteen-year-olds giggling away their discomfort when the gym teacher hands out a diagram of a uterus and announces the next six weeks are sex ed.)
The sketch gets many things right—everything, in fact, except its refrain. As a spoof on romantic comedies it does what satire should do: point out the predictable conventions of stagnant cultural forms, right down to Fred Armisen’s deus ex Eugene Levy. Bolting that set of subtle observations onto the mockery of transwomen—the “joke” of which is captured just as well in a video honestly titled Transphobic Techno—is something that SNL needs to rethink.
In case I am being too subtle, I think it is inappropriate to treat the existence of transgender women as a joke in itself.
What’s more, they’ve been beaten to the joke, not just by YouTube bottom feeders, but by It’s Always Sunny in Philadelphia, which has a long plot arc involving Mac and a transwoman. Here’s the thing, though: It’s Always Sunny does a much better job. Mac is torn between his oafish impulses to reject her because she has a dick and to embrace her as an ideal partner because (in his distorted view of himself) she is a fellow hardbody. Mac struggles with the ideas that she is radically different and essentially similar, where “essentially” points to the joke: the truth of Mac’s character is delusional vanity, and so the mode in which he perceives the dignity of a person is through their physical appearance. Though the characters refer to her as “the tranny” throughout, the end of the plot arc, in which Dee gives the woman and her husband a baby she was carrying as a surrogate, shows what it always does: the principal characters are despicable degenerates. They are unnaturally awful, not transgender people.
It is a damning sign when you have been outclassed by It’s Always Sunny. Next stop on the road to rock bottom: Seth MacFarlane’s Roadside Rape Joke Emporium.
The SNL skit never asks us to laugh at transphobia as a malignancy. It treats transphobia as a normal way of seeing the world, something we can all share in. Why or how does it get away with this? I think the answer can be found by comparison to the types of appeals made in ads for gay marriage rights. Some of those commercials, as commentators have noted, feature straight couples talking about friends and family who are gay. They do not show gay people. An alternative appeal speaks directly to the equal rights of gay citizens. The former method is more effective in swaying conservative or borderline voters to support gay rights. Why?
An obvious answer would be that such voters do not like seeing gay people, and there is probably some truth to that. That wouldn’t fully explain their behavior, however; there are plenty of true homophobes who have continued to vote against gay rights. I think a better explanation is the type of relationship the viewer is asked to imagine in each commercial. The appeal to rights per se speaks in the third person; that is, it speaks about people you do not know, asking you to extend rights to a populace you know only through imagination. The appeal via the family member speaks in the second person about someone you know intimately. Everyone knows—loves—someone who is gay, right? That is a powerful call to the conservative worldview. I suspect that SNL is trading on the assumption that we don’t know someone who is transgender. Well, I do, so fuck you. More broadly, reactionary humor assumes that we don’t have second person relations to whatever group is being mocked.
Philosophers use the difference between a duty to the second person and to the third person to describe the distinction between ethics and politics. (Simon Critchley’s The Ethics of Deconstruction lays this out really well). In The Righteous Mind, Jonathan Haidt finds a correlation between each type of moral reasoning and one’s alignment as conservative or liberal. If your values are strongly based on local groups—the second person—you are more conservative. If they are based on the imaginative leap to people you do not know, you are more liberal. Porting experiences back and forth from one sphere to the other is the contest of politics as we know it, which Corey Robin describes in The Reactionary Mind. Conservatives try to transform the hierarchies found in second person experience (the workplace or home, for example) into a model for governance. Liberals advocate for the essential dignity of all humans, an equality which has been variously diminished by historical accidents and which we have the power to undo.
I bring in the second person/third person distinction not to say that conservatives are doing politics wrong (or that liberals are doing ethics wrong) but as a frame for reading how SNL—or anyone else—asks us to view a minority. She’s Got a Dick frames an empathetic relation in the third person so as to mock it. We aren’t invited to identify with her suitor (Justin Timberlake). The spoofiness of the bit distances us—we’d be dopes to identify with a romantic vision of a transgender woman—and so gives us the worst of both worlds. It’s not as classically reactionary as the Verismo commercial—a second person relation to an antagonistic other—but the result is the same. I wouldn’t expect a comedy to make a dispassionate appeal to human rights, but they can invite us to identify with situations where people who are not straight, white and affluent are treated empathetically.
Or, at a minimum, avoid looking like a media organ for Rick Santorum. For god’s sake, ABC Family is already doing this. SNL needs to get a fucking clue.
Like Hegel’s descent of freedom from the Asiatic despot to the Athenian governors to the French Revolution—the proofs that one man, some men, and all people, respectively, could be free, and incidentally a trajectory headed from the barbaric East to the center of Europe—the proposition of Google Glass is to make a qualitative difference by delivering much more of the same. The laptop freed us from the desktop. The iPhone freed us from the laptop. Glass aims to free our hands from the iPhone. Set against Hegel’s fable, it’s hard not to feel that the history of our time is somewhat less grand than it could be, but I suppose our descendants will feel differently about the generation that melted the last of the Arctic ice.
While easy to pooh-pooh, these differences are not minor. Laptops and smartphones, for example, have contributed immensely to the dissolution of the barrier between home and office. For the consumer, they are good recreational devices. For management, which exists to lower the cost of labor, they are a miracle.
Will Glass improve on that? Maybe. By making it faster and easier to reply to messages, workers might be more efficient and more willing to do so. The value proposition of Glass, however, is not that it offers more or better information. That’s what prior generations of computers have brought us: more information, first in the lab, then the home, and now anywhere, anytime. We could hardly have more information. Glass tells you the same stuff Google has been giving us for years—who was in that movie, how do I get to the train station, do I have any emails—and does the same stuff we expect from smartphones—photograph and video capture. The only question in this regard is whose business model we’re looking at. If Google, then it’ll be giving us a bunch of ads in addition to what we want; if Apple, which seems more likely, then we get hardware that costs 1.5x that of the competition and gets changed every 9 months, ever so subtly, until at last one change makes all of your old docks incompatible. Thanks again for that one.
So we’re not getting anything new there. What does Glass give us? Our hands and our eyes. That’s the pitch, and it makes for great advertisements. I love it sickly—it is the same visual rhetoric as the old iPhone ads, minus the frame of the phone itself. And it is a perfect dialectical realization of the camera itself, the mechanical eye pointed by hand, to advertise that now you can use your own eyes and own hands to reach that world. All of this is in Google’s talking points, by the way, under the section about being more human. It’s just so perfect I had to take a moment to recognize what they’ve done.
Presumably the people at Google know it is stupid to talk about being “human” in essential terms and are using the word, as it always has been used, as a form of advertisement. At least it’s nice that they’re essentializing the human to sell gadgets instead of enslave continents, but the worms have been out of the can for a long damn time as far as technology being a part of what it means to be human. The hands and the eyes are a good place to start, though. Engels, for example, makes a big deal about how hands turned us from apes to humans, and the world as we see it is easily treated as the record of truth (“eye witness”). But why do hands make us human? Because they hold tools. And the eye is completely bound up with the technology of representation, of looking at an image and reading it for similarities to other things. Our eyes and hands were deep in technology tens of thousands of years ago. So don’t fucking talk to me about how Glass makes us human. Give it to me straight: tell me how it’s going to make my life easier. I’m not paying for a nice ontology, I’m paying to get my labor back.
That’s what the pitch comes down to: minimizing the time you’re futzing around with your phone instead of hugging your dad or daughter or whatever it is you want to do but have to sacrifice to stay at the top of your professional game. That’s great. I’d buy that for a dollar. And I agree that it’s a matter of when, not if.
The question then becomes how it changes us. What strikes me about Glass is how it captures the functionality of the iPhone without the fun (haha, I’ll invoice you for that one later, Apple). As mentioned earlier, it seems great for messaging, search, location data, and image recording. (The only caveat is getting the voice control right, but again, that’s a matter of when). That’s not exhaustive of what smartphones are used for or what they have meant. Smartphones are a huge gaming platform. On one hand that means they have propped up the social gaming bubble even as the Facebook web portion collapses (following the pattern of consolidation from Call of Duty N+1 to N+1 Ville and Angry Birds N+1). On the other hand, it has introduced organized play to previously non-gaming audiences in a way that I find beautiful. I think it’s fucking great that my mom tells me about how she is destroying my aunt in Words With Friends because I love games. I think gaming is good for us. (And for that matter, I think it’s good for a lot of species that aren’t immediately included in “us.” My dogs, for example). That’s why I make games. I won’t go on about why games are rad but presumably you know since you aren’t an unimaginative clod.
Will Glass support gaming in the same way? I don’t mean in a technical way—it’s not a matter of whether the device can handle it—I mean in a technological way. Will we use it for gaming? My prediction is no. Glass looks ideal for location-based gaming, which is to say, it will fail as a gaming platform. Why has location-based gaming failed to ignite? For the exact reasons Glass makes sense: because the real world is actually pretty compelling. When I’m on the ferry out to Alcatraz I don’t need to be screwing around on my phone because my surroundings—the stuff immediately accessed by my eyes and hands—offer a plenitude of stimuli. If Glass does what it’s supposed to, it will make gaming with it look stupid.
Additionally, Glass looks like a bad gaming platform to me. We are good with our hands, and with the sensitivity of a good controller we can transform thoughts into satisfying game inputs. I don’t see that in Glass because it is designed not to require rich touch input. Again, to the extent the Glass designers are right about the importance of hands for human expression—so important, in fact, that learning sign language is one of the very few ways to make a child demonstrably smarter, and when people lose gestural abilities they also lose verbal ones—they have designed a device that will not be good for gaming.
Which brings us to a dilemma. Will smartphones persist because of the rich gestural world they offer, one that diverges from the real world and entices us into gaming? Will Glass replace phones, and will gaming (as a practice and an industry) go into a recession? Am I totally wrong and we’ll be playing games on Glass that aren’t yet imagined (well, I’ve got a few ideas) because the real world still sucks? Or did I put the cart way before the horse and Glass won’t matter at all? For any but the last option, the insights and intentions of the Glass design team provide a starting point for thinking about how such a technology will inflect being human in the near future. And, for those who care about such things, the relationship between labor and gaming will continue to tighten as they evolve on the same platforms.
In a previous post I tried to describe why social networks are good places to create, steal, and redistribute the fruits of creative labor. Some friends said it felt incomplete, like a chapter in a larger work, and this is the result: an additional, longer, slower exploration of how I see social networks. It’s still incomplete but this time it’s by design. Even though I think 95% of discourse about social networks/media is absolute bullshit, I’m going to lay out my analysis on the off chance it is valuable to someone. I thought a good next research question would be alternatives to the double loop of production and theft I described before.
Before getting to that question, however, it’s worth pausing to consider my (and possibly your) reasons for caring about this. While it may seem self-evident that getting ripped off is bad, anytime human interests appear self-evident I take a step back.
Why I am against the joke stealers and other sundry lame asses:
(By the way, if “creative labor” seems like a bloodless corporate catchall, I am using it 1) to point out this isn’t just about dumb jokes and 2) to keep afloat the concept of labor against the efforts of union busters and jack off work revolution gurus. In Cornel West’s The Ethical Dimensions of Marxist Thought he writes—this is a throwaway remark but it made a deep impression on me—that labor is the unacknowledged bond of our mutual need. Labor is separate from work; work is labor shorn of its sociality so it is uniform and quantifiable. What we have to offer to each other is other than work. Labor points to the human capacity to create, either in a way that is liberating and expansive or in a way that is repetitive and soul-crushing (Aesop Rock’s Labor Days nails this). And it is also at this point, before labor has been determined as a specific type of work or play, that it offers a point of commonality—what Marx organized as the International—beyond our local professional interests.)
Anyway, I’m on one side of this issue for a number of reasons, all of which I think are good but not all of which have the same ethical standing. I think being loyal to one’s friends is a good quality but it can’t adjudicate claims between competing groups. Other than that, there are concerns of self-interest and concerns of justice, both of which need some place in your practical philosophy or else you become a doormat or a sociopath (sorry, “libertarian”). If you want to disagree with me from first principles, hopefully I’ve made it easier for you.
If you agree with any or all of those reasons you too might want to find some escape from the labor relations that social networks facilitate. Here are a few models of socially networked creative labor that cannot be captured in that cycle.
Obscurity is an alternative to the paradigm of online privacy. Privacy is about giving a bunch of highly structured information to one powerful entity and asking them to conceal it from other entities. Obscurity is about making that information less legible so you don’t depend on a guard dog. There’s a good article at The Atlantic on obscurity, and a short academic piece offers the following definition: “We define obscurity as follows: Information is obscure online if it exists in a context missing one or more key factors that are essential to discovery or comprehension. We have identified four of these factors: 1) search visibility, 2) unprotected access, 3) identification, and 4) clarity.” The theorizations of obscurity I’m familiar with are mostly concerned with solving the problem of protecting you from Facebook and Google (who sell aggregations of data specific to you) rather than from other individuals (who sell very small chunks of data that don’t have anything to do with you specifically). In fact, ripoff artists want to make it look like you had nothing to do with the origin of this data—claiming it as original is what makes it plagiarism. (I hadn’t realized this before but there is a very neat symmetry between big data and the entrepreneurial plagiarist).
But as a way of thinking about the protection of data from would-be pilferers, obscurity fits. For an example of missing clarity, check out Something Awful’s FYAD forum. Many of the funniest people on Twitter came from FYAD, where they went largely unnoticed because the forum is completely fucking impenetrable. You can see a lack of identification in Reddit’s anti-doxxing ethos, central to creating a safe place for rapists and child pornographers. Content on these sites is not highly visible, protecting it from theft or interdiction simply because of the effort required to get it.
Both SA and Reddit are pretty easy to access. Reddit is free, SA costs a one-time nominal fee, and once you’re in you can get any content you know how to find. Another model is to make content hard to get because you have to subscribe to the person emitting it and there is no network supporting (and recording, indexing, publishing) the transmission. I’m thinking of the newsletter I get from musician/musicologist Ned Sublette. I met Ned when I was giving a paper at a conference at UCLA and he was the keynote speaker. At the end of his talk I went up to him and said something about how he was cool, and he handed me a clipboard to sign up for his newsletter.
That was a few years ago. Since then I’ve been getting Nedslist emails with links he finds interesting (along with his commentary), promotions for shows he’s playing, and firsthand reporting of what he’s seen in his travels. On one hand this is just a throwback to what we did before Facebook, or, hell, before the internet at all. It’s the family digest tucked in with the Christmas card. On the other hand, it is a practice he has maintained in parallel with Facebook and the rise of social networks because it creates a different kind of distribution of his thoughts and writing. In this model, there is no network, there are just terminals (and most of them are supposed to be read-only). While there is certainly data aggregation and analysis when it hits my Gmail—those well-placed ads for BP disinformation next to a story from Ned about Deepwater Horizon—replacing a social network with an email client means messages go to explicitly intended recipients rather than an unseen public. Access to the list is obscure because you have to run into Ned, and the network it creates has different storage, distribution, and read properties than what we think of as “social networks.”
Local Social Networks
We can get even smaller. If you just want to joke around with people you like, why not use group SMS? Or TinyChat, where the richness of video introduces a layer of obscurity (people doing things that are easily human-readable but less so for machines). Not only can you avoid people trying to steal your shit, you can avoid the intrusive and sometimes abusive people that can come at you on an open network. (I’m a dude so I don’t get much of this but I know it is nearly constant for women on Twitter). Much of our knowledge of Jim’s Hole comes from text messages between @dogboner, @fart, and @degg. I’m calling these “local social networks” because they connect multiple parties but do not exist independent of the participants. If there is an established term, please just imagine I’m using that instead. This differs from the Nedslist example in that each person in the network knows everyone else and there is no barrier to inclusion. It’s also nice that group SMS apps keep popping up so it’s easy to avoid the hegemony of a single provider if you want.
All of those are ways to shrink from the exposure that comes with writing to a social network. The classical way to do this is to write to a publication that has protections in place and an institutional interest in enforcing them. (Twitter and Facebook’s interests are against protecting authorship). This is professional writing. You protect your best ideas by not publishing them to a social network and/or by making them more complex than can be captured in a tweet or status update. Professional writing is great work if you can get it, although it comes with its caveats: the devaluation of quality writing by the can’t-tell-it’s-not-butter social media substitute, the utility (sometimes requirement) of a social media presence for landing such gigs, and the scarcity of available positions. The longer a written piece is, the harder it is to steal without being noticed, but it’s also harder to develop such pieces against the barrage of micro-updates and to maintain the interest of a readership that has trained itself to grow bored.
Professionalization points to the unique power of money in our legal system. Once money becomes involved the harmless act of “borrowing” a joke becomes actionable; it becomes theft. Being free is core to social networks’ user adoption (despite what your uncle posts), and it is also core to spinning off content that can be repurposed without infringing on capitalism’s sacred cow. One way to avoid theft, both by making content less accessible and by invoking a different jurisdiction, is to charge for your work. It has become trivially easy to publish in some way. You can hardly follow anyone on Twitter without hearing about how you should buy their new book/podcast/etc. As a tactic, the buying and selling of self-published work risks falling into the insipid sloganeering of localism. I wish I didn’t have this hang-up, because I think local production/consumption is cool and intelligent, but…fuck. People make it sound so fucking twee. I’m trying to buy some fucking tasty vegetables, not get an overpriced high five. So I’m not completely comfortable with this trite solution. Plus, two points: more fervent consumerism is never the way out of a structural problem of capitalism, and even if we’re all buying and selling each other books it’s not a zero-sum economy. The intermediary platforms—Amazon for Kindle publishing, various companies for print—make it less than zero-sum and centralize the accumulation of wealth in this model. (The truly zero-sum local economy is Utopia, btw).
That said, I do think more people should more often charge for their work. This puts the onus on creators to make something great. Figure out what you have done that is good. Set it aside. Polish it. Sell it. Putting out jokes on Twitter is like leaving your bike unlocked while you run into 7-11. The better the bike, the more likely it’s going to get stolen. (After my nice bike got stolen in college I got a free one from a trash pile. Never locked it up at home or anywhere else and it never got stolen. Eventually I just abandoned it on campus for someone else to take). Twitter and social media in general feed on a lack of impulse control. Learn to be better at controlling yourself. Be the kid that waits five minutes and gets two cookies instead of the little shit that eats it right away and doesn’t understand why the cookies are gone. The point isn’t to buy your way out of the problem but to buy fewer and better things.
In addition, this values creative labor at some non-zero number and makes the case for harm when it is misappropriated. Dan Ariely argues that Apple fucked up when they let free apps into the App Store because it devalues everything else—which I think is empirically true—and aligns market forces behind shitty free games rather than expensive great games. I say this as someone who makes those free games (although I think the games I’ve worked on are much better and more original than the rest of the crop). I have played most of the games that have appeared on the top 100 grossing chart of the last year. Most of them are awful. But if you make your game free to play you will get, on average, 7x the downloads compared to a paid app version of your game. Consider that the advertising cost of an install is well north of one dollar and you see why it is important to reduce the cost of acquisition by making games free. If you use a service model rather than a commodity model for monetization, you can also make a variable (and generally much larger) amount of money. (The difference between a service model and a commodity model used to be the difference between a human and an object. You paid humans for services, you bought objects. Now it is much simpler: a service model means you pay many times, a commodity model means you pay one time.)
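To make that arithmetic concrete, here is a back-of-envelope sketch in Python. Only the 7x download multiplier and the “well north of one dollar” install cost come from the paragraph above; the budget, the price, and the per-player revenue are hypothetical placeholders, not numbers from any real game.

# Back-of-envelope: why free-to-play wins on acquisition cost.
# The 7x multiplier and the >$1 install cost come from the text above;
# every other number is a hypothetical placeholder.

AD_BUDGET = 10_000.00    # hypothetical marketing spend
COST_PER_INSTALL = 1.20  # "well north of one dollar"
FREE_MULTIPLIER = 7      # free apps average ~7x the downloads of paid ones
PRICE_PAID = 2.99        # hypothetical one-time price (commodity model)
ARPU_F2P = 0.80          # hypothetical average lifetime spend per free player (service model)

paid_installs = AD_BUDGET / COST_PER_INSTALL
free_installs = paid_installs * FREE_MULTIPLIER  # same spend, 7x the players

paid_revenue = paid_installs * PRICE_PAID  # everyone pays once
free_revenue = free_installs * ARPU_F2P    # most pay nothing, a few pay a lot

print(f"paid: {paid_installs:8,.0f} installs -> ${paid_revenue:10,.2f}")
print(f"free: {free_installs:8,.0f} installs -> ${free_revenue:10,.2f}")
print(f"free breaks even at ARPU = ${PRICE_PAID / FREE_MULTIPLIER:.2f}")

Under these made-up numbers the free version only needs to average about forty-three cents per player to match the paid version, which is why even mediocre free-to-play monetization tends to beat a price tag.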
By allowing free games, Apple has increased the volume of money moving through the App Store (which is their business interest) and permanently damaged the quality of games to be published on their platform. That’s what caused the first video game crash, but whatever. That can’t happen again. Oh, unless you count Facebook web games, which are crashing because they are free, expensive to market, and awful to play. But surely that can’t happen a third time under very similar platform conditions.
What I was trying to say is that establishing a norm of paying some amount for something you like leads to better off-the-shelf products. And I think it is ultimately more economically egalitarian than the current systems. On Twitter, many people write tweets for free while a very few republish them for profit. On the App Store, a very few people spend large sums to support these games while most people get an unpleasant experience (and the talent goes to making more unpleasant games). Plus, you can always steal shit. Right now I would rather pay two or three bucks for a book from the Kindle store because it is convenient for me and I have a job. Five years ago I was passing around third-generation scans of Zizek and Derrida because I couldn’t afford to buy anything. Don’t make anything secure. Just make it a little harder for the wrong people to get.
Oh, an aside: one time I tried to use one of the tricks Dan Ariely describes for making people buy more expensive stuff. Basically you make one thing stupid expensive so something similar but less costly looks very attractive. The consumer feels smart for getting a good deal and buys the thing you wanted them to buy all along! We did an A/B test on a game I was producing with maybe a quarter of a million people playing a day and the differences were statistically insignificant. Cool idea, I don’t doubt that it works in other contexts or that the concept is a good one, but slapping a TED talk into your product does not guarantee results. FYI.
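For anyone curious what “statistically insignificant” means in that context, here is a minimal sketch of how such a test gets read, assuming the metric was purchase conversion and using a chi-squared test of independence. The counts below are invented for illustration; they are not data from the test I described.

# Reading a decoy-pricing A/B test as a chi-squared test of independence.
# All counts are invented for illustration, not real data.
from scipy.stats import chi2_contingency

# Rows: control (no decoy) vs. variant (overpriced decoy next to the target item).
# Columns: players who bought the target item vs. players who did not.
observed = [
    [1180, 118820],  # control: ~0.98% conversion (hypothetical)
    [1215, 118785],  # variant: ~1.01% conversion (hypothetical)
]

chi2, p_value, dof, expected = chi2_contingency(observed)
print(f"chi2 = {chi2:.2f}, p = {p_value:.3f}")

# A p-value well above 0.05 means we can't reject "the decoy did nothing,"
# which is the null result described above.
if p_value > 0.05:
    print("No detectable decoy effect at this sample size.")

With samples in line with a quarter-million daily players and a difference that small, the test comes back with a p-value around 0.5: exactly the kind of null result that no amount of TED-talk confidence can argue with.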
All of the above require that you do some extra work to avoid getting ripped off. That sucks. The alternative, which is more or less the status quo, is to wage war on people who steal your shit or otherwise fuck with your crew. Reddit was outed as a child porn swap meet because a bunch of SA goons took it on themselves to document and publicize what was going on. The pilfered Guy Fieri menu became international news (that’s fucked up to say but I have seen it on U.S., U.K., and Canadian news sites) because its partisans made the effort to expose the stolen lines. That dude running one of the really racist Twitter accounts (I think it was Ghetto Hikes?) got freaked out when his real identity was published, again thanks to good people like @virgiltexas doing the work of tracking him down.
The limit of this approach is that much of what is gross about the social economy is not illegal. With some exceptions, like really racist shit, these people aren’t even ashamed of what they’re doing. Hell, they’re the smart ones! Getting rich without doing any work—isn’t that what capitalism is for? And it’s not easy to get the casual public (the people who went bananas for the fake fake Guy Fieri menu) worked up about what they see as victimless crimes. It would have been absurdly easy to take the premise of the Guy Fieri menu, write all new jokes, and avoid any controversy. Even with half the words taken from other sources, there were plenty of brave souls ready to step up and say that the plagiarists had made it better. While it will always be fun and honorable to attack assholes online, it will remain a guerrilla war. The biggest adversaries are immune to these tactics because they are just “original” enough that all you can say is that they suck.
In a lecture dear to my heart, Gilles Deleuze differentiates morality from ethics. “Spinoza doesn’t make up a morality, for a very simple reason: he never asks what we must do, he always asks what we are capable of, what’s in our power, ethics is a problem of power, never a problem of duty.”
Like Deleuze’s Spinoza, I try to keep myself on the side of ethics. I don’t want to look down on people who do not share my preferences. I want to be, as Spinoza says, more powerful. Power, in Spinoza’s lexicon, is the ability to effect change. I want that. Here are the four maxims of my work ethic, which I have boiled down to a very small number of words so that I can remember them better.
1. Do good work. If I am doing something that I know is subpar, I keep working until I think it is good.
2. Create an environment for those around me to do good work. Sometimes that means leaving people alone. Mostly it means interacting with them thoughtfully.
3. Demand that people around me do good work. Not in a dickish way. Just in a “hey, this could be better because you are great and this isn’t yet” way.
4. Credit good work. It feels good to tell people they have done good work and it contributes to a virtuous cycle of them doing the same for you.
That’s it. I do these things because they produce the best outcomes for me, whether it’s my team at work or some bulldad I’m doing for fun outside of it. Feel free to borrow these ideas or not. There is no moral mandate behind them, just usefulness for making the world more like how you think it should be.
In real life I masquerade as a product manager at a game company. On Twitter, where I do most of my interneting, I am @weedguy420boner. One thing I love about Twitter is that I get to see great jokes before they become popular, which is to say, before someone other than the original author repackages and distributes them in a more consumer-friendly way. There is an ethical point to be made about joke stealing, and a nuanced counterpoint about the difficulties of assigning authorship, but I don’t want to talk about either of those. I’m not interested in the normative aspects of plagiarism. Rather, what I find significant is the labor model created by the interactions of these two groups, made possible by social networks.
The latest example is the fake Guy Fieri menu at guysamericankitchenandbar.com. I thought it was really funny when @a_girl_irl and @dinkmagic did it a few months ago. I’m not surprised lots of people find it funny now—it is funny. But the version being circulated is also well within the criteria for what would be considered plagiarism by any university or publication. It doesn’t just borrow the premise; it lifts whole lines verbatim. That’s plagiarism.
But by what measure does this matter? The original authors weren’t profiting from it. They did it because it was funny. The person(s) behind the knockoff don’t really seem to be profiting either, though it wouldn’t surprise me to see them incorporate ads or sell merch or try to parlay it into future work that does pay. Pursued with more vigor, however, this strategy of siphoning off jokes from Twitter does pay. According to this jack off, it’s big business. By purveying a combination of recycled jokes and awful sexist drivel, he has been able to build a profitable little empire of awful accounts with lots of followers. And he’s not alone: such “parody” accounts are so numerous they now compete like weeds fighting for a spot of sunlight in the wretched filth of their rotting comrades. They consistently take jokes created by funny humans—this step is referred to as “aggregating”—repeat them more or less verbatim, and build up a base of uncritical consumers from whom they can generate ad impressions.
Again, this is gross but not interesting; pandering is nothing new. What is new is the role of social networks, and particularly Twitter, in combining two discrete media systems in one technological environment: one system that creates free, public, top-notch content, and another that distributes it away from those creators without their involvement, without even the friction of a handoff. It is as if workers in China were voluntarily making iPhones in their free time and leaving them in predictable locations where another group, who just stumbled upon these piles of phones, would come pick them up and take them to market. This is the economy of creative production on Twitter.
After pickup, the goods are resold on Twitter, Facebook, Reddit, and so on down the line of commentators and aggregators, each adding some local flavor or even, in some cases, additional value. I would like to think that what I am doing here is synthesis of the sort you can see on the better sites—making something new out of the existing parts—rather than the simple concatenation you see on a for-profit Twitter account.
This economy is possible because social networks create the conditions for people to be very inventive. That’s just what social interaction is, for cool people: fun. Being funny and interesting and feeding off of each other’s talent for doing so. Twitter makes it easy to do that.
It is also very publicly transparent. Facebook is not, and it is also a shitty, boring, claustrophobic dollhouse of a social network. Twitter is alive because it is possible to be—nay, almost impossible not to be—exposed to people and ideas that are wonderfully novel to you. This is how it generates creative value in a way that can be captured by third parties for recirculation: a group of people doing what is most interesting about humans in a system where it is very easy for others to cherry-pick from their discourse. I believe this mode of creative production is unique to social networks. Or, since mediated plagiarism has always been possible, perhaps it is just the speed with which it happens that breaks the quantity-to-quality sound barrier into a new modality of production.
Also of note is that the value being created isn’t being captured by the system itself. Any monetary value created for Twitter will depend on how well it can make money off of user activity; until it solves that problem, it doesn’t really matter how good the jokes are. By contrast, the quality of Facebook games matters a great deal to Facebook: they take 30% of transactions with third-party developers, and ~18% of Facebook’s revenue comes from games (more if you include impressions served on game pages; way more if you include the ad spend game developers pay back to Facebook’s protection racket). Facebook is still making a shitload of money from serving ads, but the games money is very, very important to them. Maybe Twitter has a way to appropriately tax the bullshit accounts that reap the profit from the creative value Twitter has midwifed into the public. I don’t think it does (yet). To again put this in juxtaposition with classical paradigms of production, it is like building the factory for someone else’s workers. You can make money if you still control the vending machines, but the workers—you, the consumer of social media—better be buying a lot of Snickers.
Actually, it’s even more than that, because Twitter also provides the distribution channel. The factory and the store front are sutured together, yet operate virtually in parallel, like some kind of Looney Tunes condensation of the manufacturing process.
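Back to that 30% for a second. To see what the platform actually keeps once developers start paying ad money back in, here is a back-of-envelope sketch; the 30% cut is the figure above, while the gross revenue and the ad-spend ratio are invented.

```python
# What the platform keeps from a game's gross, back-of-envelope.
# The 30% transaction cut is cited above; the gross revenue and the
# share of developer net paid back as advertising are invented.

gross_iap = 1_000_000      # player spending in the game (assumed)
platform_cut = 0.30        # Facebook's share of each transaction
ad_spend_ratio = 0.20      # share of developer net spent back on FB ads (assumed)

from_transactions = gross_iap * platform_cut
developer_net = gross_iap - from_transactions
from_ads = developer_net * ad_spend_ratio

effective_take = (from_transactions + from_ads) / gross_iap
print(f"platform keeps ${from_transactions + from_ads:,.0f} of "
      f"${gross_iap:,.0f} ({effective_take:.0%})")  # 30% becomes ~44%
```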
What I’m talking about is not crowdsourcing. Crowdsourcing is when a bunch of people can perform a task because it is menial and/or can be made more accurate—more valuable—by increasing the sample size. I am talking about excellence, which happens through the interaction of talented people (and there are many talented people in the world and on Twitter; too many for me to even know who they all are, much less follow them). It is fair to say, however, that Twitter provides a tool for crowdsourcing the determination of a joke’s popularity. There’s a nice circularity to it, one that is important to commercial but not, for lack of a better word, artistic success: if a lot of people like a joke it is good, because then a lot of people will like it. This is a good test if you cannot tell what is funny on your own. But you can look at the numbers, copy/paste what seems to work, and post it somewhere that you can start collecting ad revenue. Hey: you are now a social entrepreneur!
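The aggregator’s whole editorial process, such as it is, fits in a few lines. A sketch, with invented tweets and counts:

```python
# The "aggregating" workflow, sketched. Tweets and counts are invented.
# Note that nothing about quality enters the logic, only popularity.

observed = [
    {"text": "joke A", "retweets": 12},
    {"text": "joke B", "retweets": 4_800},
    {"text": "joke C", "retweets": 350},
]

REPOST_THRESHOLD = 1_000  # "a lot of people liked it, so it must be good"

worth_reposting = sorted(
    (tweet for tweet in observed if tweet["retweets"] >= REPOST_THRESHOLD),
    key=lambda tweet: tweet["retweets"],
    reverse=True,
)

for tweet in worth_reposting:
    print(tweet["text"])  # copy, paste, serve ads against it
```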