If there are such things as wizards, this must be the apocalypse

[gif: Hunger Games, "the odds are never in your favor"]

When Harry Potter and the Sorcerer’s Stone hit movie theaters in 2001, it was abundantly clear that more was to come. Never mind that aficionados of the book series would only be satiated by a film series covering the whole of J.K. Rowling’s wizardry corpus. The simple fact that the first movie sucked, relative to just about every other blockbuster ever, left much to be desired. It also paved a long runway for multiple directors to taxi up their aircraft, shove the throttle forward and throw viewers so far back into their seats that passing out would be preferable to the mounting pain felt along the journey.

This is nothing against Chris Columbus, who directed and executive produced both Sorcerer’s Stone and Chamber of Secrets, and produced Prisoner of Azkaban with director Alfonso Cuarón. Had Columbus not been director and executive producer of the first film in the Potter franchise, the entire trajectory might have been all sorts of different. That’s because, to win the job, Columbus re-wrote the entire script, which he himself admits was already brilliant.

If the beginning of the Potter saga established a lengthy runway for one series, it arguably forced multiple offshoots for those writers, directors and producers hoping to capitalize on the emerging young-adult fiction boom. The first such manifestation came through the Twilight series, though its dark and strangely animalistic, sexual niche was off-putting to a swath of the popular audience. If IMDb ratings mean anything, the “sucky” 7.3 earned by Sorcerer’s Stone vastly outpaces the 5.2 earned by Twilight’s 2008 debut. Sorcerer’s also raked in some $20 million more during its 2001 US opening weekend than Twilight did in 2008, even though both made it to theaters in time for Thanksgiving and Twilight’s revenues were ultimately inflated. By the 2011 conclusion of Potter, the franchise had amassed a fan base willing to shell out nearly $170 million to see the show during its opening summer weekend.

This weekend, Catching Fire, the adaptation of the second book in Suzanne Collins’ Hunger Games trilogy, earned $161,250,000, making it the fourth most successful premiere ever, according to one reporter at the Wall Street Journal, and the second best of the year. The Hunger Games not only exists as the most smoothly paved runway out of the Potter port, but also the most obviously well placed. Believe it or not, this is shaping up to be a functional airport.

The difference between fiction and non-fiction lies in what each excels at providing readers once their stories have concluded. For avid non-fiction readers, especially those of the scholarly variety, there are clear paths for engagement beyond the content. One can construct paths of further inquiry, whether due to lingering questions, gaps in the existing study, or a provocative hypothesis prompted by a particularly interesting train of thought. This can lead readers to write their own response or simply seek other points of view from competing voices. In essence, the point of non-fiction is to inspire further investigation and work.

With fiction, however, the motivation to move beyond a logical end is prompted less by rationality than by emotion. There is no inquiry to be made because no additional material exists. Professional reviews of the content can never satisfy because they remind us that there is something outside the world created via the fiction, namely our world and its need to respond to any work in the same way we might for non-fiction. What readers needed when the Harry Potter books concluded wasn’t a forum to talk about Potter, but a way to keep living in his world. The films put into color all the imagination that had been instilled in devoted readers for years. As much as one can live within a fictional world, Harry Potter readers did. Regardless of the films’ effectiveness at capturing an accurate representation of the collective mindset, they would still have served their cultural purpose – extending the relationship between the reader-viewer and the external reality.

Hunger Games is a poorly written trilogy with numerous plot holes. It has a horrible ending. From a literary point of view, there is no need to read Hunger Games. In fact, had Hunger Games come before Potter, I’d venture that it wouldn’t have moved the popular needle to one-tenth the magnitude it has. The one major plot element Potter had in its corner was a clear and relatable trajectory. Everyone (well, a lot of people) goes to school. The entire concept of school is built on the framework of personal progression toward an ultimate end. By embedding the entire story within the reasonable bounds of the education system, Rowling took advantage of a collective consciousness that needed little help imagining what life might be like if school were somehow different than it actually is. And like other fifth graders, Potter and his pipsqueak first-year comrades needed time to develop from children into adults. This is why so many fans were happy to give the first film – if not the first three – a pass. The chief hurdle in adoption was the acting. But at that point, who cares? Everyone knows another school year is just around the corner. Maybe it’ll be just a bit better than the previous one.

By contrast, Hunger Games hit the market with fully formed characters. Peeta and Katniss had grown up and somehow survived into adolescence, but not without their own scars. In the debut film, viewers are introduced to one of the first encounters between the two – one in which Katniss is apparently homeless and Peeta has enough burnt bread to toss into the pig pen in which she’s sought temporary refuge. There are two faults with this that any post-apocalyptic novelist and producer should have considered. First, Katniss has a home. We already know this. So why is she slogging through the mud outside Peeta’s bakery shack? Anyone with enough sense of self-preservation would understand that rain, mud and pigs breed disease. Therefore, swimming with them is not in your best interest. Second, in what post-apocalyptic world would any family forgo bread simply because it has been burnt? And, on an even more basic principle, what kind of post-apocalyptic baker burns the loaves of bread he’s been raised to bake?

Audiences are supposed to believe that, despite their flaws, Collins’ protagonists can and do emerge victorious from the world’s most dangerous game. That Harry Potter and his friends had been doing this since they were eleven years old is the only reason Hunger Games is at all believable. What the Potter saga did for its audience was to create imaginative minds convinced that fantasy is possible and victory is achievable. Screw football. I’m going back to college to play quidditch.

[gif making fun of college-age quidditch tournaments]

Speaking of both football and quidditch: Greg Gumbel, everyone.


If you thought for a second that Harry Potter isn’t the cultural phenomenon it’s cracked up to be, explain to me why America’s favorite CBS football TV personality showed up on the Early Show to report on quidditch for muggles. Is this really what the entire sports world looks like without football?

Reality aside, Potter convinced a global audience that it’s okay to feel real emotion for characters, worlds and problems that don’t even exist. So, when the whole thing came crashing down as the credits rolled on Deathly Hallows, Part 2 in 2011, an entire segment of America was reminded that their belief in victory against all odds was based in young-adult fiction and that humans can’t really become wizards, even though the most sporting among us can quite obviously try.

Now, no one really believes that the human race as it exists today could ever evolve to include wizards, witches and an entire educational system devoted to their magical advancement. While the racist undertones pointed out by several academic analysts do reflect historical reality to a point – and, alongside modern post-apocalyptic authors like Veronica Roth, even predict a future in which social factions determine the pecking order – there is only so much magic a generation can believe in before it needs a fantastical world rooted in reality. And so, at the height of the fantasy realism championed by all things Potter, enter Suzanne Collins and Hunger Games.

A world in dire need of a replacement for real life finally had its next story. One that didn’t begin the journey with elementary school children, but picked up the narrative where it left off, setting of-age, average people in a world that hearkens back to dystopian novels of old while basing itself in a plausible reality. Who cares that the books are terrible? The entire thing works simply because it exists, and it makes sense for those who needed it to.

Like Potter, Hunger Games did have some stage-setting to do before audiences could fully buy in. Sorcerer’s helped create a reality in which multi-film series are required to accomplish this up front to make room for what’s to come later. The growing pains of transitioning young-adult literature into a dark film for a mass audience are a necessary hurdle, but one that Hunger Games cleared thanks to the fantastical framework through which the audience already approached the film.

After the awkward millennial love boat had been established and audiences had gotten over the fact that the film moved entirely too fast by creating an incredible bad lip reading of the whole thing, stuff quickly fell into place.

And, before we all knew it, last weekend arrived and Catching Fire was making so much money that theaters opened at ungodly hours of the morning just to funnel through people (like me) who like to watch movies alone on opening weekend, at like 9 am. Hunger Games is a hit. Unquestionably, but remarkably so given that Jennifer Lawrence has taken roles in other films, something the Potter actors could never have conceived of and still struggle to overcome today.

Before we draw this comparison to a close and dive into just why Catching Fire is one of the year’s best films, it’s worth mentioning that the finale of this saga will be split – like Potter – into two separate films. This, I think, could become a norm among films of the genre, meaning popular book-based dramas. Regardless of which Potter film you watched, there was always the sense that the films moved too quickly, especially after you’d seen how elegantly the seventh book was split in two, though even that left gaps. Critiques of this nature have nothing to do with neglecting to account for specific portions of the related books. It all comes back to what the audience feels and needs from these films. The appetite will always be for more. Fleshing out every possible detail gives those viewers reason to believe in the worlds they inhabit through fiction while helping studios pad their pockets.

So, why was Catching Fire spectacular?

Jennifer Lawrence was in it

JLaw, as a couple of my favorite Grantland contributors call her, makes this movie. There’s a reason her face fills the first and final frames of this film. The choice to bookend with one of the generation’s most emotionally savvy and versatile actors was production gold. In the very first scene, we can assume Katniss has hit rock bottom in her struggle with games-induced PTSD. This assumption is incorrect, and director Francis Lawrence was challenged to create a film in which the weight of her depression is abundantly felt while giving the audience little reason to believe it will improve. Rock bottom is still a way off.

Likewise, Lawrence (Jennifer) was challenged as an actor to embody and even represent one emotional state while displaying another. At the deepest levels, Katniss is two characters. First, and likely most neglected by the audience, she is a teenager who, through an act of familial preservation and self-sacrifice, was required to kill 22 people, most of them near her age. She did not enjoy it, and one could argue that her willingness to kill herself was less an act of direct defiance than an effort to obtain psychological release from the realization that her world was far more hellish than she’d been raised to believe.

Coincidentally, depression plays a starring role for Lawrence’s character in Silver Linings Playbook, too.

The depressed teen wants nothing to do with the games or anyone affiliated with them, hence her almost automatic agreement with President Snow (played by Donald Sutherland) when he arrives at her home to threaten her and her family should she fail to convince Panem of the lie from one year earlier. The teenage Katniss wants nothing to do with revolution or rebellion, partly because she has no framework with which to interpret or understand that the pending revolution was her doing in the first place.

The second Katniss – the viewer-friendly version – slowly internalizes the collective mindset adopted by the districts, feared by the government and stupidly neglected by the Capitol populace. This comes with help from Snow, her sister, the district killings during the tour and Gale’s beating, and it reaches a tipping point when Plutarch (played by Philip Seymour Hoffman, more to come) is discovered to be a double agent working within the government on behalf of the rebellion. Throughout the film, her fear and depression are supplanted, though never fully replaced, by an apathy whose end can only be realized when the government is placed in a position to fear its citizens the way its citizens have feared the Capitol.

[image: Katniss and Prim]

Lawrence was perfectly positioned to master such a complex character and convince audiences that it mattered that she, not someone else, played Katniss. There is little doubt that Divergent will fail to match the gravity communicated by Lawrence’s Katniss, if for no other reason than the person playing the part of Tris.

Oh, and so was Philip Seymour Hoffman

Most people know a smart ass when they’ve seen one. Do you remember the trailer for Mission: Impossible III? It doesn’t matter that Philip Seymour Hoffman played Truman Capote in a film about the man or that his filmography includes some of the most decorated work of the last 20 years, not the least of these being The Big Lebowski. I will always remember Hoffman for using his cool, smart-ass attitude to piss off and freak out Ethan Hunt. I was convinced that he could get away with whatever plan he’d concocted.

When I finally got around to seeing Moneyball, I found Hoffman once again playing the only character he could possibly play – a belligerent manager who showed little regard for anything anyone said – and I knew he had, in fact, gotten away with killing Hunt, torturing his girlfriend and ending the entire Mission: Impossible saga.

You could analyze Catching Fire and accurately dispute this claim, but Hoffman appears to smile through much of the film. His coy and convincing demeanor does more than lull Snow into believing he has an evil ally; it almost ensures the uneducated viewer is none the wiser to his true status as a double agent. With Hoffman around, the pending rebellion may actually have a shot.

The costume designer had enough sense to put a crotch pad over those peacekeeper uniforms

[photo: Hunger Games peacekeepers]

Much has been made of Catching Fire fashion, with no shortage of commentary on Lawrence’s attire. And this makes perfect sense for a film whose plot relies on the fact that the best fashion designer in the most outrageously fashionable city ever uses Lawrence’s character as his muse. But, for at least 10 minutes after swaths of faceless Capitol peacekeepers descended on the districts, my imagination was clouded by the thought that Lionsgate had run out of money and all costumes thereafter would be made of the kind of one-size-fits-all Spandex that leaves as little to the imagination as Nacho Libre’s sweats.

[gif: Jack Black running in Nacho Libre]

Thankfully, and following the path of better films before it, the mass-produced peacekeeper suits did include a handy crotch pad, though one that reveals just how vain Capitol fashion really is. In what world would low-grade Under Armour protect anyone from either (a) revolting masses of black market-dealing peasants who exist in a persistent state of mental preparation for the day when they will need to kill someone in the games or (b) a Panem winter? Peacekeeper suits are a fashion statement as much as Effie’s hair is, although I think we can assume some level of fire protection given Cinna’s history with flames and the too-close-for-comfort roasting of the black markets.

[gif: Star Wars stormtroopers dancing]

Rebellion and the David principle

Revolution in and of itself is a universally salient storyline that makes for great films, at least in America, where revolution is still relatively new compared to civilization as a whole. Ask dinner party guests about their favorite history subject, and 7 of 10 are likely to say, “The American Revolution.” Revolution is also the central inspiration behind why we root for underdogs. In a space where little is at stake, should the lesser have an opportunity to beat the greater, society often rallies behind the competitor for whom the odds are not in favor. The greatest single line of this franchise ironically promises favorable odds to those who have no chance of winning, giving the marginalized very little reason to fight.

[gif: Hunger Games, "the odds are never in your favor"]

Grantland’s NFL writer Bill Barnwell describes how this concept plays out in football: “as a huge underdog, [it] would be smart to pursue ‘David’ strategies, opportunities that involve taking on some risk to increase the slim likelihood of actually winning the game.” Underdogs, already at a disadvantage, have the opportunity to take substantial risks to increase their odds of winning. It is, in fact, both necessary and predetermined that risks be taken in these situations, barring a colossal meltdown by the proverbial “Goliath.”
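To see why risk helps the underdog, here is a minimal Monte Carlo sketch (the point totals and spreads below are invented for illustration, not taken from Barnwell’s piece): holding the weaker team’s average outcome fixed, a higher-variance style of play raises its chance of finishing ahead of a stronger favorite.

```python
import random

# Toy illustration of "David" strategies: the underdog's average score stays the
# same, but riskier play widens the spread of its outcomes, which is exactly what
# a team with a lower mean needs to sneak past a favorite.
def win_probability(underdog_sd, trials=100_000):
    wins = 0
    for _ in range(trials):
        favorite = random.gauss(27, 7)            # favorite: higher average score
        underdog = random.gauss(20, underdog_sd)  # underdog: same average either way
        if underdog > favorite:
            wins += 1
    return wins / trials

print(win_probability(underdog_sd=7))   # conservative play: roughly a 24% chance
print(win_probability(underdog_sd=14))  # risky "David" play: roughly a 33% chance
```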

To successfully execute a Davidic revolution, the underdog has to take risks that buck convention and go unaccounted for by the more conservative, favored party. Katniss’ decision to commit suicide at the end of the first film not only helped her win the game, but exposed a universal weakness in the “fragile” government system. Her actions are carried forward into Catching Fire, where she manages to unwittingly find favor for her cause, not only among the outlying districts, but among Capitol residents as well.

And this is where the story truly begins to get good. Catching Fire concludes with Katniss experiencing the full consequences of her actions, though not through punishment. She’s told, in the company of Hoffman’s character, that the revolution is beginning and has been well underway, brewing quietly under the government’s nose, since before the games even began.

In this way, the film masterfully accomplishes what the book never could because it was written from the perspective of a teenage girl. Rather than tease the reader with a weakly developed, to-be-expected love triangle, the film elevates the essential themes necessary to move the narrative forward just far enough to frustrate the viewer beyond the point where she can logically return to a normal reality. Winning in this space is as much about delivering on the reality championed by viewers as it is about subtly reminding them that the reality is still fiction. The dystopia is exciting to imagine, even embody, but far more difficult to realistically comprehend. America, at least, is likely far from letting post-apocalyptic themes fade as a staple of our entertainment diet. We still have two years of Hunger Games to go and at least as many with Divergent, should it become even half the success. What comes next is most likely already in the works. We have much to look forward to.

Drawbacks?

Coldplay recorded the first track for the credits, and there’s a ridiculous animated Mockingjay pin immediately following the final cut. They were distracting, but I suppose they did the job of making me feel appropriately depressed walking out of the theater.

Why people hit other people: What the research can tell us and why it really doesn’t matter


via NPR News

Late last week, the Phoenix Coyotes’ (NHL) Raffi Torres was served a 25-game suspension for his “blindside hit” on Chicago’s Marian Hossa. The hit and suspension resulted in media critique of the game and its officials for ushering in “an out-for-blood playoffs.”

It is natural, then, that media looking for a good, smart story on human aggression would seek out researchers who have done work on the subject of hockey violence. Enter the September 2011 article in Social Psychological and Personality Science, “Can Uniform Color Color Aggression?”

It is important that all sports aficionados give thanks for this newly published article because, as its introductory paragraphs tell us, hardly any work has been done on the relationship between color and violence since the mid-70s and 80s. And at least some of that, unsurprisingly, dealt with race or skin color. Limited work has been conducted within the last few decades, however, on jersey color and perceived aggression, and that gap serves as the foundation for the paper.

The Findings and the Meaning

But what is the new paper actually saying about violence in hockey, and what does it mean for hockey fans and the athletes? The authors’ main finding is that penalty minutes for teams wearing black jerseys were on average 1.73 minutes per game higher than for teams not wearing black. To be more precise, this average both compares teams that never wore a black jersey (the majority of the sample, to be sure) with teams that did, and shows how teams that switched between black and non-black jerseys compare with themselves. Regardless of how you interpret the methodology, with a minor penalty worth two minutes, 1.73 extra minutes works out to less than one additional penalty per game. However, when the authors analyzed only the teams that switched between black and non-black jerseys, the effect became less significant.

Other results, including the effect of white jerseys on penalty minutes, make this study a rich bit of research for sport psychologists. But while the NPR report tries to make the connection between the research and the aforementioned violent NHL playoffs, its attempt is somewhat muted by the fact that, even after reading through the article, we still do not know what to make of the data.

And this brings into question the overall cultural significance of scientific research. While this may be the perfect moment for a study like this – i.e., the overt violence taking place during the current playoffs – is it necessary? We may now have a passing understanding of what may or may not cause hockey players to be penalized more often, but we still do not know why Torres got himself suspended out of the playoffs. And, at some point next season, most hockey fans will forget he was even gone.

Well, maybe not Chicago fans.

How the American and British press view therapeutic cloning


[photo: 8/52 : Dolly]

Jensen, E. (2012). Scientific sensationalism in American and British press coverage of therapeutic cloning. Journalism & Mass Communication Quarterly, 89(1), 40-54.

As someone who has worked on quantitative analysis of attitudes toward science, I was certainly in the mood to read something at least a little more qualitative, if only slightly so. Jensen (2012), who has done a good deal of work on media and the more controversial elements of science, uses both content analysis and limited discourse analysis techniques in order to figure out what newspapers in both America and the UK are saying about therapeutic cloning.

As the author reminds readers, therapeutic cloning (which involves stem cells) is generally somewhat more accepted among citizens than is reproductive cloning. I think it may have something to do with the name: therapeutic. He also assumes, based on previous work, that Western society has, since second modernity (someone please remind me when first modernity officially began), been classified as a “risk society.” This essentially means that we see risks as extremely uncontrollable or uncertain. Therefore, new research like therapeutic cloning is expected to be viewed in a negative light, at least until it has been proven controllable or beneficial for society as a whole.

Jensen analyzes more than 5,100 news articles from elite sources (both newspapers and magazines) as well as some UK tabloids, and interviews a segment of journalists who have written on therapeutic cloning in the past. Articles range from 1997, when Dolly the cloned sheep was all the rage, through 2006.

What he finds is both simplistic and unimpressive. While the elite UK press was heavily in favor of the cloning research, the elite American press was careful to balance positive views of scientific progress against negative views of its risks. One article in particular led with the rather depressing metaphor of cloning as an “embryo farm” and concluded with stories of ill individuals begging for the research to continue so that they could be cured.

No one could blame the British tabloids for having a lot of fun with the issue and presenting “a confusing mishmash of pro- and anticloning hype” (p. 50).

All told, those of us in America already knew that we have no idea what we think about science. While focusing on what the elite press has to say about science is valuable, we cannot neglect the other media (the internet, entertainment, games, apps, etc.) through which people encounter the concept of science. Additionally, more research needs to be conducted on the ways in which prominent worldviews, such as religion, pacifism, patriotism, and other ideologies, shape views toward science. This is unlikely to be done through quantitative methods such as surveys, and content analyses of newspapers would yield limited results. Obviously, content analysis is itself a quantitative research method, but using it instead of surveys certainly reveals another piece of the puzzle with respect to attitudes toward science in our modern world.

 

Why younger people don’t read the newspaper

[image via Wikipedia: a girl holding The Washington Post]

Three things happened within the last week or so that led me to write this post.

  1. The New York Times website alerted me that I had only 4 articles left to read before being locked out behind the paywall for the rest of the month.
  2. The PIPA and SOPA bills attracted a lot of attention, if only momentarily, at the hands of Google, Wikipedia, and others advocating for internet freedom.
  3. My autumn 2011 copy of Journalism & Mass Communication Quarterly arrived in the mail and includes an article about why young people have stopped reading the newspaper.

The bulk of this post will focus on a review of that article in the spirit of looking at journalism and mass communication through eyes seeking its advancement rather than its demise. For all the scholars out there, this is the article to which I’m referring:

Zerba, A. (2011). Young adults’ reasons behind avoidances of daily print newspapers and their ideas for change. Journalism & Mass Communication Quarterly, 88(3), 597-614.

What initially sets this study apart from others is its qualitative nature. Zerba cites several studies that addressed the phenomenon of, as she terms them, “nonusers” from a uses and gratifications (U & G) approach. Because both U & G research and studies of newspaper use and nonuse have been rather prevalent since the advent of journalism research, the focus group method employed here is able to better unpack in detail the nonuse findings of earlier research. These include lack of time, the availability of other media choices, access, and possible bias, to name a few. As the author states, she aims “to get at the underlying meaning of nonuse” (p. 597).

Possibly of greater importance, focus groups enable the researcher to capture a small slice of one of the primary news-reading audiences highly affected by the current shift from print to digital. And in an age where the internet has become a subject of legal upheaval, a study of print media is extremely relevant.

Zerba puts forth two research questions:

  1. What are the most popular reasons for not using a daily newspaper?
  2. What would your ideal print newspaper include?

Focus groups were formed using research companies in three major cities – Chicago, San Antonio, and Dallas. Sixty-four adults between the ages of 18 and 29 were assembled in total (an error in San Antonio resulted in eight groups for evaluation, rather than the anticipated six). The discussions among the groups revealed results that were both predictable based on previous research and worthy of reflection with respect to today’s somewhat tumultuous media environment.

In sum, adults 29 and under viewed newspapers as

  1. Difficult and inconvenient to access and use
  2. Environmentally unfriendly
  3. Slow to report the news and redundant
  4. Difficult to multitask with
  5. A drag on one’s time
  6. Irrelevant to the age group
  7. Boring
  8. Biased

When given the opportunity to design their own daily print newspaper, respondents decided that the perfect paper would be:

  1. Brief, with only the facts
  2. Local in focus
  3. Inclusive of diverse perspectives
  4. Simply formatted and aesthetically pleasing, with color, pictures, and a table of contents
  5. Easily accessible
  6. Heavier on entertainment content
  7. Topically specialized
  8. Lighter on negative news, with summaries of leading news items
[image via Wikipedia: front page of the first issue of The New York ...]

In sum, young people want a newspaper that is easily accessible and up to date, aesthetically pleasing, and convenient for their multitasking, on-the-go lifestyles. In essence, young people want the internet.

In today’s world, to describe why people choose not to use a newspaper, even though they call themselves news aficionados, is to describe why people do use the internet. The real question for practitioners in the news world coming out of this study might not be, “How can we make our print product more accessible to the younger demographic of readers,” but, “How can we make our online product more accessible to everyone?”

Look at the New York Times online. It doesn’t look all that different from the company’s print product. Look at the Washington Post online. Stunning images, fonts, headlines, and masthead are all there. You can purchase a digital subscription and customize your own news on your mobile device.

The only way for a print enterprise to save its print product from extinction is to dramatically change the content offered through its online medium. This is why I am in favor of paywalls, especially when a particular newspaper, say a local one, does not have the human resources to produce new electronic content that will add value to its overall product.

I love this article because of its methodology and because of what it says about individuals my age. I like to live in my bubble and believe that most people are like me. But they are not. And my closed-minded perspective is selfish and unjustified. This does not, however, stop me from hoping that a more literate cohort of twentysomethings will rise up and work as hard to “save” journalism as the generations before us did to build it.

Bathroom secrets: Why we wash our hands


There’s a person standing next to you in the bathroom. This person is washing his or her hands and the chances that you are doing, have done, or are about to do the same are quite high. That is, unless the toilet you’re doing your business in is awkwardly close to the sinks.

In the event that you are both washing your hands, I’d like you to, in your mind, remove this extra person from the situation. In this new hypothetical bathroom, where you are the lone user, are you washing your hands? Or have you already left, tainted hands and all? Do you think that your bathroom mate, sans your presence, would have skipped out on washing?

Some communication researchers have done us all a favor by uncovering the oft-imagined, yet seldom discussed, loner-in-the-loo phenomenon. According to their study (they covertly observed some 600 college students in the bathroom), people who use the bathroom with one other person present are more likely to wash their hands than they would while using the bathroom alone or, for that matter, with a larger group of people.

They also found that this effect was more pronounced among women, who were more likely to feel the effects of that extra bathroom buddy and even spent more time washing their hands when someone else was around than men did.

The reason for this is almost too simple for research, which could be why the article is painfully short, terribly constructed, and discusses only 10 sources. Apparently, it’s standard practice to act “normally” around other people. But the freak flag flies free when we’re all alone. The research term for this is social impact theory, which basically states that humans act based on the perceived norms of the particular situation, regardless of personal preference, because the social impact of not adhering to cultural norms in the presence of others could be devastating.

Of course, if testing social impact theory is the research goal, the researcher must put him- or herself in some pretty awkward situations. In this paper, for example, researchers either feigned several non-bathroom tasks for the sake of observing bathroom-goers or hid in a stall. This hiding, of course, was the primary method employed to make bathroom users feel they were tinkling alone. Following this method through, it seems that in order to properly test social impact, you must position yourself as the secretive observer, often in some particularly intimate situations.

And here is the argument for an increased importance of research. If researchers could begin convincing the top government brass that they needed to “research” the many daily doings of influential individuals (and by “research” I mean “spy”), we’d have a whole lot fewer problems to talk about. Think of how the Clinton scandal would have played out if a university professor had been slouched behind the ficus tree, pen and paper in hand. We probably wouldn’t need presidential debates anymore because researchers could just tell us everything the candidates have been doing and spare us the senseless chiding that removes intelligent discourse from the airwaves.

Influential individuals aren’t the only ones who need observing, as this paper clearly shows. The next challenge for researchers is to confirm that people really do pick their noses, throw recyclables in the garbage, and cough without covering their mouths. Once we have done this, we will finally understand one another and the need for societal norms will disappear. By default, we will no longer have a definition for weird.

Everyone will be creepy, but no one will care.

Reference:

Henningsen, D. D., Henningsen, M. L. M., Braz, L., & Davies, E. (2011). Are we being watched? A test of hand washing in public restrooms as communication behavior. Human Communication, 14(1), 31-38.

On a somewhat serious note, I’m slightly concerned that this article has passed for communication scholarship. The researchers take the position that communication need not be word-based, interpersonal, or even purposeful, but can be goal-driven, activity-centric, and unintended:

In their daily routines, people engage in thousands of communication acts. Although at times these behaviors may be mindless (Langer, 1989) our communication behaviors tend to be goal driven (Kellerman, 1992). Indeed, even some behaviors that may be perceived as mindless or as non-communicative may actually be communication acts intended to achieve specific goals. We propose that hand washing behavior in public restrooms represents such an act.

Thus, the researchers’ conclusions – that what is being communicated by washing or not washing our hands is an effort to be perceived as normal – exist as just one of any number of readings of culture that could be applied to this situation. Unfortunately, we cannot be certain how specific individuals will react to differing situations; those data are not part of this research. How do we know, for example, that washing or not washing one’s hands is a function of social context rather than an individual’s preconceived notions about hand washing?

Additionally, the sources used to support this article, including those most explanatory of social impact theory, come from social and abnormal psychology research. The communication sources cited are from the same journal and reflect on theories that are now more than 20 years old.

A better test of social impact theory from a communication perspective might be to research communication among colleagues in a professional setting, especially in times of crisis or when a particularly thorny situation is of concern. See, for example, the two stories told in this past week’s This American Life. What could be gained by researchers who follow individuals who choose to speak or act in ways that run counter to the known culture of a business?

Then again, I have a sneaking suspicion that people aren’t so much interested in corruption or organizational culture as they are in how other people use the bathroom.

Why science could use more Facebook fans


Vandermoere, F., Blanchemanche, S., Bieberstein, A., Marette, S., & Roosen, J. (2011). The public understanding of nanotechnology in the food domain: The hidden role of views on science, technology, and nature. Public Understanding of Science, 20(2), 195-206.

A disclaimer first: the study that provides the basis for this post was based on a non-probability sample taken in France, so its generalizability to the United States can be questioned.

This recent article in the journal Public Understanding of Science raises many questions, some new and some very old, about nanotechnology as it is applied to food. As the authors discuss, although nanotechnology promises benefits that remain largely unknown to the public, its novel application to food has raised some serious concerns about the technology’s environmental and societal risks. Their study works at this intersection to determine at least some of the factors that cause people to weigh the risks and benefits of this application one way or another.

What they find is largely not surprising. First, they determine that knowledge level is not a significant predictor of attitude, thus putting to bed some of the arguments for a knowledge-opinion deficit model (where more knowledge = more positive opinions). Second, the researchers found that trust and attitudes about science, nature, and food were the variables most likely to predict support for nanotech in food.

Both of these findings have been noted in past research that has used the survey as the predominant methodology (see, for example, Allum et al., 2008, and Bauer, 2005). Allum et al. (2008) provide a stellar reflection on the state of the knowledge-opinion gap research. This article is one of many focusing on the broad topic of science attitudes and science knowledge and, more specifically, on the scientific application of nanotechnology. The move to make the study more specific, and even more so to focus on an application with direct societal consequences, is the strength of this study. Yet the fact that this research is just now becoming popular (the widely used General Social Survey finally added nanotechnology measures in 2008) is an unfortunate weakness of the broader science knowledge-attitude research agenda. If this study reveals anything at all, it is that people know almost nothing about nanotechnology (more than 80 percent of their sample responded this way). And with this being the case, researchers can’t really do much else aside from surveys. Yes, new technologies are a lot of fun to research, and for scientists, it’s very valuable to know just how much of your target audience is lacking science-specific knowledge. But until researchers can step back and embrace more qualitative techniques, survey results probably will not differ much in the coming years.

So, why is any of this important for society at large? First, studies like these continue to cement in place the idea that people do not regularly rely on factual knowledge to help them form opinions. The authors briefly mention heuristic processing but do not expound further, and their findings indicate that factors like trust in the government or in science as an institution are strong predictors of positive attitudes. So, while scientists get angry over an increasingly uninformed public, it might actually be in scientists’ best interests to stop inundating the public with information and start enacting overly hyped-up public relations campaigns.

YouTube, anyone?