A story of failure

Embed from Getty Images

Before I begin, I wanted to give a quick shout out to Getty Images, who has released a HUGE storehouse of images (including the one above) for public use via a direct embed code. Read this Nieman story for more. 

In his 2012 TEDGlobal talk “The Self-Organizing Computer Course,” Shimon Schocken says,

I’d like to say a few words about traditional college grading. I’m sick of it. We are obsessed with grades because we are obsessed with data, and yet grading takes away all the fun from failing, and a huge part of education is about failing. Courage, according to Churchill, is the ability to go from one defeat to another without losing enthusiasm. And [Joyce] said that mistakes are the portals of discovery. And yet we don’t tolerate mistakes, and we worship grades. So we collect your B pluses and your A minuses and we aggregate them into a number like 3.4, which is stamped on your forehead and sums up who you are. Well, in my opinion, we went too far with this nonsense, and grading became degrading. (Emphasis added).

Much has been said about failure during the past decade, which has been largely defined by a surge of innovation driven by design thinking. Read this guest post for Forbes written by Edward Hess of the Darden Graduate School of Business for more.

Debates about student loans, critiques of the traditional MBA’s efficacy and the increased availability of free education and continuing-learning opportunities have reinforced Schocken’s critique of the state of education as it applies to professional life.

I graduated college with a 3.93 (of 4.0) and completed my graduate work perfectly, according to the GPA standard. Prior to college, I was financially incentivized to make good grades: straight A’s earned me $100 per report card, or roughly $16 per week. Getting good grades has provided a primary way for me to earn rapid validation of my strengths, but has done very little to help me overcome my weaknesses. Anyone who has consistently received high marks or positive professional reviews has likely been frustrated with the entire corporate review process. How, we think, am I supposed to grow without receiving consistent critique of my capabilities?

Failure is a popular concept today because world-class innovators and entrepreneurs have reached their positions through both successes and failures. For me, failure, above and beyond success, has served to motivate me to grow professionally in ways I likely never would have without it.


My most memorable failure came during my freshman year in college. An aspiring biology student chasing pre-med validation, armed with a perfect score on the AP calculus exam, I entered college by enrolling in calculus. In that first week, I recognized how woefully under-prepared I was for the course. And, on the third exam, I scored a remarkable 56 percent, the worst score I’d ever received on any assignment, much less an exam.

That score placed me on a fence I hadn’t straddled before. If I didn’t perform spectacularly on the final exam, the likelihood of failing the entire course was very high. The worst part about this reality was that the third exam was largely representative of my growth throughout the course. Math builds upon itself, and I knew immediately that I would have a very, very difficult time turning things around by the final exam simply because I’d failed to establish any noticeable foundation.

By the time of the final, I was overwhelmed with doubt about my own learning and study habits. I questioned my ability to grow in knowledge and internalize the concepts necessary to pass the exam and, subsequently, the course. I knew others in the dorm who pulled all-nighters in advance of difficult exams, and I began the evening before my test considering doing the same. By 10 p.m., I was wiped out. I was having trouble completing practice problems and felt, to a large degree, that if I couldn’t get it now, the additional hours awake would, at best, deliver diminishing returns. And, since I’d never stayed up overnight before, I decided the best course was to sleep and ensure what I did know could be recalled the next morning.

On the day of the exam, it took me exactly two hours – the maximum allowable time – to finish. Since the majority of the class took the full time as well, my confidence remained low; I perceived everyone else in that class to have a greater mastery of the content.

In the days following, I managed to emotionally convince myself that re-taking calculus would be a good thing. And then the grades dropped. I’d received a C+ in the course. It was my worst grade ever, and although I hadn’t received the result of my exam, it didn’t take a mathematician to calculate that a C+ for the course required at least a B on the exam.

Today, I’ve still never pulled an all-nighter ahead of a major project, though I admit to some late nights. I’ve learned that my mind functions most clearly under conditions of high stress, but only if it is fresh. I’ve also learned that my ability to execute in these high-stress moments comes from a combination of consistent preparation and internal confidence in my own abilities.

To me, failure is only failure if you let it be so. People have the capability to learn from every experience. But, exponential growth happens when one learns through failure. We’re not called to seek out opportunities to fail, but to consider the ways in which we are challenging ourselves beyond our comfort zones day after day while making tangible connections between our diverse historical experiences and the road ahead.

How and why great brands practice consumer empathy


image via Fast Company

The crux of the argument that made Simon Sinek famous was that the best brands don’t primarily communicate the what of their business; they communicate the why. Well, today Fast Company broke down the “eight rock-star brands” that absolutely killed it in 2013, leaving readers with little doubt that these brands will do the same or better this year.

And why did they excel in 2013? According to the article,

It used to be that a successful brand conveyed authority and reliability (think General Motors or IBM); now it’s all about empathy. Technology used to attract us through specs and features; today it has to enable an experience. Even our perception of what makes a product valuable has shifted, to the point where a brand-new sound system or a dress like the one on the magazine cover is actually less desirable than something with a strong story attached.

To build a successful brand, those in charge must ensure customers perceive the brand as one that has their direct needs in mind. For the consumer, the brand must now answer, Why should I care? We’ll talk about six of the brands profiled in the article, and I suggest you click on over to Fast Co. and read the full article for yourself.

Nest & Uber. The first two brands from Fast Comany’s list didn’t necessarily fix problems. They fixed experiences, and ones that have been consistently broken for some time. Answer this question: What’s enjoyable about setting your thermostat or hailing a cab? Just thinking about doing either probably conjures feelings of doubt (Am I wasting money?) or dread (What will this ride be like?). Neither technology is particularly earth-shattering, and alternative abound in both spaces. But each was designed, not for the purpose of creating a technology that fits a practice, but with the goal of making the users’ experience better.

Birchbox & Quarterly.co seek to take back shopping for the shoppers. Why? Shoppers are tired of manufactured loyalty. Loyalty isn’t something that can be activated by getting a card and is not demonstrated by making a purchase. Loyalty is granted by the customer, and in a marketplace that is soulessly transactional, customers need to know there’s more happening between the retailer and themselves.

Moto X. I wrote about this over at businessngifs, and the phone baby of the Motorola and Google marriage couldn’t be more representative of Sinek’s argument. While Apple and Samsung, to name just two, race to increase the perceived feature benefits of their competing smartphones, Motorola went the opposite way with the Moto X, keeping its screen resolution to a simple 1280 x 720. While many have faulted this decision, “reviewers have been quick to point out that actually using the smartphone is a genuine pleasure, not because it revs faster, but because its interactions are so thoughtfully designed.” No one talks about screen resolution or processor speed at parties, but they will ask you to take a photo for them and expect you to know how to use their phone. “For consumers, these developments suggest that GHz, DPI, and other metrics are increasingly taking a back seat to user experience.”

If you read Fast Company, you know how much they like J. Crew (read their profile of Jenna Lyons, for one). So, it should come as no surprise that the specialty apparel retailer made this list. When you look at the story of J. Crew, you read in it an intense, even pervasive eye and ear for the consumer: forward-looking fashion that’s (mostly) accessible; customer service that prioritizes style over sale; a creative head (in Lyons) who appears to practice empathy across the board, from employee interaction to the company’s design.

As cutting-edge as these companies appear to be today, it’s anyone’s guess whether they’ll be around in 10, 20 or 50 years. That depends on several variables, not the least of which is the consumer’s sometimes wildly shifting sentiment. Still, companies like these could be set up for long-term success, as long as they keep practicing what got them here in the first place: empathy.

14 Ways to Be Awesome in 2014


Do you remember everything that happened in 2013? Neither do I, but a lot did happen. In the spirit of helping you set real-life goals for 2014, let’s all embrace a model that hopes to learn from the past and press on toward tomorrow.

And, let’s do it in .gif form.


Today’s news roundup – 12-13-13


From December 13, 2013, here are the top 5 things I remember reading today.

Grantland: Wesley Morris tells us why he doesn’t really like American Hustle or The Hobbit


image via the JFK Library YouTube channel, where Morris and Conan can be found talking about comedy


Nieman Journalism Lab: Ampp3d is a really amazing online destination

James Bond chart, image via Ampp3d


Gawker insinuates Uber could make us all fat and lazy human beings


image via TechCrunch, which has the story about Uber trying to woo Lyft ridesharers


Fast Company: It’s 5 Free App Friday

image via Fast Company


Buzzfeed reminds us of the real reason why you don’t want to piss off reporters

Today’s tech roundup – 12-12-13


Huffington Post: Yahoo mail users contemplate switching to Gmail


image via allthingsd


Mashable: Instagram announces Instagram Direct, acknowledges journalists’ affinity for physical things by sending out a bunch of photos printed on canvas

image via Adweek, who asks the tough questions about pervs and spammers


Adweek: Google jumps the gun, decides for an entire industry that ads that don’t get viewed are basically worthless

image via Google’s blog, which reminded me that Blogger is still a thing


Quartz: 5 reasons why Netflix is better than everyone else

image via Quartz


Buzzfeed: Citigroup releases 30+ page report (that you have to read) about why digital video is going to take over the world

I’m thinking this could become a semi-regular segment. If you have feedback, I’d love to hear it. Any opinions expressed in the headlines I’ve written are mine and clearly don’t represent any point of view expressed by the linked publishers.

If there are such things as wizards, this must be the apocalypse


When Harry Potter and the Sorcerer’s Stone hit movie theaters in 2001, it was abundantly clear that more was to come. Nevermind that aficionados of the book series would only be satiated by a film series covering the whole of J.K. Rowling’s wizardry corpus. The simple fact that the first movie sucked, relative to just about every other blockbuster hit of ever, left much to be desired. It also paved a long runway for multiple directors to taxi up their aircraft, shove the throttle forward and throw the viewers so far to the backs of their seats that passing out would be preferable to the increasing pain being felt along the journey.

This is nothing against Chris Columbus, who directed and executive produced both Sorcerer’s Stone and Chamber of Secrets, and produced Prisoner of Azkaban with director Alfonso Cuarón. Had Columbus not been director and executive producer of the first film in the Potter franchise, the entire trajectory might have been all sorts of different. That’s because, to win the job, Columbus re-wrote the entire script, which he himself admits was already brilliant.

If the beginning of the Potter saga established a lengthy runway for one series, it arguably forced multiple offshoots for those writers, directors and producers hoping to capitalize on the emerging young-adult fiction boom. The first such manifestation came through the Twilight series, though its dark and strangely animalistic, sexual niche was off-putting to a swath of the popular audience. If IMDB ratings mean anything, the “sucky” 7.3 earned by Sorcerer’s Stone vastly outpaces the 2008 debut of Twilight at 5.2. Sorcerer’s also raked in some $20 million more during its US opening weekend in 2001 than Twilight, even though both made it to the movies in time for Thanksgiving and Twilight’s revenues were ultimately inflated. By the 2011 conclusion of Potter, the franchise had amassed a fan base willing to shell out nearly $170 million to see the show during that first summer weekend.

This weekend, Catching Fire, the adaptation of the second book in Suzanne Collins’ Hunger Games trilogy, earned $161,250,000, making it the fourth most successful premiere ever, according to one reporter at the Wall Street Journal, and the second best of the year. The Hunger Games not only exists as the most smoothly paved extant runway to Potter port, but also the most obviously well placed. Believe it or not, this is shaping up to be a functional airport.

The difference between fiction and non-fiction lies in what each excels at providing readers once their stories have concluded. For avid non-fiction readers, especially those of the scholarly variety, there are clear paths for engagement beyond the content. One can construct paths of further inquiry, whether due to lingering questions, gaps in the existing study, or a provocative or inspiring hypothesis prompted by a particularly interesting train of thought. This can lead readers to write their own response or simply seek other points of view from competing voices. In essence, the point of non-fiction is to inspire further investigation and work.

With fiction, however, the motivation to move beyond a logical end is prompted less by rationality than by emotion. There is no inquiry to be made because no additional material exists. Professional reviews of the content can never satisfy because they remind us that there is something outside the world created via the fiction, namely our world and its need to respond to any work the way we might respond to non-fiction. What readers needed when the Harry Potter books concluded wasn’t a forum to talk about Potter, but a way to keep living in his world. The films put into color all the imagination that had been instilled in devoted readers for years. As much as one can live within a fictional world, Harry Potter readers did. Regardless of the films’ effectiveness at capturing an accurate representation of the collective mindset, they would still have served their cultural purpose – extending the relationship between the reader-viewer and the external reality.

Hunger Games is a poorly written trilogy with numerous plot holes. It has a horrible ending. From a literary point of view, there is no need to read Hunger Games. In fact, had Hunger Games come before Potter, I’d venture that it wouldn’t have moved the popular needle to one-tenth the magnitude it has. The one major plot element Potter had in its corner was a clear and relatable trajectory. Everyone – well, a lot of people – goes to school. The entire concept of school is built on the framework of personal progression toward an ultimate end. By embedding the entire story within the reasonable bounds of the education system, Rowling took advantage of a collective conscience that needed little help imagining what life might be like if school were somehow different than it actually is. And like other fifth graders, Potter and his pipsqueak first-year comrades needed time to develop from children into adults. This is why so many fans were happy to give the first film – if not the first three – a pass. The chief hurdle in adoption was the acting. But at that point, who cares? Everyone knows another school year is just around the corner. Maybe it’ll be just a bit better than the previous one.

By contrast, Hunger Games hit the market with fully formed characters. Peeta and Katniss had grown up and somehow survived into adolescence, but not without their own scars. In the debut film, viewers are introduced to one of the first encounters between the two – one in which Katniss is apparently homeless and Peeta has enough burnt bread to toss into the pig pen in which she’s sought temporary refuge. There are two faults with this that any post-apocalyptic novelist and producer should have considered. First, Katniss has a home. We already know this. So, why is she slogging through the mud outside Peeta’s bakery shack? Anyone with enough sense of self-preservation would understand that rain, mud and pigs breed disease. Therefore, swimming with them is not in your best interest. Second, in what post-apocalyptic world would any family forgo bread simply because it has been burnt? And, on an even more basic level, what kind of post-apocalyptic baker burns the loaves of bread he’s been raised to bake?

Audiences are supposed to believe that, despite their flaws, Collins’ protagonists can and do emerge victorious from the world’s most dangerous game. That Harry Potter and his friends had been doing this since they were eleven years old is the only reason Hunger Games is at all believable. What the Potter saga did for its audience was to create imaginative minds convinced that fantasy is possible and victory is achievable. Screw football. I’m going back to college to play quidditch.

a gif making fun of college age quidditch tournaments

Speaking of both football and quidditch: Greg Gumbel, everyone.


If you thought for a second that Harry Potter isn’t the cultural phenomenon it’s cracked up to be, explain to me why America’s favorite CBS football TV personality showed up on the Early Show to report on quidditch for muggles. Is this really what the entire sports world looks like without football?

Reality aside, Potter convinced a global audience that it’s okay to feel real emotion for characters, worlds and problems that don’t even exist. So, when the whole thing came crashing down as the credits rolled on Deathly Hallows, Part 2 in 2011, an entire segment of America was reminded that their belief in victory above all odds was based in young-adult fiction and that humans can’t really become wizards, even though the most sporting among us can quite obviously try.

Now, no one really believes that the human race as it exists today could ever evolve to include wizards, witches and an entire educational system devoted to their magical advancement. While the racist undertones pointed out by several academic analysts do reflect historical reality to a point – and even predict alongside modern post-apocalyptic authors like Veronica Roth a future in which social factions determine the pecking order – there is only so much magic a generation can believe in before it needs a fantastical world rooted in reality. And so, at the height of fantasy realism championed by all things Potter, enter Suzanne Collins and Hunger Games.

A world in dire need of a replacement for real life finally had its next story – one that didn’t begin the journey with elementary school children, but picked up the narrative where it left off, placing of-age, average people in a world that hearkens back to the dystopian novels of old while rooting itself in a plausible reality. Who cares that the books are terrible? The entire thing works simply because it exists and makes sense for those who needed it to.

Like Potter, Hunger Games did have some stage-setting to do before audiences could fully buy in. Sorcerer’s helped create a reality in which multi-film series are required to accomplish this up front to make room for what’s to come later. Transitioning young-adult literature into a dark film for a mass audience brings growing pains, but Hunger Games cleared that hurdle thanks to the fantastical framework through which the audience already approached the film.

After the awkward millennial love boat had been established and audiences had gotten over the fact that the film moved entirely too fast (by creating an incredible bad lip reading of the whole thing), stuff quickly fell into place.

And, before we all knew it, last weekend arrived and Catching Fire was making so much money that theaters were opened at ungodly hours of the morning just to funnel through people (like me) who like to watch movies alone on opening weekend, at like 9 am. Hunger Games is a hit – unquestionably, but remarkably so given that Jennifer Lawrence has taken roles in other films, something the Potter actors could never have conceived of and still struggle to overcome today. Before we draw this comparison to a close and dive into just why Catching Fire is one of the year’s best films, it’s worth mentioning that the finale of this saga will be split – like Potter – into two separate films. This, I think, could become a norm among films of the genre, meaning popular book-based dramas. Regardless of which Potter film you watched, there was always the sense that the films moved too quickly, especially after you’d watched how elegantly the seventh book was split in two, though even that left gaps. Critiques of this nature have nothing to do with neglecting to account for specific portions of the related books. It all comes back to what the audience feels and needs from these films. The appetite will always be for more. Fleshing out every possible detail gives those viewers reason to believe in the worlds they inhabit through fiction while helping studios pad their pockets.

So, why was Catching Fire spectacular?

Jennifer Lawrence was in it

JLaw, as a couple of my favorite Grantland contributors call her, makes this movie. There’s a reason her face fills the first and final frames of this film. The choice to bookend with one of the generation’s most emotionally savvy and versatile actors was production gold. In the very first scene, we can assume Katniss has hit rock bottom in her struggle with games-induced PTSD. This assumption is incorrect, and director Francis Lawrence was challenged to create a film in which the weight of her depression is abundantly felt while giving the audience little reason to believe it will improve. Rock bottom is still a way off.

Likewise, Lawrence (Jennifer) was challenged as an actor to embody and even represent one emotional state while displaying another. At the deepest levels, Katniss is two characters. First, and likely most neglected by the audience, she is a teenager who, through an act of familial preservation and self-sacrifice, was thrown into an arena where 22 people, most of them near her age, had to die. She did not enjoy it, and one could argue that her willingness to kill herself was less an act of direct defiance than an effort to obtain psychological release from the realization that the reality of her world was far more hellish than she’d been raised to believe.

Coincidentally, depression plays a starring role for Lawrence’s character in Silver Linings Playbook, too.

The depressed teen wants nothing to do with the games or anyone affiliated with them, hence her almost automatic agreement with President Snow (played by Donald Sutherland) when he arrives at her home to threaten her and her family should she fail to convince Panem of the lie from one year earlier. The teenage Katniss wants nothing to do with revolution or rebellion, partly because she has no framework with which to interpret or understand that the pending revolution was her doing in the first place.

The second Katniss – the viewer-friendly version – slowly internalizes the collective mindset adopted by the districts, feared by the government and stupidly neglected by the Capitol populace. This comes with help from Snow, her sister, the district killings during the tour and Gale’s beating, and it reaches a tipping point when Plutarch (played by Philip Seymour Hoffman, more to come) is discovered to be a double agent working within the government on behalf of the rebellion. Throughout the film, her fear and depression are supplanted, though never fully replaced, by an apathy whose end can only be realized when the government is placed in a position to fear its citizens the way its citizens have feared the Capitol.


Lawrence was perfectly positioned to master such a complex character and convince audiences that it mattered that she, not someone else, played Katniss. There is little doubt that Divergent will fail to match the gravity communicated by Lawrence’s Katniss, if for no other reason than the person playing the part of Tris.

Oh, and so was Philip Seymour Hoffman

Most people know a smart ass when they see one. Do you remember the trailer for Mission Impossible III? It doesn’t matter that Philip Seymour Hoffman played Truman Capote in a film about the man or that his filmography includes some of the most decorated work of the last 20 years, not the least of these being The Big Lebowski. I will always remember Hoffman for using his cool, smart-ass attitude to piss off and freak out Ethan Hunt. I was convinced that he could get away with whatever plan he’d concocted.

When I finally got around to seeing Moneyball, I found Hoffman once again playing the only character he could possibly play – a belligerent manager who showed little regard for anything anyone said – and I knew he had, in fact, gotten away with killing Hunt, torturing his girlfriend and ending the entire Mission Impossible saga.

You could analyze Catching Fire and accurately dispute this claim, but Hoffman appears to smile through much of the film. His coy and convincing demeanor does more than comfort Snow into believing he has an evil ally; it all but ensures the uninitiated viewer is none the wiser to his true status as a double agent. With Hoffman around, the pending rebellion may actually have a shot.

The costume designer had enough sense to put a crotch pad over those peacekeeper uniforms


Much has been made of Catching Fire fashion, with no shortage of commentary on Lawrence’s attire. And this makes perfect sense for a film whose plot relies upon the fact that the best fashion designer in the most outrageously fashionable city ever uses Lawrence’s character as his muse. But, for at least 10 minutes after swaths of faceless Capitol peacekeepers descended on the districts, my imagination was clouded by the thought that Lionsgate had run out of money and all costumes thereafter would be made of the kind of one-size-fits-all Spandex that leaves as little to the imagination as Nacho Libre’s sweats.


Thankfully, and following the path of better films before it, the mass-produced peacekeeper suits did include a handy crotch pad, though one that reveals just how vain Capitol fashion really is. In what world would low-grade Under Armour protect anyone from either (a) revolting masses of black market-dealing peasants who exist in a persistent state of mental preparation for the day when they will need to kill someone in the games or (b) a Panem winter? Peacekeeper suits were a fashion statement inasmuch as Effie’s hair was, although I think we can assume some level of fire protection given Cinna’s history with flames and the too-close-for-comfort roasting of the black markets.


Rebellion and the David principle

Revolution in and of itself is a universally salient storyline that makes for great films, at least in America, where revolution is still relatively new compared to civilization as a whole. Ask dinner party guests about their favorite history subject, and 7 of 10 are likely to say, “The American Revolution.” Revolution is also the central inspiration behind why we root for underdogs. In a space where little is at stake, should the lesser have an opportunity to beat the greater, society often rallies behind the competitor for whom the odds are unfavorable. The greatest single line of this franchise insists that the odds can be in favor of those who have no chance to win, giving the marginalized very little reason to fight.


Grantland’s NFL writer Bill Barnwell describes how this concept plays out in football: “as a huge underdog, [it] would be smart to pursue ‘David’ strategies, opportunities that involve taking on some risk to increase the slim likelihood of actually winning the game.” Underdogs, already at a disadvantage, have the opportunity to take substantial risks to increase their odds of winning. It is, in fact, both necessary and predetermined that risks must be taken in these situations, barring a colossal meltdown by the proverbial “Goliath.”
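Barnwell’s point is really about variance: an underdog that trails on average can raise its chance of an upset by embracing a wilder range of outcomes, even though the riskier plan doesn’t raise its average score at all. Here is a toy simulation of that idea – a sketch I’ve added purely for illustration, with point totals and spreads that are invented and come from me, not from Barnwell’s analysis.

```python
import random

# Toy model of the "David strategy": the underdog scores fewer points on
# average, but a higher-variance game plan raises its chance of an upset.
# All numbers below are invented purely for illustration.

def win_probability(my_mean, my_spread, opp_mean, opp_spread, trials=100_000):
    """Estimate P(underdog outscores favorite) with normally distributed scores."""
    wins = 0
    for _ in range(trials):
        if random.gauss(my_mean, my_spread) > random.gauss(opp_mean, opp_spread):
            wins += 1
    return wins / trials

if __name__ == "__main__":
    conservative = win_probability(17, 7, 27, 7)   # play it safe
    risky = win_probability(17, 14, 27, 7)         # onside kicks, fourth-down tries
    print(f"Conservative underdog wins ~{conservative:.0%} of the time")
    print(f"Risk-taking underdog wins ~{risky:.0%} of the time")
```

Both versions of the underdog average 17 points against a 27-point favorite; only the spread changes, and the estimated win probability climbs from roughly the mid-teens to the mid-twenties. That is the whole Davidic bet: you accept more ways to lose badly because the safe plan loses almost every time anyway.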

To successfully execute a Davidic revolution, the underdog has to take risks that buck convention and go unaccounted for by the more conservative, favored party. Katniss’ threatened suicide at the end of the first film not only helped her win the game, but exposed a universal weakness in the “fragile” government system. Her actions carry forward into Catching Fire, where she manages to unwittingly find favor for her cause, not only among the outlying districts, but among the Capitol residents as well.

And this is where the story truly begins to get good. Catching Fire concludes with Katniss experiencing the full consequences of her actions, though not through punishment. She’s told, in the company of Hoffman’s character, that the revolution has begun – that it has, in fact, been brewing quietly beneath the government’s own nose since before the games even began.

In this way, the film masterfully accomplishes what the book never could because it was written from the perspective of a teenage girl. Rather than tease the reader with a weakly developed, to-be-expected love triangle, the film elevates the essential themes necessary to move the narrative forward just far enough to frustrate the viewer beyond the point where she can logically return to a normal reality. Winning in this space is as much about delivering upon the reality championed by viewers as it is subtly reminding them that the reality is still fiction. The dystopia is exciting to imagine, even embody, but far more difficult to realistically comprehend. America, at least, is likely far from letting post-apocalyptic themes fade as a staple of our entertainment diet. We still have two years of Hunger Games to go and at least as many with Divergent, should it become even half the success. What comes next is most likely already in the works. We have much to look forward to.

Drawbacks?

Coldplay recorded the first track for the credits, and there’s a ridiculous animated Mockingjay pin immediately following the final cut. They were distracting, but I suppose they did the job of making me feel appropriately depressed walking out of the theater.

In defense of traditional* book publishing, part 1


A couple of weeks ago, self-publishing guru Guy Kawasaki released on LinkedIn his top ten reasons why authors should self-publish their books. Kawasaki is the coauthor of APE: Author, Publisher, Entrepreneur.

The words “traditional publishing” have come to mean different things for different people over the past decade. The most basic definition of “traditional publishing” is probably found in what it’s not. Traditional publishing is not self-publishing. Traditional publishing requires a publisher, which consists of a team of editorial, production and marketing staff members who project manage a book from its idea through its life as a product – print, ebook, app, all of the above or some other form. Traditional publishing typically, though not always, finds its home in physical books. However, even the most traditional of the traditional (very subjective), a university press, spearheaded the process of creating and publishing an instant volume – in both paper and e – in response to the gun violence debate following Newtown (Reducing Gun Violence in America, The Johns Hopkins University Press).

Surprised?

You might be if you glean your publishing knowledge from Kawasaki. Kawasaki’s ten points are simple and espouse the value added to an author when he or she self-publishes instead of publishing the “traditional” way. While I don’t wholly disagree that self-publishing is a stellar technological advancement for the same reasons Kawasaki cites – tablet adoption is growing, people want connectivity, knowledge needs to be shared – I do think his advice leads the vast majority of potential readers in the wrong direction. And, here’s why.

1. Content and design control. Kawasaki implies that traditional publishing removes the author’s ability to produce the book – in both content and design – that he or she hoped to write. This is the major critique I hear from people who talk about the business of publishing but know little about it. If publishers held all control over content and design, books would never be written and there would be far fewer authors. The sheer time and energy it would take for a publisher to exercise direct editorial and design control over every new book would run publishers out of business and authors to the grave. Yet, authors have not stopped pitching their books – in idea, draft, almost-done and totally complete forms – to publishers. The idea that books passed off to publishers somehow wind up in a dark pit only to emerge as horribly altered versions of their initial selves is a misconception born of the self-centric tech revolution. Relinquishing control of one’s creation is an essential and necessary part of the creative process, and Kawasaki admits that even he must do this at different stages to ensure a good product is published.

2. Time to market. Kawasaki implies that once a book is turned in to a publisher, it can take longer than an author would like to have the book released. This is true, since time is a subjective reality, especially during the creative process. As soon as I click the “Publish” button, this blog will be live. Sharing content has never been easier, but books are not blogs. One of the most important variables in marketing a product – if not the only one – is timing. In The Tipping Point, Gladwell argues that the power of context plays a crucial role in determining the “epidemic” adoptability of any one idea or practice. The simple fact that an author has thought about a topic, written a book on that topic and, most importantly, taken the time and invested the energy to publish this book, does not automatically create context for the ideas presented. A measure of value added by publishers is that publishing staff members not only live and breathe the ideas generated by authors, but they often conduct formal and informal research about the cultural salience of the topics. There are cultural reasons behind a publisher’s decision to hold or rush a book’s printing. And, there are editorial reasons as well. Authors are often free to submit drafts of manuscripts that would otherwise, in a self-publishing model, need to be crowd-sourced or peer reviewed at the author’s effort and expense. Copyediting does not equate to content editing. Regardless of the author’s location or experience, they still exist in a rabbit hole. Publishers remove their work from the rabbit hole and work with the author to develop a timeline that will help, not harm, the final product.

3. Longevity. Kawasaki implies two things here. The first is that traditional publishers will let a book go out of print at some point. The second is that publishers stop marketing books after they become financially worthless. This rests on both broad and specific distortions about the nature of publishers. A publisher who adds value to a book and its author will not accept a manuscript that does not fit within the publisher’s essential mission. To do so would affect the brand of the publisher while harming the author’s and the book’s potential marketability. And, the consumer is left to read a crummy book. If missions do not connect, all stakeholders suffer. This means that regardless of a book’s fiscal worth, the publisher will always maintain a stake in the books it publishes. Authors are members of the family. In university presses, this is often even more true as scholars seek the aid of publishers when it comes to professional advancement through tenure or other similar avenues. Placing a book out of print on the early timeline Kawasaki assumes here should never be part of the publisher’s game plan, especially given access to POD options when it comes to physical copies. Publishers, however, must also be looking forward. An author would never write one book with the assumption that it would be the only book he or she would ever write. In the same way that the author’s next project necessarily shifts attention away from the previous endeavor, without ever fully removing it, publishers too balance their front-, mid- and backlists in a way that reflects realistic expectations of each book’s performance. Even from a strictly financial standpoint, releasing backlist books from inventory would harm publishers, as those titles continue to make up a majority of revenue.

4. Revisions. Kawasaki is correct. There is very little that can be done for a printed book with errors. Yet, Kawasaki assumes that publishers only conduct traditional print runs while refusing to work with ebook vendors or POD companies. Neither assumption is true. Additionally, self-publishers and traditional publishers face the same feedback loop issues when releasing a new book. Each still requires others to point out errors. This is true for each media form, from books to blogs and newspapers to television. There are a handful of stylistic and grammatical errors in Kawasaki’s blog post. Although it’s been live for more than three weeks, none of them have been fixed, and they probably never will be. My post likely contains errors that will never be fixed. If the judgment about a book’s worth rests in perfect copy or stylistic editing, then readers have missed the point entirely. Traditional publishers and self-publishers stand on the same ground here in seeking a correct product – both have access to correct errors in electronic book versions and must pay money to reprint corrected hard-copy versions.

5. Higher royalty. “Self-publishers can make more money.” I agree with this statement, especially since it purposely separates “royalties” from total net dollars earned during the publishing process. Amazon’s KDP suite, as an example, offers publishers two royalty structures. But, as might be expected, the 70 percent model comes with various stipulations attached. For example, at 70 percent, authors cannot price books below $2.99 or above $9.99. Amazon also charges a delivery fee for each electronic book sold based on the item’s file size. Granted, this is a nominal, fixed fee that reduces the royalty by only about $0.10 per megabyte of the file size. One source that tracked the ePub file size (Kindle uses a different file type) found the majority of ebooks to be between one and five megabytes. The 70 percent model also limits the royalty structure for sales outside of the United States and requires the book’s price to be at least 20 percent lower than the price of the book’s physical alternative. And, Amazon reserves the right to change your book’s price in order to make it competitive across markets. A traditional publisher who charges $20.00 for a physical book will return about $2.00 to the author, on a 10 percent royalty model. This is in addition to any rights deals brokered and ebook sales a publisher may return to the author. Authors may also receive an advance, traditionally against royalties, that exists independent of sales figures. Depending on a number of factors, a self-published author might make a higher royalty percentage on each copy sold, but it would take a special case to make this assumption generalizable across the book market. Additionally, the cost of publicity is passed off to the publisher in a traditional model. While the author does not get a royalty for the cost of goods sold on a gratis book, they are also not responsible for footing the bill of any free copies shipped. Book publishing requires some freebies, and someone needs to pay for them.
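To make the per-copy arithmetic concrete, here is a minimal sketch built only from the figures cited above – the $2.99–$9.99 pricing window, a delivery fee of roughly $0.10 per megabyte and a 10 percent royalty on a $20.00 print book. The function names are mine, and Amazon’s actual KDP fees and thresholds can change, so treat this as an illustration of the math rather than a statement of current terms.

```python
# Rough per-copy comparison using the figures cited in this post.
# Fees and thresholds are illustrative assumptions, not current KDP terms.

def kdp_70_percent_earnings(list_price: float, file_size_mb: float,
                            delivery_fee_per_mb: float = 0.10) -> float:
    """Approximate author earnings per ebook copy on the 70 percent model."""
    if not (2.99 <= list_price <= 9.99):
        raise ValueError("the 70 percent model requires a $2.99-$9.99 list price")
    return 0.70 * (list_price - delivery_fee_per_mb * file_size_mb)

def traditional_print_earnings(list_price: float, royalty_rate: float = 0.10) -> float:
    """Approximate author earnings per print copy at a flat royalty rate."""
    return royalty_rate * list_price

if __name__ == "__main__":
    # A $9.99 ebook weighing 3 MB versus a $20.00 print book at 10 percent.
    print(f"Self-published ebook: ${kdp_70_percent_earnings(9.99, 3):.2f} per copy")
    print(f"Traditional print:    ${traditional_print_earnings(20.00):.2f} per copy")
```

Under those assumptions the ebook returns roughly $6.78 per copy against about $2.00 for the print book, which is exactly why the headline royalty number is so seductive – and why it stays incomplete until you account for advances, rights deals, publicity costs and who pays for the gratis copies.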

The previous point focused solely on comparing a self-published ebook to all items potentially published by a traditional publisher. I recognize this is not a fair comparison. Nor was it in Kawasaki’s article. Kawasaki seamlessly weaves a tale that connects self-publishing, Amazon and ebooks into a one-stop publishing solution. For many, this might be the best option. For others, traditional publishing may be better. Kawasaki assumes several things about traditional publishers that are grossly untrue. In the next post, I’ll discuss Kawasaki’s final five points: price control, global distribution, control of foreign rights, analytics and deal flexibility.

The cartoon’s evolution



“It’s a Ziggy!”

One of the best parts of the sitcom Seinfeld is how well its jokes have held up over time. Think of episode 169, “The Cartoon.” Regardless of how many people actually understand all of The New Yorker’s cartoons, everyone still questions a strip or two in his or her lifetime.

Regardless of one’s ability to understand social critique as humor, the Internet has given us a range of new opportunities to express and appreciate it. A new story from The Economist looks into the history and development of cartoons, from print media until today. A particularly revealing point in the article directs readers’ attention to the cartoon’s boom during the era of sensationalist journalism:

But it was the combination of the rotary printing press, mass literacy and capitalism which really created the space for comic art to flourish. In Britain Punch coined the term “cartoon” in 1843 to describe its satirical sketches, which soon spread to other newspapers. In the United States, the modern comic strip emerged as a by-product of the New York newspaper wars between Joseph Pulitzer and William Randolph Hearst in the late 19th century. In 1895 Pulitzer’s Sunday World published a cartoon of a bald child with jug ears and buck teeth dressed in a simple yellow shirt: the Yellow Kid. The cartoon gave the name to the new mass media that followed: “yellow journalism”.

Newspapers filled with sensationalist reporting sold millions. They even started wars. But in an era before television and film, it was the cartoons—filled with images of the city and stories of working-class living—which sold the newspapers. With most papers reporting much the same news, cartoons were an easy way for proprietors to differentiate their product. After the success of the Yellow Kid, both Pulitzer and Hearst introduced extensive comic supplements in their Sunday papers. Like the papers that printed them, comics rose and died quickly: the Yellow Kid lasted barely three years. But as the newspaper industry overall grew, so too did the funnies pages. By the mid-1920s one cartoonist, Bud Fisher, was paid $250,000 a year for “Mutt and Jeff”. By 1933, of 2,300 daily American papers, only two, the New York Times and the Boston Transcript, published no cartoons.

The article also describes the fun insertion of the “nerd” into popular cartoon-ery, which is fairly comical in and of itself. Read the full article here.

Facebook thinks it knows me: My review of the Year in Review feature


Facebook has changed much about its public face in the past year.

It’s mid-December and, in addition to an increased marketing push toward its “Gift” feature, the social network has also rolled out a Timeline-enabled Year in Review goodie. Year in Review reports, to you, your own personal top 20 list from 2012 in the best way it knows how – by deciding which posts, likes, friendships, etc., became the most social, most shareable or had the widest potential audience.

As with most new features, I was immediately skeptical as to its effectiveness in accurately representing my most significant moments of the year. Social media are largely new in the realm of technological advancements and their ability to paint reliable pictures of the humans behind the avatars is still evolving.

So, according to Facebook, during the past year I

  • wanted to go ice skating (but still haven’t),
  • changed my cover photo,
  • reported on what a Facebook Year in Review list might have looked like if Timeline was around in 2005,
  • had an awesome time jumping in the air and dancing while lying on the floor in Nashville,
  • changed my cover photo again,
  • watched the Olympics,
  • moved to Chicago,
  • won a 10k, and
  • had an overdue FaceTime conversation with two of my closest friends.

Sure, there aren’t 20 different events listed here. This is because my friend’s Nashville wedding took up four slots and my move to and affinity for Chicago took two, as did the Pumpkin Festival 10k in Morton, Illinois.

Upon completing a post-hoc analysis, and after calling a close friend who was part of my top 20, I realized that Facebook got it almost exactly right. This simple recognition – that my preconceived notions of what counts as a relationship online are becoming more incorrect by the day – required from me an intense investigation of the true meaning of sharing and engaging with content online. What causes me to share a photo, status update, or piece of content? Looking back, I remember posting links I believed those people who actually still get my updates would find interesting, hence the story about ice skating in Chicago.

I don’t use Facebook that often. This is probably why the majority of my “year in review” was posted by other people. It’s also why it took me several months to realize I could sync my Facebook contacts with those already in my iPhone. This, I think, was a momentous occasion. It happened on a night that was arguably more memorable than many of the events listed here. But, according to Facebook, that event probably never happened. And that’s fine with me.

Facebook’s Year in Review reminded me of the things I’d done and been a part of this year. Sure, they were memorable, but putting numbers to them doesn’t quite square with reality. The value of this feature isn’t so much in giving users a top 20 list they can share with others – most of us could probably craft our own lists anyway – but in reminding us that our years were full of other people and that great experiences don’t always have to revolve around us.

Happy New Year.

Leslie Knope on NPR and other local media


One of the things I love most about NBC’s Parks and Recreation is its spoofs of local news media. If you’ve seen the show, there’s not much else I can say to expound on the antics of Perd Hapley, Joan Callamezzo and Derry Murbles. For listeners of NPR, Derry Murbles’s character, the local NPR host, is absolutely priceless.

So, here are a few clips of some of my favorite spoofs.