How and why great brands practice consumer empathy

The crux of the argument that made Simon Sinek famous was that the best brands don’t primarily communicate the what of their business; they communicate the why. Well, today Fast Company broke down the “eight rock-star brands” that absolutely killed it in 2013 and left readers with little doubt that these brands will do the same or better this year.

And why did they excel in 2013? According to the article,

It used to be that a successful brand conveyed authority and reliability (think General Motors or IBM); now it’s all about empathy. Technology used to attract us through specs and features; today it has to enable an experience. Even our perception of what makes a product valuable has shifted, to the point where a brand-new sound system or a dress like the one on the magazine cover is actually less desirable than something with a strong story attached.

To build a successful brand, those in charge must ensure customers perceive the brand as one that has their direct needs in mind. For the consumer, the brand must now answer the question, Why should I care? We’ll talk about six of the brands profiled in the article, and I suggest you click on over to Fast Co. and read the full article for yourself.

Nest & Uber. The first two brands from Fast Company’s list didn’t necessarily fix problems. They fixed experiences, and ones that have been consistently broken for some time. Answer this question: What’s enjoyable about setting your thermostat or hailing a cab? Just thinking about doing either probably conjures feelings of doubt (Am I wasting money?) or dread (What will this ride be like?). Neither technology is particularly earth-shattering, and alternatives abound in both spaces. But each was designed, not for the purpose of creating a technology that fits a practice, but with the goal of making the users’ experience better.

Birchbox & Quarterly.co seek to take shopping back for the shoppers. Why? Shoppers are tired of manufactured loyalty. Loyalty isn’t something that can be activated by signing up for a card, and it isn’t demonstrated by making a purchase. Loyalty is granted by the customer, and in a marketplace that is soullessly transactional, customers need to know there’s more happening between the retailer and themselves.

Moto X. I wrote about this over at businessngifs, and the phone baby of the Motorola and Google marriage couldn’t be more representative of Sinek’s argument. While Apple and Samsung, to name just two, race to increase the perceived feature advantage of their competing smartphones, Moto X actually went the opposite way, keeping its screen resolution to a simple 1280 x 720. While many have faulted this decision, “reviewers have been quick to point out that actually using the smartphone is a genuine pleasure, not because it revs faster, but because its interactions are so thoughtfully designed.” No one talks about screen resolution or processor speed at parties, but they will ask you to take a photo for them and expect you to know how to use their phone. “For consumers, these developments suggest that GHz, DPI, and other metrics are increasingly taking a back seat to user experience.”

If you read Fast Company, you know how much they like J. Crew (read their profile of Jenna Lyons, for one). So, it should come as no surprise that the specialty apparel retailer made this list. When you look at the story of J. Crew, you read in it an intense, even pervasive eye and ear for the consumer: forward-looking fashion that’s (mostly) accessible; customer service that prioritizes style over sale; a creative head (in Lyons) who appears to practice empathy across the board, from employee interactions to the company’s designs.

As cutting-edge as these companies appear to be today, it’s anyone’s guess whether they’ll be around in 10, 20, or 50 years. That depends on several variables, not the least of which is the consumer’s sometimes wildly shifting sentiment. But companies like these could be set up for long-term success, as long as they keep practicing what got them here in the first place: empathy.

In defense of traditional* book publishing, part 1

A couple of weeks ago, self-publishing guru Guy Kawasaki released on LinkedIn his top ten reasons why authors should self-publish their books. Kawasaki is the coauthor of APE: Author, Publisher, Entrepreneur.

The words “traditional publishing” have come to mean different things for different people over the past decade. The most basic definition of “traditional publishing” is probably found in what it’s not. Traditional publishing is not self-publishing. Traditional publishing requires a publisher, which consists of a team of editorial, production and marketing staff members who project-manage a book from its idea through its life as a product – whether print, ebook, app, all of the above or some other form. Traditional publishing typically, though not always, finds its home in physical books. However, even the most traditional of the traditional (very subjective), a university press, spearheaded the process of creating and publishing an instant volume – in both paper and e – in response to the gun violence debate following Newtown (Reducing Gun Violence in America, The Johns Hopkins University Press).

Surprised?

You might be if you glean your publishing knowledge from Kawasaki. Kawasaki’s ten points are simple and espouse the value added to an author when he or she self-publishes instead of publishing the “traditional” way. While I don’t wholly disagree that self-publishing is a stellar technological advancement for the same reasons Kawasaki cites – tablet adoption is growing, people want connectivity, knowledge needs to be shared – I do think his advice leads the vast majority of potential readers in the wrong direction. And, here’s why.

1. Content and design control. Kawasaki implies that traditional publishing removes the author’s ability to produce the book – in both content and design – that he or she hoped to write. This is the major critique I hear from people who talk about the business of publishing but know little about it. If publishers held all control over content and design, books would never be written and there would be far fewer authors. The sheer time and energy it would take for a publisher to exercise direct editorial and design control over every new book would run publishers out of business and authors into the grave. Yet, authors have not stopped pitching their books – in idea, draft, almost-done and totally complete forms – to publishers. The idea that books passed off to publishers somehow wind up in a dark pit, only to emerge as horribly altered versions of their initial selves, is an erroneous notion that is a byproduct of the self-centric tech revolution. Relinquishing control of one’s creation is an essential and necessary part of the creative process, and Kawasaki admits that even he must do this at different stages to ensure a good product is published.

2. Time to market. Kawasaki implies that once a book is turned in to a publisher, it can take longer than an author would like to have the book released. This is true, since time is a subjective reality, especially during the creative process. As soon as I click the “Publish” button, this blog will be live. Sharing content has never been easier, but books are not blogs. One of the most important variables in marketing a product – if not the most important – is timing. In The Tipping Point, Gladwell argues that the power of context plays a crucial role in determining the “epidemic” adoptability of any one idea or practice. The simple fact that an author has thought about a topic, written a book on that topic and, most importantly, taken the time and invested the energy to publish this book, does not automatically create context for the ideas presented. A measure of value added by publishers is that publishing staff members not only live and breathe the ideas generated by authors, but they often conduct formal and informal research about the cultural salience of the topics. There are cultural reasons behind a publisher’s decision to hold or rush a book’s printing. And, there are editorial reasons as well. Authors are often free to submit drafts of manuscripts that would otherwise, in a self-publishing model, need to be crowd-sourced or peer reviewed at the author’s effort and expense. Copyediting does not equate to content editing. Regardless of the author’s location or experience, he or she still exists in a rabbit hole. Publishers remove the work from the rabbit hole and work with the author to develop a timeline that will help, not harm, the final product.

3. Longevity. Kawasaki implies two things here. The first is that traditional publishers will let a book go out of print at some point. The second is that publishers stop marketing books after they become financially worthless. Both rest on broad and specific distortions of the nature of publishers. A publisher who adds value to a book and its author will not accept a manuscript that does not fit within the publisher’s essential mission. To do so would affect the brand of the publisher while harming the author’s and the book’s potential marketability. And, the consumer is left to read a crummy book. If missions do not connect, all stakeholders suffer. This means that regardless of a book’s fiscal worth, the publisher will always maintain a stake in the books it publishes. Authors are members of the family. In university presses, this is often even more true, as scholars seek the aid of publishers when it comes to professional advancement through tenure or other similar avenues. Putting a book out of print on the early timeline Kawasaki assumes here should never be part of the publisher’s game plan, especially given access to print-on-demand (POD) options for physical copies. Publishers, however, must also look forward. An author would never write one book with the assumption that it would be the only book he or she would ever write. In the same way that the author’s next project necessarily shifts attention away from the previous endeavor, without ever fully removing it, publishers too balance their front-, mid- and backlists against realistic expectations of each book’s performance. Even from a strictly financial standpoint, releasing backlist books from inventory would harm publishers, as those titles continue to make up a majority of revenue.

4. Revisions. Kawasaki is correct. There is very little that can be done for a printed book with errors. Yet, Kawasaki assumes that publishers only conduct traditional print runs while refusing to work with ebook vendors or POD companies. Neither assumption is true. Additionally, self-publishers and traditional publishers face the same feedback-loop issues when releasing a new book. Each still requires others to point out errors. This is true for every media form, from books to blogs and newspapers to television. There are a handful of stylistic and grammatical errors in Kawasaki’s post. Although it has been up for more than three weeks, none of them has been fixed, and they probably never will be. My post likely contains errors that will never be fixed. If the judgment about a book’s worth rests in perfect copy or stylistic editing, then readers have missed the point entirely. Traditional publishers and self-publishers stand on the same ground here in seeking a correct product – both have access to correct errors in electronic book versions and must pay money to reprint corrected hard-copy versions.

5. Higher royalty. “Self-publishers can make more money.” I agree with this statement, especially since it purposely separates “royalties” from total net dollars earned during the publishing process. Amazon’s KDP suite, as an example, offers publishers two royalty structures. But, as might be expected, the 70 percent model comes with various stipulations attached. For example, at 70 percent, authors cannot price books below $2.99 or above $9.99. Amazon also charges a delivery fee for each electronic book sold based on the item’s file size. Granted, this is a nominal, fixed fee that reduces the royalty by only about $0.10 per megabyte of file size. One source that tracked ePub file sizes (Kindle uses a different file type) found the majority of ebooks to be between one and five megabytes. The 70 percent model also limits the royalty structure for sales outside of the United States and requires the book’s price to be at least 20 percent lower than the price of the book’s physical alternative. And, Amazon reserves the right to change your book’s price in order to make it competitive across markets. A traditional publisher who charges $20.00 for a physical book will return about $2.00 to the author on a 10 percent royalty model. This is in addition to any rights deals brokered and ebook sales a publisher may return to the author. Authors may also receive an advance, traditionally against royalties, that exists independent of sales figures. Depending on a number of factors, a self-published author might make a higher royalty percentage on each copy sold, but it would take a special case to make this assumption generalizable across the book market. Additionally, the cost of publicity is passed off to the publisher in a traditional model. While the author does not get a royalty on the cost of goods sold for a gratis book, he or she is also not responsible for footing the bill for any free copies shipped. Book publishing requires some freebies, and someone needs to pay for them.
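To make the per-copy arithmetic concrete, here is a minimal sketch in Python. It assumes only the illustrative figures cited above – a $2.99–$9.99 price band and a delivery fee of roughly $0.10 per megabyte for the 70 percent self-publishing tier, and a flat 10 percent of list price for the traditional print royalty – and nothing about any vendor’s actual contract terms.

```python
# Rough per-copy earnings comparison using the illustrative figures from this post.
# The rates, fees, and price band are assumptions for the example, not official terms.

def self_pub_per_copy(list_price, file_size_mb, rate=0.70, delivery_fee_per_mb=0.10):
    """Approximate author earnings per ebook copy on a 70 percent self-publishing tier."""
    if not (2.99 <= list_price <= 9.99):
        raise ValueError("the 70 percent tier is assumed to require a $2.99-$9.99 price")
    delivery_fee = delivery_fee_per_mb * file_size_mb
    return rate * (list_price - delivery_fee)

def traditional_per_copy(list_price, royalty_rate=0.10):
    """Approximate author earnings per print copy on a simple list-price royalty."""
    return royalty_rate * list_price

print(f"Self-published ebook, $9.99 list, 3 MB file: ${self_pub_per_copy(9.99, 3):.2f} per copy")
print(f"Traditional print book, $20.00 list, 10% royalty: ${traditional_per_copy(20.00):.2f} per copy")
```

Per copy, the self-published ebook wins handily in this example (roughly $6.78 versus $2.00); the point above is that advances, rights deals and publisher-funded publicity never show up in that per-copy arithmetic.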

The previous point focused solely on comparing a self-published ebook to all items potentially published by a traditional publisher. I recognize this is not a fair comparison. Nor was it in Kawasaki’s article. Kawasaki seamlessly weaves a tale that connects self-publishing, Amazon and ebooks into a one-stop publishing solution. For many, this might be the best option. For others, traditional publishing may be better. Kawasaki assumes several things about traditional publishers that are grossly untrue. In the next post, I’ll discuss Kawasaki’s final five points: price control, global distribution, control of foreign rights, analytics and deal flexibility.

The cartoon’s evolution

“It’s a Ziggy!”

One of the best parts of the sitcom Seinfeld is how ably its jokes have held up over time. Think of episode 169, “The Cartoon.” Regardless of how many people actually understand all of The New Yorker‘s cartoons, everyone still questions a strip or two in his or her lifetime.

Regardless of one’s ability to understand social critique as humor, the Internet has given us a range of opportunities to express and understand humor. A new story from The Economist looks into the history and development of cartoons, from print media until today. A particularly revealing point in the article directs readers’ attention to the cartoon’s boom during the era of sensationalist journalism:

But it was the combination of the rotary printing press, mass literacy and capitalism which really created the space for comic art to flourish. In Britain Punch coined the term “cartoon” in 1843 to describe its satirical sketches, which soon spread to other newspapers. In the United States, the modern comic strip emerged as a by-product of the New York newspaper wars between Joseph Pulitzer and William Randolph Hearst in the late 19th century. In 1895 Pulitzer’s Sunday World published a cartoon of a bald child with jug ears and buck teeth dressed in a simple yellow shirt: the Yellow Kid. The cartoon gave the name to the new mass media that followed: “yellow journalism”.

Newspapers filled with sensationalist reporting sold millions. They even started wars. But in an era before television and film, it was the cartoons—filled with images of the city and stories of working-class living—which sold the newspapers. With most papers reporting much the same news, cartoons were an easy way for proprietors to differentiate their product. After the success of the Yellow Kid, both Pulitzer and Hearst introduced extensive comic supplements in their Sunday papers. Like the papers that printed them, comics rose and died quickly: the Yellow Kid lasted barely three years. But as the newspaper industry overall grew, so too did the funnies pages. By the mid-1920s one cartoonist, Bud Fisher, was paid $250,000 a year for “Mutt and Jeff”. By 1933, of 2,300 daily American papers, only two, the New York Times and the Boston Transcript, published no cartoons.

The article also describes the fun insertion of the “nerd” into popular cartoon-ery, which is fairly comical in and of itself. Read the full article here.

Facebook thinks it knows me: My review of the Year in Review feature

Facebook has changed much about its public face in the past year.

It’s mid-December and, in addition to an increased marketing push toward its “Gift” feature, the social network has also rolled out a Timeline-enabled Year in Review goodie. Year in Review reports to you your own personal top 20 list from 2012 in the best way it knows how – by deciding which posts, likes, friendships and other moments became the most social, the most sharable or had the widest potential audience.
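Just as a thought experiment, here is a minimal sketch of the kind of ranking such a feature might use – this assumes nothing about Facebook’s actual algorithm – where each post is scored by simple engagement counts and the top 20 are kept.

```python
# Hypothetical "year in review" ranking: this reflects nothing about Facebook's
# real algorithm; it simply scores posts by engagement and keeps the top n.

def top_moments(posts, n=20):
    """Return the n posts with the highest engagement score."""
    def score(post):
        # Weight comments and shares above likes, since they suggest a wider reach.
        return post.get("likes", 0) + 2 * post.get("comments", 0) + 3 * post.get("shares", 0)
    return sorted(posts, key=score, reverse=True)[:n]

posts_2012 = [
    {"title": "Moved to Chicago", "likes": 58, "comments": 21, "shares": 2},
    {"title": "Won a 10k", "likes": 34, "comments": 12, "shares": 0},
    {"title": "Changed cover photo", "likes": 15, "comments": 1, "shares": 0},
]
print([p["title"] for p in top_moments(posts_2012, n=2)])
```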

As with most new features, I was immediately skeptical about its effectiveness in accurately representing my most significant moments of the year. Social media are still largely new in the realm of technological advancements, and their ability to paint reliable pictures of the humans behind the avatars is still evolving.

So, according to Facebook, during the past year I

  • wanted to go ice skating (but still haven’t),
  • changed my cover photo,
  • reported on what a Facebook Year in Review list might have looked like if Timeline was around in 2005,
  • had an awesome time jumping in the air and dancing while lying on the floor in Nashville,
  • changed my cover photo again,
  • watched the Olympics,
  • moved to Chicago,
  • won a 10k, and
  • had an overdue FaceTime conversation with two of my closest friends.

Sure, there aren’t 20 different events listed here. This is because my friend’s Nashville wedding took up four slots and my move to and affinity for Chicago took two, as did the Pumpkin Festival 10k in Morton, Illinois.

Upon completing a post-hoc analysis, and after calling a close friend who was part of my top 20, I realized that Facebook got it almost exactly right. This simple recognition — that my preconceived notions of what counts as a relationship online are becoming more incorrect by the day — required from me an intense investigation of the true meaning of sharing and engaging with content online. What causes me to share a photo, status update, or piece of content? Looking back, I remember posting links I believed those people who actually still get my updates would find interesting, hence the story about ice skating in Chicago.

I don’t use Facebook that often. This is probably why the majority of my “year in review” was posted by other people. It’s also why it took me several months to realize I could sync my Facebook contacts with those already in my iPhone. This, I think, was a momentous occasion. It happened on a night that was more memorable than most of the events listed here. But, according to Facebook, that event probably never happened. And that’s fine with me.

Facebook’s Year in Review reminded me of the things I’d done and been a part of this year. Sure, they were memorable, but putting numbers to them doesn’t quite square with reality. The value of this feature isn’t so much in giving users a top 20 list they can share with others – most of us could probably craft our own lists anyway – but in reminding us that our years were full of other people and that great experiences don’t always have to revolve around us.

Happy New Year.

Tablets and tech innovation

It’s no surprise: Google may be jumping into the tablet arena. To many, this would seem a late entrance into a realm already saturated with devices, or perhaps one device. According to an NPR Morning Edition report on the subject, Apple sells nearly 15 million iPad devices every three months. Additionally, and this may be even more important than sales, the cost to Apple of producing each device is relatively low. Finally, while difficult to quantify, brand loyalty to Apple among its customers seems to ensure that those individuals lining up outside retail stores on major product release dates will continue to make the trek in the future.

Again, one wonders – is Google’s attempt to tap into some of Apple’s market simply a way to raise its stock price and take some of the glimmering light from the tech giant? Is the Google tablet doomed to fail?

The answer, I think, comes in the telling line toward the end of NPR’s report. It’s a line many in the book publishing industry have been reciting for years as they struggle through the malaise of what it means to publish and distribute e-books – and make a profit from them.

“Now, building one Google-branded tablet may not change that, but it may be the company’s best shot at creating a device that hits it big.”

One could ruminate on Google’s various devices and services and discuss what represents “hitting it big,” but one fact remains despite past successes or failures: as technology evolves, traditional media companies are often forced to jump into the deep end in the hope that what they do will stick in an ever-evolving market of brand innovation and diffusion as well as stagnation and death.

However, this assumption – that companies must move forward with the changing times in order to survive, even if they don’t fully understand what that means – doesn’t necessarily mean that companies must fully abandon traditional models of doing business. If book publishers, for example, had abandoned print-centric workflows at the onset of this supposed e-revolution, the market would have seen more publishing fatalities than it already has.

Businesses must walk the line between a full-on innovation-based model and one that shuns innovation completely. Publishers’ employees know their business better than the big tech companies do, and book publishers shouldn’t feel Apple or Google (or Amazon) bearing down on them as if their way were the only way forward.

If the innovation and diffusion theory of adoption has taught us anything, it’s that many innovations simply won’t make it past the initial adoption stage. With the tech market still existing in a sort of jungle, it’s up to the companies affected by big-tech’s decisions to pave novel trails through the thicket and recommend the most navigable course for their constituents.

Why people hit other people: What the research can tell us and why it really doesn’t matter

Late last week, the Phoenix Coyotes’ (NHL) Raffi Torres was served a 25-game suspension for his “blindside hit” on Chicago’s Marian Hossa. The hit and suspension resulted in media critique of the game and its officials for ushering in “an out-for-blood playoffs.”

It is natural, then, that media looking for a good, smart story on human aggression would seek out researcher-scholars who have done work on the subject of hockey violence. Enter the September 2011 article in Social Psychological and Personality Science, “Can Uniform Color Color Aggression?”

It is important that all sports aficionados give thanks for this newly published article because, as its introductory paragraphs tell us, hardly any work has been done on the relationship between color and violence since the mid-1970s and 1980s. And, at least some of that, unsurprisingly, dealt with race or skin color. There has been limited work conducted within the last few decades, however, on jersey color and perceived aggression, and that gap serves as the foundation for the paper.

The Findings and the Meaning

But, what is the new paper actually saying about violence in hockey, and what does it mean for hockey fans and the athletes? The authors’ main finding is that penalty minutes for teams wearing black jerseys were on average 1.73 minutes higher than for teams not wearing black jerseys. To be more precise, this average both compares teams that never wore a black jersey (the majority of the sample, to be sure) with teams that did, and shows how teams that switched between black and non-black jerseys compare with themselves. Regardless of how you interpret the methodology, this averages out to less than one extra penalty per game. However, when the authors analyzed only the teams that switched between black and non-black jerseys, the effect became less significant.
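For readers who want to see the shape of that comparison, here is a minimal sketch in Python with made-up numbers; only the structure of the between-team and within-team (switcher) comparisons follows the description above, not the study’s actual data or statistics.

```python
# Hypothetical illustration of the comparison described above. The penalty-minute
# figures are invented; only the between-group vs. within-team structure is real.

games = [
    # (team, wore_black_jersey, penalty_minutes)
    ("A", True, 14), ("A", True, 12),
    ("B", False, 10), ("B", False, 11),
    ("C", True, 13), ("C", False, 12),   # team C switched jerseys during the sample
]

def mean(values):
    return sum(values) / len(values)

black = [pm for _, wore_black, pm in games if wore_black]
non_black = [pm for _, wore_black, pm in games if not wore_black]
print(f"Black-jersey games: {mean(black) - mean(non_black):+.2f} penalty minutes vs. non-black")

# Within-team comparison: only teams that appear in both groups (the "switchers").
switchers = {t for t, wb, _ in games if wb} & {t for t, wb, _ in games if not wb}
diffs = [
    mean([pm for t, wb, pm in games if t == team and wb])
    - mean([pm for t, wb, pm in games if t == team and not wb])
    for team in switchers
]
print(f"Average within-team difference for switchers: {mean(diffs):+.2f} minutes")
```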

Other results, including the effect of white jerseys on penalty minutes, make this study a rich bit of research for sport psychologists. But, while the NPR report tries to make the connection between the research and the aforementioned violent NHL playoff season, its attempt is somewhat muted by the fact that, even after reading through the article, we still do not know what to make of the data.

And, this brings into question the overall cultural significance of scientific research. While this may be the perfect moment for a study like this – i.e., amid the overt violence taking place during the current playoffs – is it necessary? We may now have a passing understanding of what may or may not cause hockey players to be penalized more often, but we still do not know why Torres got himself ejected from the playoffs. And, at some point next season, most hockey fans will forget he was even gone.

Well, maybe not Chicago fans.

How the American and British press view therapeutic cloning

Jensen, E. (2012). Scientific sensationalism in American and British press coverage of therapeutic cloning 89(1), 40-54.

As someone who has worked on quantitative analyses of attitudes toward science, I was certainly in the mood to read something at least a little more qualitative, if only slightly so. Jensen (2012), who has done a good deal of work on media and the more controversial elements of science, uses both content analysis and limited discourse analysis techniques in order to figure out what newspapers in both America and the UK are saying about therapeutic cloning.

As the author reminds readers, therapeutic cloning (which involves stem cells) is generally somewhat more accepted among citizens than is reproductive cloning. I think it may have something to do with the name: therapeutic. He also assumes, based on previous work, that Western society has, since second modernity (someone please remind me when first modernity officially began), been classified as a “risk society.” This essentially means that we see risks as extremely uncontrollable or uncertain. Therefore, new research like therapeutic cloning is expected to be viewed in a negative light, at least until it has been proven controllable or beneficial for society as a whole.

Jensen analyzes more than 5,100 news articles from elite sources (both newspapers and magazines) as well as some UK tabloids, and he interviews a segment of journalists who have written on therapeutic cloning in the past. Articles date from 1997, when Dolly the cloned sheep was all the rage, through 2006.

What he finds is both simplistic and unimpressive. While the elite UK press was heavily in favor of the cloning research, the elite American press was careful to balance both positive views of scientific progress and negative views of its risks. One article in particular led with the rather depressing metaphor of cloning as an “embryo farm” and concluded with stories of ill individuals begging for the research to continue so that they could be cured.

No one could blame the British tabloids for having a lot of fun with the issue and presenting “a confusing mishmash of pro- and anticloning hype” (p. 50).

All told, those of us in America already knew that we have no idea what we think about science. While focusing on what the elite press has to say about science is valuable, we cannot neglect the other media – the internet, entertainment, games, apps and so on – through which people encounter science. Additionally, more research needs to be conducted on the ways in which prominent worldviews, such as religion, pacifism, patriotism and other ideologies, shape views toward science. This is unlikely to be done through quantitative methods such as surveys, and content analyses of newspapers alone would yield limited results. Content analysis is, of course, itself a quantitative research method, but using it rather than surveys certainly reveals another piece of the puzzle of attitudes toward science in our modern world.