Posts Tagged ‘business models’

Monetization is key to protecting Internet freedom

May 21st, 2011

The long-term freedom of the Internet may depend, in part, on convincing the big players of the content industry to modernize their business models.

Motivated by “protecting” the content industry, the U.S. Congress is discussing proposed legislation that could be used to seize domain names and force websites (even search engines) to remove links.

Congress doesn’t yet understand that there are already safe and effective ways to counter piracy — which don’t threaten Internet freedom. “Piracy happens not because it is cheaper, but because it is more convenient,” as Arvind Narayanan reports, musing on a conversation with Congresswoman Lofgren.

What the Congresswoman was saying was this:

  1. The only way to convince Washington to drop this issue for good is to show that artists and musicians can get paid on the Internet.
  2. Currently they are not seeing any evidence of this. The Congresswoman believes that new technology needs to be developed to let artists get paid. I believe she is entirely wrong about this; see below.
  3. The arguments that tech companies and civil liberties groups have raised in Washington all center on free speech. There is nothing wrong with that, but it is not a viable long-run strategy, because the issue will keep coming back.

Arvind’s response is that the needed technology is already here. That’s old news to technologists, but the technology sector needs to educate Congress, which may not have the time or skills to find this information on its own.

The dinosaurs of the content industries need to adapt their business models. Piracy is not correlated with a decrease in sales. Piracy happens not because it is cheaper, but because it is more convenient. Businesses need to compete with piracy rather than trying to outlaw it. Artists who’ve understood this are already thriving.

Posted in future of publishing, information ecosystem, intellectual freedom | Comments (0)

QOTD: Stop crippling ebooks: invent new business models instead

May 16th, 2011

Holding on to old business models is not the way to endear yourself to customers.

But unfortunately this is also, simultaneously, a bad time to be a reader. Because the dinosaurs still don’t get it. Ten years of object lessons from the music industry, and they still don’t get it. We have learned, painfully, that media consumers—be they listeners, watchers, or readers—want one of two things:

  • DRM-free works for a reasonable price
  • or, unlimited single-payment subscription to streaming/DRMed works

Give them either of those things, and they’ll happily pay. Look at iTunes. Look at Netflix. But give them neither, and they’ll pirate. So what are publishers doing?

  • Refusing to sell DRM-free books. My debut novel will be re-e-published by the Friday Project imprint of HarperCollins UK later this year; both its editor and I would like it to be published without DRM; and yet I doubt we will be able to make that happen.
  • crippling library e-books
  • and not offering anything even remotely like a subscription service.

– Jon Evans, When Dinosaurs Ruled the Books, via James Bridle’s Stop Press

Eric Hellman is one of the pioneers of tomorrow’s ebook business models: his company, Gluejar, uses a crowdfunding model to re-release books under Creative Commons licenses. Authors and publishers are paid; fans pay for the books they’re most interested in; and everyone can read and distribute the resulting “unglued” ebooks. Everybody wins.

Posted in books and reading, future of publishing, information ecosystem | Comments (0)

Apple seizes control of iOS purchase chain: enforces 30% cut for Apple by prohibiting sales-oriented links from apps to the Web

February 16th, 2011

Apple’s press release about its “new subscription services” seems at first innocuous, and the well-crafted quote [1] from Steve Jobs has been widely reposted:
“when Apple brings a new subscriber to the app, Apple earns a 30 percent share; when the publisher brings an existing or new subscriber to the app, the publisher keeps 100 percent and Apple earns nothing.” Yet analysts reading between the lines have been less than pleased.

Bad for publishers

The problems for publishers? (See also “Steve Jobs to pubs: Our way or highway”)

  • Apple takes a 30% cut of all in-app purchases [2].
  • Apps may not bypass in-app purchase: they may not link to an external website (such as Amazon) [3] that lets customers buy content or subscriptions.
  • Content available for purchase in the app cannot be cheaper elsewhere.
  • The customer’s demographic information resides with Apple, not with the publisher. Customers must opt in to share their name, email, and ZIP code with the publisher, though Apple will of course have this information.
  • Limited reaction time: changes will be finalized by June 30th.

Bad for customers?

And there are problems for customers, too.

  • Reduction of content available in apps (likely in the near term).
  • More complex, clunky purchase workflows (possible).
    Publishers may sell material only outside of apps, from their own websites, to avoid paying 30% to Apple. Will we see a proliferation of publisher-run stores?
  • Price increases to cover Apple’s commission (likely).
    If enacted, these must apply to all customers, not just iOS device users.
  • Increased lockdown of content in the future (probable).
    Apple already prevents some iBooks customers from reading books they bought and paid for, using extra DRM that affects some jailbroken devices. This despite the fact that jailbreaking is explicitly legal in the United States, and that carrier-unlocked, SIM-free phones are not available in the U.S.

More HTML5 apps?

The upside? Device-independent HTML5 apps may see wider adoption. HTML5 mobile apps work well on iOS, on other mobile platforms, and on laptops and desktops.

For ebooks, HTML5 means Ibis Reader and Book.ish. For publishers looking to break free of Apple, yet satisfy customers, Ibis Reader may be a particularly good choice: this year they are focusing on licensing Ibis Reader, as Liza Daly’s Threepress announced in a savvy and well-timed post, anticipating Apple’s announcement. Having been a beta tester of Ibis Reader, I can recommend it!

If you know of other HTML5 ebook apps, please leave them in the comments.

  1. “Our philosophy is simple—when Apple brings a new subscriber to the app, Apple earns a 30 percent share; when the publisher brings an existing or new subscriber to the app, the publisher keeps 100 percent and Apple earns nothing,” said Steve Jobs, Apple’s CEO. “All we require is that, if a publisher is making a subscription offer outside of the app, the same (or better) offer be made inside the app, so that customers can easily subscribe with one-click right in the app. We believe that this innovative subscription service will provide publishers with a brand new opportunity to expand digital access to their content onto the iPad, iPod touch and iPhone, delighting both new and existing subscribers.”

    – Steve Jobs, in “Apple Launches Subscriptions on the App Store”

  2. Booksellers call this “the agency model”.
  3. Apple has confirmed that Kindle’s “Shop in Kindle Store” must be removed.

Posted in books and reading, future of publishing, information ecosystem, iOS: iPad, iPhone, etc. | Comments (0)

Let’s link the world’s metadata!

December 9th, 2010

Together we can continue building a global metadata infrastructure, and I’m tasking you with helping. How can you do that?

For evangelists, practitioners, and consultants:

  • Thanks for bringing Linked Data to where it is today! We’re counting on you for even more yummy cross-disciplinary Linked Data!
  • What tools and applications are most urgently needed? Researchers and developers need to hear your use cases: please partner with them to share these needs!
  • How do you and your clients choose [terms, concepts, schemas, ontologies]? What helps the most?
  • Overall, what is working (and what is not)? How can we amplify what *is* working?

For Semantic Web researchers:

  • Build out the trust and provenance infrastructure.
  • Mature the query languages (e.g. SPARQL) [perhaps someone could say more about what this would mean?]
  • Building tools and applications for end-users is really important: value this work, and get to know some real use cases and end-users!

For information scientists:

  • How can we identify ‘universals’ across languages, disciplines, and cultures? Does the Colon classification help?
  • What are the best practices for sharing and reusing [terms, concepts, schemas, ontologies]? What is working and what is failing with metadata registries? What are the alternatives?

For managers, project leaders, and business people:

  • How do we create and justify the business case for Terminology services [like MIME types, library subject headings, New York Times Topics]?
  • Please collect and share your usage data! Do we need infrastructure for sharing usage data?
  • Share the economic and business successes of Linked Data!
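One reason the business case is worth making: consuming a terminology service can be nearly free even when curating one is expensive. Python’s standard library ships a miniature example of the MIME-type case mentioned above, the mimetypes module, which maps file extensions to the shared media-type vocabulary. A minimal sketch (file names are my own illustration):

```python
import mimetypes

# MIME types are a working terminology service: a shared, curated mapping
# from file extensions to standard media-type names, reused by browsers,
# mail clients, and web servers instead of each inventing its own labels.
for filename in ["report.pdf", "index.html", "mystery.xyz123"]:
    media_type, _encoding = mimetypes.guess_type(filename)
    print(filename, "->", media_type or "no shared term yet")
```

The curation cost lives elsewhere (in the registry of media types); consumers get the benefit almost for free. That asymmetry is exactly what makes the funding question for richer terminology services so hard.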

That ends the call to action, but here’s where it comes from.

Yesterday Stuart Weibel gave a talk called “Missing Pieces in the Global Metadata Landscape” [slideshare] at the InfoCom International Symposium in Tokyo. Stu asked 11 of us what those missing pieces were, posing three questions: about conceptual issues, about organizational impediments, and about the most important overall issue. This last question, “What is the most important missing infrastructural link in establishing globally interoperable metadata systems?”, is my favorite, so I’ll talk about it a little further.

Stu summarizes that the infrastructure is mostly there, but that broad adoption (of standards, conventions, and common practice) is key. Overall, these are the key issues he reports:

  • Tools to support and encourage the reuse of terms, concepts, schemas, ontologies (e.g., metadata registries, and more)
  • Widespread, cross-disciplinary adoption of a common metadata approach (Linked Data)
  • Query languages for the open web (SPARQL) are not fully mature
  • Trust and provenance infrastructure
  • Nothing’s missing… just use RDF, Linked Data, and the open web. The key is broad adoption, and that requires better tools and applications. It’s a social problem, not a technical problem.
  • The ability to identify ‘universals’ across languages, disciplines, and cultures – revive Ranganathan’s facets?
  • Terminology services [like MIME types, library subject headings, New York Times Topics] have long been proposed as important services, but they are expensive to create, curate, and manage, and the economic models are weak
  • Stuff that does not work is often obvious. We need usage data to see what does work, and amplify it

You may notice, now, that the “call” looks a little familiar!

Posted in information ecosystem, library and information science, semantic web | Comments (0)

Ebook pricing fail (Springer edition)

November 1st, 2010

I wrote Springer to ask about buying an ebook that’s not in our university subscriptions. They sell the print copy at €62.95, but the electronic copy, bought chapter by chapter, comes to €425.

Publishers: this is short-sighted (not to mention frustrating), especially when your customers are looking for a portable copy of a book they already own!

———- Forwarded message ———-
From: Springerlink, Support, Springer DE
Date: Fri, Oct 29, 2010 at 8:46 PM
Subject: WG: ebook pricing

Dear Jodi,

Thank you for your message.

On SpringerLink you can purchase online single journal articles and book chapters, but no complete ebooks.
eBooks are sold by Springer in topical eBook packages only.

with kind regards,
SpringerLink Support Team
eProduct Management & Innovation | SpringerLink Operations | + 49 (06221) 4878 743

—–Original Message—–
From: Jodi Schneider
Sent: Thursday, October 28, 2010 5:09 PM
To: MetaPress Support
Subject: ebook pricing


I’m interested in buying a copy of [redacted] as an ebook:[redacted]

This book has 17 chapters, which seem to be priced at 25 EUR each = 425 EUR.

But I could buy a print version, new, for 62.95 EUR:[redacted]

Can you help me get the ebook at this price?

Posted in books and reading, future of publishing | Comments (3)

Funding Models for Books

July 17th, 2010

Paying for books per copy is a model that “developed in response to the invention of the printing press”, and a Readercon panel discussed some alternatives.

Existing alternatives, as noted in Cecilia Tan’s summary of the panel:

  • the donation model
  • the Kickstarter model
  • the “ransom” model
  • the subscription or membership model
  • the “perks” model
  • the merchandising model
  • the collectibles model
  • the company or support grant model
  • the voting model
  • the hits/pageviews model

Any synergies with Kevin Kelly’s Better than Free?

via HTLit’s Readercon overview

Posted in books and reading, future of publishing | Comments (0)

How metadata could pay for newspapers

February 13th, 2010

What if newspapers published not just stories but databases? Dan Conover’s vision for the future of newspapers is inspired in part by his first reporting job, for NATO:

When we spotted something interesting, we recorded it in a highly structured way that could be accurately and quickly communicated over a two-way radio, to be transcribed by specialists at our border camp and relayed to intelligence analysts in Brussels.

The story, says Conover, is only one aspect of reporting. The other part? Gathering structured metadata, which could be stored in a database—or expressed as linked data. [1]

Newspapers already have classification systems and professional taxonomists. The New York Times’ classification system, in use since 1851, now aggregates stories from the archives in Times Topics, a website and API. [2]

What if, in addition to these classifications, each story had even more structured metadata?

Capturing metadata ranges from automatic to manual. Some automatic capture is already standard (timestamps) or could be (saving GPS coordinates from a photo), and some information needing manual capture (like the number of alarms of a fire) is already reported.

Dan compares the “old way” with his “new way”:

The old way:

Dan the reporter covers a house fire in 2005. He gives the street address, the date and time, who was victimized, who put it out, how extensive the fire was and what investigators think might have caused it. He files the story, sits with an editor as it’s reviewed, then goes home. Later, he takes a phone call from another editor. This editor wants to know the value of the property damaged in the fire, but nobody has done that estimate yet, so the editor adds a statement to that effect. The story is published and stored in an electronic archive, where it is searchable by keyword.

The new way:

Dan the reporter covers a house fire in 2010. In addition to a street address, he records a six-digit grid coordinate that isn’t intended for publication. His word-processing program captures the date and time he writes in his story and converts it to a Zulu time signature, which is also appended to the file.

As he records the names of the victimized and the departments involved in putting out the fire, he highlights each first reference for computer comparison. If the proper name he highlights has never been mentioned by the organization, Dan’s newswriting word processor prompts him to compare the subject to a list of near-matches and either associate the name with an existing digital file or approve the creation of a new one.

When Dan codes the story subject as “fire,” his word processor gives him a new series of fields to complete. How many alarms? Official cause? Forest fire (y/n)? Official damage estimate? Addresses of other properties damaged by the fire? And so on. Every answer he can’t provide is coded “Pending.”

Later, Dan sits with an editor as his story is reviewed, but a second editor decides not to call him at home because he sees the answer to the damage-estimate question in the file’s metadata. The story is published and archived electronically, along with extensive metadata that now exists in a relational database. New information (the name of victims, for instance) automatically generates new files, which are retained by the news organization’s database but not published.

And those information fields Dan coded as “Pending”? Dan and his editors will be prompted to provide that structured information later — and the prompting will continue until the data set is completed.

– Dan Conover in The “Lack of Vision” thing? Well, here’s a hopeful vision for you
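Conover’s “new way” is essentially a structured record with some fields captured automatically, some filled at write-up time, and a “Pending” placeholder that keeps nagging until the record is complete. A minimal sketch in Python (the field names are my own illustration, not Conover’s spec):

```python
from dataclasses import dataclass

PENDING = "Pending"  # placeholder that triggers later follow-up prompts

@dataclass
class FireStory:
    address: str
    grid_ref: str          # six-digit grid coordinate, not for publication
    timestamp_utc: str     # "Zulu" time, captured by the word processor
    alarms: object = PENDING
    official_cause: object = PENDING
    damage_estimate: object = PENDING

    def pending_fields(self):
        """Fields the newsroom should keep prompting for."""
        return [name for name, value in vars(self).items() if value == PENDING]

story = FireStory(address="123 Main St.", grid_ref="123456",
                  timestamp_utc="2010-02-13T14:05Z", alarms=2)
print(story.pending_fields())  # -> ['official_cause', 'damage_estimate']
```

The second editor’s damage-estimate question is answered straight from the metadata: if it’s still in the pending list, nobody phones Dan at home, and the prompting continues until the list is empty.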

And that data set? It might be saleable, even though each individual story had perhaps been given away for free. Dan highlights some possibilities, and entire industries have grown up around repackaging free and non-free data (e.g. U.S. Census data, phone book data). I think of mashups such as Everyblock and of hyperlocal news sites.

  1. Some news organizations, like the New York Times (see Linked Open Data) and the BBC (overview, tech blog), are already embracing linked data.
  2. I delved into Times Topics’ taxonomy and vocabulary in an earlier post.

Posted in future of publishing, information ecosystem, semantic web | Comments (1)

Google Books settlement: a monopoly waiting to happen

October 10th, 2009

Will Google Books create a monopoly? Some [1] people think [2] so. Brin claims it won’t:

If Google Books is successful, others will follow. And they will have an easier path: this agreement creates a books rights registry that will encourage rights holders to come forward and will provide a convenient way for other projects to obtain permissions.

– Sergey Brin, New York Times, A Library To Last Forever

Brin is wrong: the proposed Google Books settlement will not smooth the way for other digitization projects. It creates a red carpet for Google while leaving everyone else at risk of copyright infringement.

The safe harbor provisions apply only to Google. Anyone else who wants to use one of these books would face the draconian penalties of statutory copyright infringement if it turned out the book was actually still copyrighted. Even with all this effort, one will not be able to say with certainty that a book is in the public domain. To do that would require a legislative change – and not a negotiated settlement.

– Peter Hirtle, LibraryLawBlog: The Google Book Settlement and the Public Domain.

Monopoly is not the only risk. Others include [3] reader privacy, access to culture, and suitability for bulk users and some research uses (metadata, etc.). Too bad Brin isn’t acknowledging that!

Don’t know what all the fuss is with Google Books and the proposed settlement? Wired has a good outline from April.

  1. “Several European nations, including France and Germany, have expressed concern that the proposed settlement gives Google a monopoly in content. Since the settlement was the result of a class action against Google, it applies only to Google. Other companies would not be free to digitise books under the same terms.” (bolding mine) – Nigel Kendall, Times (UK) Online, Google Book Search: why it matters
  2. “Google’s five-year head start and its relationships with libraries and publishers give it an effective monopoly: No competitor will be able to come after it on the same scale. Nor is technology going to lower the cost of entry. Scanning will always be an expensive, labor-intensive project.” (bolding mine) – Geoffrey Nunberg, Chronicle of Higher Education, Google’s Book Search: A Disaster for Scholars (pardon the paywall)
  3. Of course there are lots of benefits, too!

Posted in books and reading, future of publishing, information ecosystem, intellectual freedom, library and information science | Comments (1)

Newspapers in an Age of Revolution (aka The Internet as an Agent of Change)

March 15th, 2009

Clay Shirky writes of newspapers in an age of revolution: 15 years of anticipated problems* viewed optimistically, patched with one-size-fits-all solutions. Those solutions don’t attack the main issue: “the core problem publishing solves — the incredible difficulty, complexity, and expense of making something available to the public — has stopped being a problem.” It’s a revolution, he says, drawing on the print revolution of the early 1400s, and no one knows what will happen.

The old stuff gets broken faster than the new stuff is put in its place. The importance of any given experiment isn’t apparent at the moment it appears; big changes stall, small changes spread. Even the revolutionaries can’t predict what will happen. Agreements on all sides that core institutions must be protected are rendered meaningless by the very people doing the agreeing. (Luther and the Church both insisted, for years, that whatever else happened, no one was talking about a schism.) Ancient social bargains, once disrupted, can neither be mended nor quickly replaced, since any such bargain takes decades to solidify.

And so it is today. When someone demands to know how we are going to replace newspapers, they are really demanding to be told that we are not living through a revolution. They are demanding to be told that old systems won’t break before new systems are in place. They are demanding to be told that ancient social bargains aren’t in peril, that core institutions will be spared, that new methods of spreading information will improve previous practice rather than upending it. They are demanding to be lied to.

There are fewer and fewer people who can convincingly tell such a lie.

Shirky sees the future of journalism as “overlapping special cases” with a variety of funding and business models. It’s a time for experimentation, and while he sees failure and risk, he has hope, too:

Many of these models will fail. No one experiment is going to replace what we are now losing with the demise of news on paper, but over time, the collection of new experiments that do work might give us the reporting we need.

Society needs reporting, not newspapers. That need is real, and worth restating:

Society doesn’t need newspapers. What we need is journalism. For a century, the imperatives to strengthen journalism and to strengthen newspapers have been so tightly wound as to be indistinguishable. That’s been a fine accident to have, but when that accident stops, as it is stopping before our eyes, we’re going to need lots of other ways to strengthen journalism instead.

When we shift our attention from ‘save newspapers’ to ‘save society’, the imperative changes from ‘preserve the current institutions’ to ‘do whatever works.’ And what works today isn’t the same as what used to work.

Go read the whole essay, then let it stew with other thoughts on the future of publishing.

*Circa 1993: “When a 14 year old kid can blow up your business in his spare time, not because he hates you but because he loves you, then you got a problem.”

Via John Dupuis’ post in Confessions of a Science Librarian.

Posted in future of publishing | Comments (0)

The News Ecosystem

March 14th, 2009

Yesterday, Steven Berlin Johnson spoke at SXSW about the information ecosystem and the future of news. Fortunately, for those of us playing at home, he blogged a transcript.

Johnson adds international and war reporting to investigative reporting as the areas at risk due to the implosion of news funding. Johnson envisions a bright future in other areas, citing a well-developed information ecosystem in technology, and comparing coverage of the 2008 and 1992 U.S. Presidential elections.

Extending his ecosystem metaphor, Johnson introduces technology journalism as the “old-growth forest” of web journalism. Ecologists use (real-world) old growth “to research natural ecosystems”, so by extension, Johnson says, “it’s much more instructive to anticipate the future of investigative journalism by looking at the past of technology journalism”. While this argument holds no water, it’s certainly suggestive.

in the long run, we’re going to look back at many facets of old media and realize that we were living in a desert disguised as a rain forest. … most of what we care about in our local experience lives in the long tail. We’ve never thought of it as a failing of the newspaper that its metro section didn’t report on a deli closing, because it wasn’t even conceivable that a big centralized paper could cover an event with such a small radius of interest.

But of course, that’s what the web can do. … As we get better at organizing all that content – both by selecting the best of it, and by sorting it geographically – our standards about what constitutes good local coverage are going to improve.

As Johnson envisions, “Five years from now, if someone gets mugged within a half mile of my house, and I don’t get an email alert about it within three hours, it will be a sign that something is broken.”

This is all by way of introduction to his new company, which provides geographic search and alerting.

Johnson concludes, in part, by examining the filtering problem, and turning it into an opportunity:

Now there’s one objection to this ecosystems view of news that I take very seriously. It is far more complicated to navigate this new world than it is to sit down with your morning paper. There are vastly more options to choose from, and of course, there’s more noise now. For every Ars Technica there are a dozen lame rumor sites that just make things up with no accountability whatsoever. I’m confident that I get far more useful information from the new ecosystem than I did from traditional media alone fifteen years ago, but I pride myself on being a very savvy information navigator. Can we expect the general public to navigate the new ecosystem with the same skill and discretion?

Johnson expects (future) newspapers to function as filters, aiding the public in getting the news:

Information Ecosystem, as envisioned by Steven Berlin Johnson

Johnson does not address who’s going to pay for the filtering. He’s ready for a new model, but leaves that to the industry to discover for itself: “Measured by pure audience interest, newspapers have never been more relevant.” While he acknowledges the short-term pain of the newspaper industry today, he worries:

we’re going to spend so much time trying to figure out how to keep the old model on life support that we won’t be able to help invent a new model that actually might work better for everyone. The old growth forest won’t just magically grow on its own, of course, and no doubt there will be false starts and complications along the way.

The entire transcript is well worth a read.

Via Steven Johnson on twitter.

Posted in future of publishing, information ecosystem | Comments (1)