Apple seizes control of iOS purchase chain: enforces 30% cut for Apple by prohibiting sales-oriented links from apps to the Web

February 16th, 2011
by jodi

Apple’s press release about its “new subscription services” seems at first innocuous, and the well-crafted quote ((

“Our philosophy is simple—when Apple brings a new subscriber to the app, Apple earns a 30 percent share; when the publisher brings an existing or new subscriber to the app, the publisher keeps 100 percent and Apple earns nothing,” said Steve Jobs, Apple’s CEO. “All we require is that, if a publisher is making a subscription offer outside of the app, the same (or better) offer be made inside the app, so that customers can easily subscribe with one-click right in the app. We believe that this innovative subscription service will provide publishers with a brand new opportunity to expand digital access to their content onto the iPad, iPod touch and iPhone, delighting both new and existing subscribers.”

– Steve Jobs at “Apple Launches Subscriptions on the App Store”)) from Steve Jobs has been widely reposted:
“when Apple brings a new subscriber to the app, Apple earns a 30 percent share; when the publisher brings an existing or new subscriber to the app, the publisher keeps 100 percent and Apple earns nothing.” Yet analysts reading between the lines have been less than pleased.

Bad for publishers

The problems for publishers? (See also “Steve Jobs to pubs: Our way or highway“)

  • Apple takes a 30% cut of all in-app purchases ((Booksellers call this “the agency model“.))
  • Apps may not bypass in-app purchase: apps may not link to an external website (such as Amazon) ((Apple has confirmed that Kindle’s “Shop in Kindle Store” must be removed.)) that allows customers to buy content or subscriptions.
  • Content available for purchase in the app cannot be cheaper elsewhere.
  • The customer’s demographic information resides with Apple, not with the publisher. Customers must opt-in to share their name, email, and zipcode with the publisher, though Apple will of course have this information.
  • Limited reaction time; changes will be finalized by June 30th.

Bad for customers?

And there are problems for customers, too.

  • Reduction of content available in apps (likely for the near-term).
  • More complex, clunky purchase workflows (possible).
    Publishers may sell material only outside of apps, from their own website, to avoid paying 30% to Apple. Will we see a proliferation of publisher-run stores?
  • Price increases to cover Apple’s commission (likely).
    If enacted, these must apply to all customers, not just iOS device users.
  • Increased lockdown of content in the future (likely).
    Apple already prevents some iBooks customers from reading books they bought and paid for, using extra DRM that affects some jailbroken devices, even though jailbreaking is explicitly legal in the United States and carrier-unlocked, SIM-free phones are not available in the U.S.

More HTML5 apps?

The upside? Device-independent HTML5 apps may see wider adoption. HTML5 mobile apps work well on iOS, on other mobile platforms, and on laptops and desktops.

For ebooks, HTML5 means Ibis Reader and Book.ish. For publishers looking to break free of Apple, yet satisfy customers, Ibis Reader may be a particularly good choice: this year they are focusing on licensing Ibis Reader, as Liza Daly’s Threepress announced in a savvy and well-timed post, anticipating Apple’s announcement. Having been a beta tester of Ibis Reader, I can recommend it!

If you know of other HTML5 ebook apps, please leave them in the comments.

Posted in books and reading, future of publishing, information ecosystem, iOS: iPad, iPhone, etc. | Comments (0)

Supporting Reading

January 21st, 2011
by jodi

Yesterday I spoke at Beyond the PDF about use cases for reading. Slides are below. The presentation was also webcast; the video is now on YouTube (part of the Beyond the PDF video playlist) and embedded below.

Thanks to the DERI Social Software Unit for feedback on an earlier version of this presentation. I’m particularly grateful to Allen Renear and Carole Palmer from UIUC, whose call for ontology-aware reading tools pushed me down this path, and to Geoffrey Bilder, who presented these ideas in a way I couldn’t help thinking about and remixing. Cathy Marshall’s clear exposition in Reading and Writing the Electronic Book was fundamental to digging deeper.

Posted in books and reading, future of publishing, library and information science, scholarly communication, social semantic web | Comments (2)

6 quotes from Beyond the PDF – Annotations sessions

January 19th, 2011
by jodi

Moderator Ed Hovy picked out 6 quotes to summarize Beyond the PDF’s sessions on Annotation.

Papers are stories that persuade with data.

But as authors we are lazy and undisciplined.

Communicating between humans and humans and humans and machines.

I should be interested in ontologies, but I just can’t work up the enthusiasm.

Christmas tree of hyperlinks.

You will get sued.

Posted in future of publishing, information ecosystem | Comments (1)

“How does this make you feel?”

January 10th, 2011
by jodi

GetSatisfaction‘s “How does this make you feel?” intrigues me: why do people answer this? Conventional wisdom says that people don’t classify their posts.
[Screenshot: GetSatisfaction asks “How does this make you feel?”]
Presumably it’s polite to ask people how they’re doing — at least in some situations. And technically there’s no post classification going on here: it’s mood classification, which most of us are trained in from a young age.

Get Satisfaction aggregates the mood on each discussion thread:
[Screenshot: Get Satisfaction’s “The Mood in Here”]

Posted in argumentative discussions, PhD diary, social web | Comments (2)

Wanted: the ultimate mobile app for scholarly ereading

January 7th, 2011
by jodi

Nicole Henning suggests that academic libraries and scholarly presses work together to create the ultimate mobile app for scholarly ereading. I think about the requirements a bit differently, in functional terms.

The main functions are obtaining materials, reading them, organizing them, keeping them, and sharing them.

For obtaining materials, the key new requirement is to simplify authentication: handle campus authentication systems and personal subscriptions. Multiple credentialed identities should be supported. A secondary consideration is that RSS feeds (e.g. for journal tables of contents) should be supported.

For reading materials, the key requirement is to support multiple formats in the same application. I don’t know of a web app or mobile app that supports PDF, EPUB, and HTML. Reading interfaces matter: look to Stanza and Ibis Reader for best-in-class examples.
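To make “multiple formats in the same application” concrete, here is a minimal sketch, with hypothetical viewer names, of how a reader might dispatch a file to a format-specific renderer:

```python
import os

# Hypothetical viewer names: route each supported format to its own renderer.
READERS = {
    ".pdf": "pdf_view",
    ".epub": "epub_view",
    ".html": "html_view",
}

def pick_reader(filename):
    """Return the viewer for a file's format, or a fallback for unknown ones."""
    ext = os.path.splitext(filename)[1].lower()
    return READERS.get(ext, "download_only")

print(pick_reader("marshall-reading.pdf"))  # pdf_view
print(pick_reader("novel.EPUB"))            # epub_view
print(pick_reader("slides.key"))            # download_only
```

A real app would of course sniff MIME types rather than trust extensions, but the point stands: one application, one dispatch point, many formats.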

For organizing materials, the key is synergy between the user’s data and existing data. Allow tags, folders, and multiple collections. But also leverage existing publisher and library metadata. Keep it flexible, allowing the user to modify metadata for personal use (e.g. for consistency or personal terminology) and to optionally submit corrections.

For keeping materials, import, export, and sync content from the user’s chosen cloud-based storage and WebDAV servers. No other device (e.g. laptop or desktop) should be needed.

For sharing materials, support lightweight micropublishing on social networks and email; networks should be extensible and user-customizable. Sync to or integrate with citation managers and social cataloging/reading list management systems.

Regardless of the ultimate system, I’d stress that device independence is important, meaning that an HTML5 website would probably be the place to start: look to Ibis Reader as a model.

Posted in books and reading, future of publishing, information ecosystem, library and information science, scholarly communication | Comments (5)

Searching for LaTeX code (Springer only)

January 6th, 2011
by jodi

Springer’s LaTeX search service (example results) allows searching for LaTeX strings or finding the LaTeX equations in an article. Since LaTeX is used to mark up equations in many scientific publications, this could be an interesting way to find related work or view an equation-centric summary of a paper.

You can provide a LaTeX string, and Springer says that besides exact matches they can return similar LaTeX strings:
[Screenshot: exact matches to a LaTeX search]
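As an example of such a search string, you would paste raw LaTeX for an equation (a made-up example, not drawn from any particular Springer article):

```latex
\int_{-\infty}^{\infty} e^{-x^{2}}\,dx = \sqrt{\pi}
```

A similar-match search would presumably also surface variants, such as the same integral written with different variable names or spacing.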

Or, you can search by DOI or title to get all the equations in a given publication:
[Screenshot: results for a particular title]

Under each equation in the search results you can click “show LaTeX code”:
[Screenshot: show the LaTeX code for an equation]
Right now it searches only Springer’s publications; Springer would like to add open access databases and preprint servers. Coverage even in Springer journals seems spotty: I couldn’t find two particular discrete math articles, so I’ve written to Springer for clarification. As far as I can tell, there’s no way to get from SpringerLink to this LaTeX search yet. That’s a shame, because “show all equations in this article” would be useful, even with the proviso that only LaTeX equations are shown.

A nice touch is their sandbox where you can test LaTeX code, with a LaTeX dictionary conveniently below.

via Eric Hellman

Posted in future of publishing, information ecosystem, library and information science, math, scholarly communication | Comments (1)

Happy Public Domain Day!

January 2nd, 2011
by jodi

Today, in many countries around the world, new works become public property: January 1st every year is Public Domain Day. Material in the public domain can be used, remixed and shared freely — without violating copyright and without asking permission.

However, in the United States, not a single new work entered the public domain today. Americans must wait 8 more years: Under United States copyright law, nothing more will be added to the public domain until January 1, 2019.

Until the 1970s, the maximum copyright term was 56 years. Under that law, Americans would have been able to truly celebrate Public Domain Day:

  1. All works published in 1954 would be entering the public domain today (a 56-year term starting in 1954 runs out at the end of 2010).
  2. Up to 85% of all copyrighted works from 1982 would be entering the public domain today, because copyright had to be renewed after 28 years and the vast majority never was (Copyright Office and Duke).

Instead, only works published before 1923 are conclusively in the public domain in the U.S. today. What about post-1923 publications? In the United States, it’s complicated ((609 pages worth of complicated)).

For more information on Public Domain Day and the United States, Duke’s Center for the Study of the Public Domain has a series of useful pages.

Posted in books and reading, information ecosystem, intellectual freedom, library and information science | Comments (0)

Making provenance pay

December 19th, 2010
by jodi

Provenance, Dan Conover says, can drive the adoption of semantic technologies:

Imagine a global economy in which every piece of information is linked directly to its meaning and origin. In which queries produce answers, not expensive, time-consuming evaluation tasks. Imagine a world in which reliable, intelligent information structures give everyone an equal ability to make profitable decisions, or in many cases, profitable new information products. Imagine companies that get paid for the information they generate or collect based on its value to end users, rather than on the transitory attention it generates as it passes across a screen before disappearing into oblivion.

Now imagine copyright and intellectual property laws that give us practical ways of tracing the value of original contributions and collecting and distributing marginal payments across vast scales.

That’s the Semantic Economy.

– Dan Conover on the semantic economy (my emphasis added).
via Bora Zivkovic on Twitter

I wonder if he’s seen the W3 Provenance XG Final Report yet. Two parts are particularly relevant: the dimensions of provenance and the news aggregator scenario. Truly making provenance pay will require both Management of provenance (especially Access and Scale) and Content provenance around Attribution.

Go read the rest of what Dan Conover says about the semantic economy. Pay particular attention to the end: Dan says that he’s working on a functional spec for a Semantic Content Management System, an RDF-based middleware so easy that writers and editors will want to use it. I know you’re thinking of Drupal and the Semantic Desktop; we’ll see how his approach differs. He invites further conversation.

I’m definitely going to have a closer look at his ideas: I like the way he thinks, and this isn’t the first time I’ve noticed his ideas for making Linked Data profitable.

Posted in future of publishing, information ecosystem, PhD diary, scholarly communication, semantic web | Comments (0)

For LaTeX referencing glitches, check the \label location

December 15th, 2010
by jodi

Problem: LaTeX gives the section number instead of the figure number in a text reference.
Solution: Make sure the figure’s \label comes AFTER its \caption.

Correct:

\begin{figure}
\includegraphics{./images/myimage.png}
\caption{A beautiful, wonderful image.}
\label{fig:myimage}
\end{figure}

Wrong:

\begin{figure}
\includegraphics{./images/myimage.png}
\label{fig:myimage}
\caption{A beautiful, wonderful image.}
\end{figure}

\label records the most recently stepped counter, and it is \caption that steps the figure counter. A \label placed before the \caption therefore picks up whatever counter was incremented last, typically the current section.
If you’re getting section numbers instead of figure numbers from a \ref, check where the \label is specified.
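To see the fix in context, here is a minimal compilable document (the image is replaced by a \rule placeholder) in which \ref{fig:myimage} correctly resolves to the figure number:

```latex
\documentclass{article}
\begin{document}
\section{Results}

\begin{figure}
  \centering
  \rule{3cm}{2cm}  % stands in for \includegraphics{./images/myimage.png}
  \caption{A beautiful, wonderful image.}
  \label{fig:myimage}  % after \caption, so it records the figure counter
\end{figure}

Figure~\ref{fig:myimage} shows the image.
\end{document}
```

Swap the \caption and \label lines, and the reference prints the section number (1) instead.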

Posted in PhD diary, random thoughts | Comments (0)

Let’s link the world’s metadata!

December 9th, 2010
by jodi

Together we can continue building a global metadata infrastructure, and I’m asking for your help. How can you help?

For evangelists, practitioners, and consultants:

  • Thanks for bringing Linked Data to where it is today! We’re counting on you for even more yummy cross-disciplinary Linked Data!
  • What tools and applications are most urgently needed? Researchers and developers need to hear your use cases: please partner with them to share these needs!
  • How do you and your clients choose [terms, concepts, schemas, ontologies]? What helps the most?
  • Overall, what is working (and what is not)? How can we amplify what *is* working?

For Semantic Web researchers:

  • Build out the trust and provenance infrastructure.
  • Mature the query languages (e.g. SPARQL) [perhaps someone could say more about what this would mean?]
  • Building tools and applications for end-users is really important: value this work, and get to know some real use cases and end-users!

For information scientists:

  • How can we identify ‘universals’ across languages, disciplines, and cultures? Does the Colon Classification help?
  • What are the best practices for sharing and reusing [terms, concepts, schemas, ontologies]? What is working and what is failing with metadata registries? What are the alternatives?

For managers, project leaders, and business people:

  • How do we create and justify the business case for Terminology services [like MIME types, library subject headings, New York Times Topics]?
  • Please collect and share your usage data! Do we need infrastructure for sharing usage data?
  • Share the economic and business successes of Linked Data!

That ends the call to action, but here’s where it comes from.

Yesterday Stuart Weibel gave a talk called “Missing Pieces in the Global Metadata Landscape” [slideshare] at the InfoCom International Symposium in Tokyo. Stu asked 11 of us what those missing pieces were, posing three questions: about conceptual issues, organizational impediments, and the most important overall issue. This last question, “What is the most important missing infrastructural link in establishing globally interoperable metadata systems?”, is my favorite, so I’ll talk about it a little further.

Stu summarizes that the infrastructure is mostly there, but that broad adoption (of standards, conventions, and common practice) is key. Overall these are the key issues he reports:

  • Tools to support and encourage the reuse of terms, concepts, schemas, ontologies (e.g., metadata registries, and more)
  • Widespread, cross-disciplinary adoption of a common metadata approach (Linked Data)
  • Query languages for the open web (SPARQL) are not fully mature
  • Trust and provenance infrastructure
  • Nothing’s missing… just use RDF, Linked Data, and the open web.  The key is broad adoption, and that requires better tools and applications. It’s a social problem, not a technical problem.
  • The ability to identify ‘universals’ across languages, disciplines, and cultures – revive Ranganathan’s facets?
  • Terminology services [like MIME types, library subject headings, New York Times Topics] have long been proposed as important services, but they are expensive to create, curate, and manage, and the economic models are weak
  • Stuff that does not work is often obvious. We need usage data to see what does work, and amplify it

You may notice, now, that the “call” looks a little familiar!

Posted in information ecosystem, library and information science, semantic web | Comments (0)