Archive for the ‘future of publishing’ Category

Searching for LaTeX code (Springer only)

January 6th, 2011

Springer’s LaTeX search service (example results) allows searching for LaTeX strings or finding the LaTeX equations in an article. Since LaTeX is used to mark up equations in many scientific publications, this could be an interesting way to find related work or to view an equation-centric summary of a paper.

You can provide a LaTeX string, and Springer says that besides exact matches they can return similar LaTeX strings:
exact matches to a LaTeX search

Or, you can search by DOI or title to get all the equations in a given publication:
results for a particular title

Under each equation in the search results you can click “show LaTeX code”:
show the LaTeX code for an equation
Right now it searches only Springer’s publications; Springer would like to add open access databases and preprint servers. Coverage even in Springer journals seems spotty: I couldn’t find two particular discrete math articles, so I’ve written Springer for clarification. As far as I can tell, there’s no way to get from SpringerLink to this LaTeX search yet. That’s a shame, because “show all equations in this article” would be useful, even with the proviso that only LaTeX equations were shown.

A nice touch is their sandbox where you can test LaTeX code, with a LaTeX dictionary conveniently below.

via Eric Hellman

Tags: , , , ,
Posted in future of publishing, information ecosystem, library and information science, math, scholarly communication | Comments (1)

Making provenance pay

December 19th, 2010

Provenance, Dan Conover says, can drive the adoption of semantic technologies:

Imagine a global economy in which every piece of information is linked directly to its meaning and origin. In which queries produce answers, not expensive, time-consuming evaluation tasks. Imagine a world in which reliable, intelligent information structures give everyone an equal ability to make profitable decisions, or in many cases, profitable new information products. Imagine companies that get paid for the information they generate or collect based on its value to end users, rather than on the transitory attention it generates as it passes across a screen before disappearing into oblivion.

Now imagine copyright and intellectual property laws that give us practical ways of tracing the value of original contributions and collecting and distributing marginal payments across vast scales.

That’s the Semantic Economy.

– Dan Conover on the semantic economy (my emphasis added).
via Bora Zivkovic on Twitter

I wonder if he’s seen the W3 Provenance XG Final Report yet. Two parts are particularly relevant: the dimensions of provenance and the news aggregator scenario. Truly making provenance pay will require both Management of provenance (especially Access and Scale) and Content provenance around Attribution.

Go read the rest of what Dan Conover says about the semantic economy. Pay particular attention to the end: Dan says that he’s working on a functional spec for a Semantic Content Management System, an RDF-based middleware so easy that writers and editors will want to use it. I know you’re thinking of Drupal and of the Semantic Desktop; we’ll see how he differentiates his approach. He invites further conversation.

I’m definitely going to have a closer look at his ideas: I like the way he thinks, and this isn’t the first time I’ve noticed his ideas for making Linked Data profitable.

Tags: , , , , , ,
Posted in future of publishing, information ecosystem, PhD diary, scholarly communication, semantic web | Comments (0)

The Social Semantic Web – a message for scholarly publishers

November 15th, 2010

I always appreciate how Geoffrey Bilder can manage to talk about the Social Semantic Web and early modern print in (nearly) the same breath. See for yourself in the presentation he gave to scholarly publishers at the International Society of Managing and Technical Editors last month.

Geoff’s presentation is outlined, to a large extent, in an interview he gave 18 months ago (search “key messages” to find the good bits). I hope to blog further about these, because Geoff has so many good things to say that deserve unpacking!

I especially love the timeline from slide 159, which shows that we’re just past the incunabula age of the Internet:

The Early Modern Internet

We're still in the Early Modern era of the Internet. Compare to the history of print.

Tags: , , , , , ,
Posted in future of publishing, information ecosystem, PhD diary, scholarly communication, semantic web, social semantic web, social web | Comments (3)

Accessing genomics workflows from Word documents with GenePattern

November 14th, 2010

What if you could rerun computational experiments from within a scientific paper?

The GenePattern add-on for Word for Windows integrates reusable genomic experiment pipelines into Microsoft Word. Readers can rerun the original or modified experiments from within the document by connecting to a GenePattern server.

Rerunning a pipeline inside Word

I don’t run Windows, so I took this screenshot from a video produced at the Broad Institute of MIT and Harvard, where GenePattern is developed.

Readers without Word for Windows can also access the experimental pipelines by exporting them from the document: just run a GenePatternDocumentExtractor command from a GenePattern server. The GenePattern public server was very easy to access and start using. Here’s what the GenePatternDocumentExtractor command looks like:

Running GenePatternDocumentExtractor at the GenePattern public server

Unfortunately, the jobs I ran didn’t extract any pipelines from the Institute’s sample DOC. I’ve sent in an inquiry (either I’m doing something wrong or there’s a bug; either way, it’s useful to report). I was very impressed that I could make my jobs public and then refer to them by URL in my email, to make clear exactly what I did.

The GenePattern add-on for Word is another find from the beyondthepdf list. Its development was funded by Microsoft. See also Accessible Reproducible Research by Jill P. Mesirov (Science, 327:415, 2010), doi:10.1126/science.1179653, which describes the underlying philosophy: a Reproducible Research System (RRS) made up of an environment for doing computational work (the Reproducible Research Environment, or RRE) and an authoring environment (the Reproducible Research Publisher, or RRP) that links back to the research system.

Tags: , , , , , ,
Posted in books and reading, future of publishing, information ecosystem, scholarly communication | Comments (1)

Utopia Documents: pulling scientific data into the PDF for interactive exploration

November 14th, 2010

What if data were accessible within the document itself?

Utopia Documents is a free PDF viewer which recognizes certain enhanced figures, and fetches the underlying data. This allows readers to view and experiment with the tables, graphs, molecular structures, and sequences in situ.


You can download Utopia Documents for Mac and Windows to view enhanced papers, such as those published in The Semantic Biochemical Journal.

These screencasts were made from pages 9 and 10 of PDF of a paper by the Manchester-based Utopia team: T. K. Attwood, D. B. Kell, P. Mcdermott, J. Marsh, S. R. Pettifer, and D. Thorne. Calling international rescue: knowledge lost in literature and data landslide! Biochemical Journal, Dec 2009. doi:10.1042/BJ20091474.

In an interview at the Guardian, Utopia’s Phillip McDermott says:

“Utopia Documents links scientific research papers to the data and to the community. It enables publishers to enhance their publications with additional material, interactive graphs and models. It allows the reader to access a wealth of data resources directly from the paper they are viewing, make private notes and start public conversations. It does all this on normal PDFs, and never alters the original file. We are targeting the PDF, since PDFs still have around 80% readership over online viewing.

“Semantics, loose-coupling, fingerprinting and linked-data are the key ingredients. All the data is described using ontologies, and a plug-in system allows third parties to integrate their database or tool within a few lines of script. We use fingerprinting to allow us to recognise what paper a user is reading, and to spot duplicates. All annotations are held remotely, so that wherever you view a paper, the result is the same.”

I’d still like to see a demo of the commenting functionality.

I’d also be particularly interested in the publisher perspective, about the production work that goes into creating the enhancements. Portland Press’s October news announces that they’ve been promoting Utopia at the Charleston conference and SSP, with an upcoming appearance at the STM Innovations Seminar.

Utopia came to my attention via Steve Pettifer’s mention.

Tags: , , , , , , , , ,
Posted in future of publishing, information ecosystem, library and information science, scholarly communication, semantic web, social semantic web | Comments (4)

A Model-View-Controller perspective of scholarly articles

November 13th, 2010

A scholarly paper is not a PDF. A PDF is merely one view of a scholarly paper. To push ‘beyond the PDF’, we need design patterns that allow us to segregate the user interface of the paper (whether it is displayed as an aggregation of triples, a list of assertions, a PDF, an ePub, HTML, …) from the thing itself.

Towards this end, Steve Pettifer has a Model-View-Controller perspective on scholarly articles, which he shared in a post on the Beyond the PDF listserv, where discussions are leading up to a workshop in January. I am awe-struck: I wish I’d thought of this way of separating the structure and explaining it.

I think a lot of the disagreement about the role of the PDF can be put down to trying to overload its function: to try to imbue it with the qualities of both ‘model’ and ‘view’. … One of the things that software architects (and I suspect designers in general) have learned over the years is that if you try to give something functions that it shouldn’t have, you end up with a mess; if you can separate out the concerns, you get a much more elegant and robust solution.

My personal take on this is that we should keep these things very separate, and that if we do this, then many of the problems we’ve been discussing become more clearly defined (and I hope, many of the apparent contradictions, resolved).

So… a PDF (or come to that, an e-book version or a html page) is merely a *view* of an article. The article itself (the ‘model’) is a completely different (and perhaps more abstract) thing. Views can be tailored for a particular purpose, whether that’s for machine processing, human reading, human browsing, etc etc.

[paragraph break inserted]

The relationship between the views and their underlying model is managed by the concept of a ‘controller’. For example, if we represent an article’s model in XML or RDF (its text, illustrations, associated nanopublications, annotations and whatever else we like), then that model can be transformed into any number of views. In the case of converting XML into human-readable XHTML, there are many stable and mature technologies (XSLT etc). In the case of doing the same with PDF, the traditional controller is something that generates PDFs.

[paragraph break inserted]

The thing that’s been (somewhat) lacking so far is the two-way communication between view and model (via controller) that’s necessary to prevent the views from ossifying and becoming out of date (i.e. there’s no easy way to see that comments have been added to the HTML version of an article’s view if you happen to be reading the PDF version, so the view here can rapidly diverge from its underlying model).

[paragraph break inserted, link added]

Our Utopia software is an attempt to provide this two-way controller for PDFs. I believe that once you have this bidirectional relationship between view and model, then the actual detailed affordances of the individual views (i.e. what can a PDF do well / badly, what can HTML do well / badly) become less important. They are all merely means of channeling the content of an article to its destination (whether that’s human or machine).

The good thing about having this ‘model view controller’ take on the problem is that only the model needs to be pinned down completely …

Perhaps separating out our concerns in this way — that is, treating the PDF as one possible representation of an article — might help focus our criticisms of the current state of affairs? I fear at the moment we are conflating the issues to some degree.

– Steve Pettifer in a Beyond the PDF listserv post

I’m particularly interested in hearing if this perspective, using the MVC model, makes sense to others.
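Steve’s separation can be sketched in a few lines of Python: one ‘model’ (here a plain dict), two ‘views’ rendered from it, and a ‘controller’ write path that keeps every view current. This is only an illustrative toy; the field names, functions, and content are all invented, not from any real publishing system.

```python
# A toy 'model' of an article: the single source of truth.
article_model = {
    "title": "Calling international rescue",
    "authors": ["T. K. Attwood", "S. R. Pettifer"],
    "body": "Knowledge lost in literature and data landslide!",
    "annotations": [],  # comments live in the model, not in any one view
}

def render_html(model):
    """One possible view: human-readable HTML."""
    authors = ", ".join(model["authors"])
    notes = "".join(f"<li>{n}</li>" for n in model["annotations"])
    return (f"<h1>{model['title']}</h1><p>{authors}</p>"
            f"<p>{model['body']}</p><ul>{notes}</ul>")

def render_plaintext(model):
    """Another view of the same model, e.g. for machine processing."""
    return "\n".join([model["title"], *model["authors"], model["body"]])

def add_annotation(model, note):
    """The 'controller' write path: a comment added via any view updates
    the shared model, so every other view stays current."""
    model["annotations"].append(note)
```

The point of the sketch is Steve’s last paragraph: because both views are regenerated from the one model, an annotation added “in the PDF” would show up in the HTML too, and the views cannot drift apart.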

Tags: , , , , , , ,
Posted in books and reading, future of publishing, information ecosystem, library and information science, scholarly communication, social semantic web | Comments (9)

Ebook pricing fail (Springer edition)

November 1st, 2010

I wrote Springer to ask about buying an ebook that’s not in our university subscriptions. They sell the print copy at €62.95, but the electronic copy comes to €425, chapter by chapter.

Publishers: this is short-sighted (not to mention frustrating), especially when your customers are looking for a portable copy of a book they already own!

———- Forwarded message ———-
From: Springerlink, Support, Springer DE
Date: Fri, Oct 29, 2010 at 8:46 PM
Subject: WG: ebook pricing

Dear Jodi,

Thank you for your message.

On SpringerLink you can purchase online single journal articles and book chapters, but no complete ebooks.
eBooks are sold by Springer in topical eBook packages only.

with kind regards,
SpringerLink Support Team
eProduct Management & Innovation | SpringerLink Operations
support.springerlink@springer.com | + 49 (06221) 4878 743
www.springerlink.com

—–Original Message—–
From: Jodi Schneider
Sent: Thursday, October 28, 2010 5:09 PM
To: MetaPress Support
Subject: ebook pricing

Hi,

I’m interested in buying a copy of [redacted] as an ebook:
http://www.springerlink.com/content/[redacted]

This book has 17 chapters, which seem to be priced at 25 EUR each = 425 EUR.

But I could buy a print version, new at springer.com for 62.95 EUR:
http://www.springer.com/mathematics/book/[redacted]

Can you help me get the ebook at this price?
Thanks!
-Jodi
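For what it’s worth, the chapter-by-chapter arithmetic in the email above checks out (all figures are from the correspondence):

```python
# Price comparison using the figures quoted above.
chapters = 17
price_per_chapter_eur = 25.0
print_price_eur = 62.95

ebook_total_eur = chapters * price_per_chapter_eur  # 425.0
markup = ebook_total_eur / print_price_eur          # about 6.75x the print price

print(f"ebook total: EUR {ebook_total_eur:.2f}")
print(f"markup over print: {markup:.2f}x")
```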

Tags: , , , ,
Posted in books and reading, future of publishing | Comments (3)

Quoted in Inside Higher Ed

July 17th, 2010

Earlier this week, Inside Higher Ed published an article about wikis in higher education. I’m quoted in connection with my work ((I used to be AcaWiki’s Community Liaison and now contribute summaries and help administer the wiki.)) with AcaWiki, which gathers summaries of research papers, books, etc.

The article was publicized with a tweet asking “Why haven’t #wikis revolutionized scholarship?”

Of course, I’d rather ask “how have wikis impacted scholarship?” — though that’s less sexy! First, the largest impact is in technological infrastructure: it’s now commonplace to use collaborative, networked tools with built-in version control (though “wiki” isn’t what we’d use to describe Google Docs, Etherpad, or its many clones). Second, wikis are ubiquitous in research, if you look in the right places (nLab, OpenWetWare, and numerous departmental wikis). Third, “revolutions” take time, and academia is essentially conservative and slow-moving. For instance, ejournals (~15 years old and counting) are only just starting to depart significantly from the paper form (with multimedia inclusions, data storage, public comments, overlay journals, post-publication peer review, etc.). Wikis have been used for teaching since roughly 2002 ((see e.g. Bergin, J. (2002). Teaching on the wiki web. In Proceedings of the 7th annual conference on Innovation and technology in computer science education (pp. 195-195). Aarhus, Denmark: ACM. doi:10.1145/544414.544473 and related source code)), meaning that academic wikis might be only about 8 years old at this point.

Other responses: Viva la wiki, says Brian Lamb, who was also interviewed for the article. Daniel Mietchen thinks big about the future of wikis for science.

.

Tags: , ,
Posted in future of publishing, higher education, information ecosystem, scholarly communication | Comments (0)

Funding Models for Books

July 17th, 2010

Paying for books per copy “developed in response to the invention of the printing press”, and a Readercon panel discussed some alternatives.

Existing alternatives, as noted in Cecilia Tan’s summary of the panel:

  • the donation model
  • the Kickstarter model
  • the “ransom” model
  • the subscription or membership model
  • the “perks” model
  • the merchandising model
  • the collectibles model
  • the company or support grant model
  • the voting model
  • the hits/pageviews model

Any synergies with Kevin Kelly’s Better than Free?

via HTLit’s Readercon overview

Tags: , ,
Posted in books and reading, future of publishing | Comments (0)

Locative texts

June 13th, 2010

A post at HLit got me thinking about locative hypertexts, which are meant to be read in a particular place.

Monday, Liza Daly shared an epub demo which pulls in the reader’s location, and makes decisions about the character’s actions based on movement. Think of it as a choose-your-own-adventure novel crossed with a geo-aware travel guide. It’s a brief proof-of-concept, and the most exciting part is that the code is free for the taking under the very permissive (GPL + commercial-compatible) MIT License. Thanks, Liza and Threepress for lowering barriers to experimentation with ebooks!
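The branching logic behind a demo like this fits in a few lines. The sketch below is in Python rather than the demo’s in-book JavaScript, and the story sites, coordinates, and scene names are all invented for illustration:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometres."""
    r = 6371.0  # Earth's mean radius
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = (math.sin(dp / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

# Hypothetical story locations (lat, lon).
STORY_SITES = {
    "harbour": (42.3601, -71.0589),
    "hilltop": (42.3736, -71.1097),
}

def next_scene(reader_lat, reader_lon, radius_km=1.0):
    """Pick the story branch for whichever site the reader is near,
    falling back to a default scene when they are out of range."""
    for scene, (lat, lon) in STORY_SITES.items():
        if haversine_km(reader_lat, reader_lon, lat, lon) <= radius_km:
            return scene
    return "wandering"
```

A reader standing at the harbour coordinates gets the harbour scene; anywhere else, the story falls through to a default branch, which is what makes the choose-your-own-adventure comparison apt.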

‘Locative hypertexts’ also bring to mind GPS-based guidebooks as envisioned in the 2007 Editus video ‘Possible ou probable…?’ ((Editus’ copy of the video)):

Tim McCormick summarizes:

In the 9-minute video, we get mouth-watering, partly tongue-in-cheek scenes of continental Europe’s quality-of-life — fantastic trains & pedestrian streetscapes, independent bookstores, delicious food, world-class museums, weekend getaway to Bruges, etc. — as the movie follows a couple through a riotous few days of E-book high living.

On their fabulously svelte, Kindle 2-like devices, they

  • read and purchase novels
  • enjoy reading on the beach
  • get multimedia museum guides
  • navigate foreign cities with ease
  • stay in multimedia contact with friends and family
  • collaborate with colleagues on shared virtual desktops while at sidewalk cafes
  • see many hi-resolution Breughel paintings online and off that I’m dying to see myself

etc.

Multimedia guidebooks ((e.g. the Lonely Planet city guide series for iPhone)) are approaching this vision. Combine them with (also-existing) turn-by-turn directions, and connectivity and privacy will be the largest remaining obstacles.

So then what about location-based storytelling? I got to thinking about the iPhone apps I’ve already encountered, which are intended for use in particular places:

  • Walking Cinema: Murder on Beacon Hill – a murder mystery/travel series based in Boston (available as an iPhone app and podcast).
  • Museum of the Phantom City: Other Futures – a multimedia map/alternate history of NYC architecture, described as a way to “see the city that could have been”. It maps never-built structures envisioned by Buckminster Fuller, Gaudi, and others – ideally while you’re “standing on the projects’ intended sites”.
  • Museum of London: Streetmuseum – a true history of London in photos, meant for use on the streets
  • Historic Earth – historical maps which could be interesting settings for historical locative storytelling

Tags: , , ,
Posted in books and reading, future of publishing, information ecosystem, iOS: iPad, iPhone, etc. | Comments (0)