Archive for the ‘future of publishing’ Category

Amplify your conference with an iPhone app

March 26th, 2010

Via Gene Golovchinsky, I learned of an iPhone app for CHI2010. What a great way to amplify the conference! Thanks to Justin Weisz and the rest of the CMU crew.

I was happy to browse the proceedings while lounging. The papers I mark show up in my personal schedule and in a reading list.

Paper view
Personalized conference schedule, generated from my selections
I think it’s an attractive alternative to making a paper list by hand, using some conferences’ clunky online scheduling tool, or circling events in large conference handouts. If you keep an iPhone/iPod in your pocket, the app could be used during the conference, but I might also want to print out my sessions on an index card. So exporting the list would be a good enhancement: in addition to printing, I’d like to send the list of readings directly to Zotero (or another bibliographic manager).
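
To make the export idea concrete, here is a minimal sketch in Python, with made-up paper data: write the selected readings out as an RIS file, one of the plain-text formats Zotero (and most other bibliographic managers) can import. This is purely illustrative; it is not how the CHI2010 app works, and the record fields are invented.

```python
# Hypothetical selected papers; a real app would pull these from the
# proceedings metadata rather than hard-coding them.
selected_papers = [
    {"title": "Example CHI Paper", "authors": ["Doe, Jane", "Roe, Richard"],
     "year": "2010", "booktitle": "Proceedings of CHI 2010"},
]

def to_ris(papers):
    """Render paper records as RIS conference-paper entries."""
    lines = []
    for p in papers:
        lines.append("TY  - CPAPER")             # record type: conference paper
        for author in p["authors"]:
            lines.append(f"AU  - {author}")
        lines.append(f"TI  - {p['title']}")
        lines.append(f"PY  - {p['year']}")
        lines.append(f"T2  - {p['booktitle']}")  # proceedings / conference name
        lines.append("ER  - ")                   # end of record
    return "\n".join(lines) + "\n"

# Write the file; Zotero imports .ris files directly.
with open("reading-list.ris", "w", encoding="utf-8") as f:
    f.write(to_ris(selected_papers))
```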

The advance program on the conference website still has some advantages: it’s easier to find out more about session types (e.g., alt.chi), and courses and workshops stand out, too.

Map of conference locations
Searching the proceedings

Wayfinding is hard in on-screen PDFs, so I hope that in the long run scholarly proceedings become more screen-friendly. While I currently find the iPhone appealing for reading fiction, scholarly reading on screen is harder: for one thing, it’s not linear.

I’d like to see integrated, reader-friendly environments for conference proceedings, with full-text papers. I envision moving seamlessly between the proceedings and an offline reading environment. Publishers can already support offline reading on a wide variety of smartphones: the HTML5-based Ibis Reader uses ePub, a standard based on XHTML and CSS. There’s no getting around the download step, but an integrated environment can be “download first, choose later”. I’ve never had much luck with CD-ROM and USB-based conference proceedings, except for pulling off two or three PDFs of papers to read later.
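
Since ePub comes up here, a bare-bones sketch of what such a file actually is may help: a zip archive of XHTML (and CSS) plus a little packaging metadata. The Python below builds a one-paper ePub with invented content; it omits the NCX table of contents that a fully conformant EPUB 2 file also needs, so treat it as an illustration of the structure rather than a production converter.

```python
import zipfile

container_xml = """<?xml version="1.0"?>
<container version="1.0" xmlns="urn:oasis:names:tc:opendocument:xmlns:container">
  <rootfiles>
    <rootfile full-path="OEBPS/content.opf" media-type="application/oebps-package+xml"/>
  </rootfiles>
</container>"""

content_opf = """<?xml version="1.0" encoding="utf-8"?>
<package xmlns="http://www.idpf.org/2007/opf" unique-identifier="bookid" version="2.0">
  <metadata xmlns:dc="http://purl.org/dc/elements/1.1/">
    <dc:title>Example Proceedings Paper</dc:title>
    <dc:language>en</dc:language>
    <dc:identifier id="bookid">urn:uuid:00000000-0000-0000-0000-000000000000</dc:identifier>
  </metadata>
  <manifest>
    <item id="paper" href="paper.xhtml" media-type="application/xhtml+xml"/>
  </manifest>
  <spine>
    <itemref idref="paper"/>
  </spine>
</package>"""

paper_xhtml = """<?xml version="1.0" encoding="utf-8"?>
<html xmlns="http://www.w3.org/1999/xhtml">
  <head><title>Example Proceedings Paper</title></head>
  <body><h1>Example Proceedings Paper</h1><p>Full text would go here.</p></body>
</html>"""

with zipfile.ZipFile("paper.epub", "w") as epub:
    # The mimetype entry must come first and must be stored uncompressed.
    epub.writestr("mimetype", "application/epub+zip", compress_type=zipfile.ZIP_STORED)
    epub.writestr("META-INF/container.xml", container_xml, compress_type=zipfile.ZIP_DEFLATED)
    epub.writestr("OEBPS/content.opf", content_opf, compress_type=zipfile.ZIP_DEFLATED)
    epub.writestr("OEBPS/paper.xhtml", paper_xhtml, compress_type=zipfile.ZIP_DEFLATED)
```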

Posted in future of publishing, information ecosystem, iOS: iPad, iPhone, etc., scholarly communication | Comments (0)

Code4Lib Journal: A Reminisce

March 23rd, 2010

The Code4Lib Journal published issue 9 today. It’s a bittersweet day for me, because today also marks the end of my editorship on the Journal. I helped found the Journal, thinking when I signed on that I could just do a little copyediting. Along the way, I’ve taken a turn at many tasks (regrettably, I put off taking a turn as Coordinating Editor for too long).

The Journal published issue 1 in December 2007, but work started in April that year. From the beginning, Jonathan Rochkind served as a moving force. His post “Code4Lib journal idea revival?” ((April 11, 2007 to Code4Lib listserv)) generated a number of responses, in part because he made it sound so easy:

So pretty much all we would need is:

1) An editorial committee or whatever. [Maybe some people imagined some
more ‘revolutionary’ egalitarian type of community process, but I figure
keep it simple, and an editorial committee seems simple, and also
provides some people who have explicitly taken responsibility for
getting things done.]
2) A place to host it. [maybe some kind of “institutional repository”
software would be cool, but in a pinch seems to me a WordPress
installation would do. Keep things simple and do-able and good enough is
my motto. I’m sure one of our institutions would donate server
space/cycles for a WordPress installation for such a journal. ]
3) Maybe a wiki would be nice for editorial committee discussions.
4) Maybe a simple one page description of the mission of the journal and
what the journal is looking for in articles. The editorial committee can
work on that on the hypothetical wiki.
5) Some articles. The editorial committee can solicit some for the first
‘issue’.

Step 6: Profit! I mean, some e-published articles. No profit, sorry.

After that post, 10 of us stepped forward to decide how to get the Journal off the ground. It surprised me how easy some things were: hosting (thanks ibiblio!), getting an ISSN, finding a sysadmin (the incomparable Jonathan Brinley)…

I spoke at Code4Lib2008, my first Code4Lib conference, thanks to Jonathan Brinley’s interest in sharing our publishing methods and Jonathan Rochkind’s encouragement. While we looked at other systems, we chose WordPress as a platform for its simplicity and its customizability. Jonathan Brinley had put in a proposal to Code4Lib2008 to talk about the Journal’s customizations ((The customizations are documented on the Code4Lib wiki, part of a category about the Code4Lib Journal.)). He graciously shared the podium with me and Ed Corrado to co-present “The Making of the Code4Lib Journal.”

Since then, the Journal has gone CC-BY (thanks to DOAJ’s prodding, and to qualify for the SPARC Europe Seal for Open Access Journals) and agreed to indexing in EBSCO. We’ve published numerous articles (73, plus 9 editorials, if I’ve got the count right) from authors on at least three continents. All in all, a great first couple of years!

While I’m sad to be leaving the Journal, I’m delighted to have been a part of it. A strong Editorial Committee, with new blood in the form of 5 new editors, makes it easier to pull back from this project. As Tom Keays said when introducing issue 7: Code4Lib Journal, Long May You Run!

Posted in future of publishing, scholarly communication | Comments (1)

Google Books settlement: a monopoly waiting to happen

October 10th, 2009

Will Google Books create a monopoly? Some ((“Several European nations, including France and Germany, have expressed concern that the proposed settlement gives Google a monopoly in content. Since the settlement was the result of a class action against Google, it applies only to Google. Other companies would not be free to digitise books under the same terms.” (bolding mine) – Nigel Kendall, Times (UK) Online, Google Book Search: why it matters )) people think ((“Google’s five-year head start and its relationships with libraries and publishers give it an effective monopoly: No competitor will be able to come after it on the same scale. Nor is technology going to lower the cost of entry. Scanning will always be an expensive, labor-intensive project.” (bolding mine) – Geoffrey Nunberg, Chronicle of Higher Education, Google’s Book Search: A Disaster for Scholars (pardon the paywall))) so. Brin claims it won’t:

If Google Books is successful, others will follow. And they will have an easier path: this agreement creates a books rights registry that will encourage rights holders to come forward and will provide a convenient way for other projects to obtain permissions.

-Sergey Brin, New York Times, A Library To Last Forever

Brin is wrong: the proposed Google Books settlement will not smooth the way for other digitization projects. It creates a red carpet for Google while leaving everyone else at risk of copyright infringement.

The safe harbor provisions apply only to Google. Anyone else who wants to use one of these books would face the draconian penalties of statutory copyright infringement if it turned out the book was actually still copyrighted. Even with all this effort, one will not be able to say with certainty that a book is in the public domain. To do that would require a legislative change – and not a negotiated settlement.

– Peter Hirtle, LibraryLawBlog: The Google Book Settlement and the Public Domain.

Monopoly is not the only risk. Others include ((Of course there are lots of benefits, too!)) reader privacy, access to culture, and suitability for bulk and some research users (metadata, etc.). Too bad Brin isn’t acknowledging that!

Don’t know what all the fuss is with Google Books and the proposed settlement? Wired has a good outline from April.

Posted in books and reading, future of publishing, information ecosystem, intellectual freedom, library and information science | Comments (1)

Paper as a Social Object: “creating conversations, collecting scribbles, instigating adventures”

June 19th, 2009

I love it when paper and digital formats are both used for what they do best. Like the Incidental:
“The Incidental is [a] feedback loop made out of paper and human interactions – timebound, situated and circulating in a place.” [Schulz and Webb]

annotated incidental 4/25/09


“Over in Milan at the Salone di Mobile they’ve created a thing called The Incidental. It’s like a guide to the event but it’s user generated and a new one is printed every day. When I say user generated, I mean that literally. People grab the current day’s copy and scribble on it. So they annotate the map with their personal notes and recommendations. Each day the team collect the scribbled on ones, scan them in and print an amalgamated version out again. You have to see it, to get it. But it’s great to see someone doing something exciting with ‘almost instant’ printing and for a real event and a real client too.

The actual paper is beautiful and very exciting. It has a fabulous energy that has successfully migrated from the making of the thing to the actual thing. Which is also brilliant and rare.” [Ben Terrett as quoted by Schulz and Webb]

The Incidental was created at and for Milan’s furniture/design fair, with funding from the British Council.

Posted in future of publishing, information ecosystem | Comments (0)

JCDL 2009 Poster Session in Second Life

June 18th, 2009

Last night I popped into Second Life for a poster session. JCDL 2009 is going on in Austin this week, and several of the posters were on display in the Digital Preserve region of SL. Chris Beer asked for some screenshots.

Here’s the whole poster space from outside. (Click each image for the ginormous full-size screenshot.)
Poster Session Entrance
My avatar (TR Telling) is in a bright orange UIUC GSLIS T-shirt, thanks to a class tour Richard Urban led last year. With a closer look, you can spot the screen that was used to project MinuteMadness.

Here are two posters, “Finding Centuries-Old Hyperlinks” and “Toward Automatic Generation of Image-Text Document Surrogates to Optimize Cognition”.
Two Posters: "Finding Centuries-Old Hyperlinks" and "Toward Automatic Generation of Image-Text Document Surrogates to Optimize Cognition"Poster numbers were used for the best poster competition, I believe.

Large text sizes really help viewing from afar; deft users can get a closer view with ‘mouse look’. I took a second screenshot of the “Finding Centuries-Old Hyperlinks” poster since it was my favorite. Xiaoyue (Elaine) Wang and Eamonn Keogh suggest cross-referencing manuscript pages using icon similarity.
Closer View of "Finding Centuries-Old Hyperlinks"
Handouts could be really useful for an SL poster session — I had to settle for taking screenshots. Clicking on the poster could give a copy of it, which could include links to more information. A mailbox could facilitate sending messages to the presenters.

One presenter ‘attended’ from New York. Several people gathered around her poster, which generated a lot of discussion.
Poster talk
In the left corner you can see one of the more visually striking posters, a study of LIS students’ impressions of the Kindle after using it for about three weeks.

To the right of the entrance was a sign that said “What did you think?”, which linked to a comment form to be completed on the Web. That form worked for me, but I wasn’t able to figure out how to submit the second, in-world comment form.

My avatar is just stepping down from a rotating lazy Susan that held a striking comment box. Getting a comment form and filling it out was straightforward; however, dragging and dropping the form back onto the box, as suggested, didn’t work for me.

I had several interesting conversations, most notably a chat outside in the Poster Garden with Javier Velasco Martin, who helped build and furnish the Preserve. Ed Fox was easily identifiable: his avatar’s first name is EdFox. For social gatherings, handles are useful, but for professional gatherings it can be reassuring to know who you’re talking with.

Here’s one last look at the dome from the outside. I love the bright aqua JCDL lettering. And, what trip to Second Life would be complete without some flying?
Flying by the JCDL Poster Session Dome
With a closer look, you can see the large comment box in the center of the dome.

Posted in computer science, future of publishing, higher education, library and information science | Comments (1)

Stop Intellectual Apartheid

March 30th, 2009

A call to action from BYU English professor Gideon Burton: Stop intellectual apartheid!

Let me illustrate how academic institutions enforce Intellectual Apartheid through a simple experiment you can perform right now. Let’s say that you are researching lingering effects of South Africa’s apartheid and you discovered (as I did using Google Scholar) a recent article, “Fantasmatic Transactions: On the Persistence of Apartheid Ideology” (published in Subjectivity in July, 2008 by D. Hook). Now for the experiment: click on this link to the full text of the article.

One of two things just occurred. Either you just gained immediate access to a PDF version of the full article; or, more likely, an authentication window popped up requesting your login credentials. It turns out that Palgrave-Macmillan publishes Subjectivity, and through their website one can get access to this article for a mere $30. Alternatively, one may subscribe to the journal for $503 per year.

You really don’t need to go to the developing world to recognize that advanced knowledge is a big club with stiff entrance fees. Even middle class Americans will think twice before throwing down $30 for a scholarly article. How likely will this knowledge ever reach scholars in Mexico or India? And just how broadly can the editors of Subjectivity expect it to reach when subscribing costs $503/year?

Gideon also gives suggestions for scholars, librarians, and administrators.

Via Cameron Neylon on friendfeed

Posted in future of publishing, higher education, information ecosystem, library and information science | Comments (1)

Newspapers in an Age of Revolution (aka The Internet as an Agent of Change)

March 15th, 2009

Clay Shirky writes of newspapers in an age of revolution: 15 years of anticipated problems* viewed optimistically, patched with one-size-fits-all solutions. Those solutions don’t attack the main issue: “the core problem publishing solves — the incredible difficulty, complexity, and expense of making something available to the public — has stopped being a problem.” It’s a revolution, he says, drawing on the print revolution of the early 1400s, and no one knows what will happen.

The old stuff gets broken faster than the new stuff is put in its place. The importance of any given experiment isn’t apparent at the moment it appears; big changes stall, small changes spread. Even the revolutionaries can’t predict what will happen. Agreements on all sides that core institutions must be protected are rendered meaningless by the very people doing the agreeing. (Luther and the Church both insisted, for years, that whatever else happened, no one was talking about a schism.) Ancient social bargains, once disrupted, can neither be mended nor quickly replaced, since any such bargain takes decades to solidify.

And so it is today. When someone demands to know how we are going to replace newspapers, they are really demanding to be told that we are not living through a revolution. They are demanding to be told that old systems won’t break before new systems are in place. They are demanding to be told that ancient social bargains aren’t in peril, that core institutions will be spared, that new methods of spreading information will improve previous practice rather than upending it. They are demanding to be lied to.

There are fewer and fewer people who can convincingly tell such a lie.

Shirky sees the future of journalism as “overlapping special cases” with a variety of funding and business models. It’s a time for experimentation, and while he sees failure and risk, he has hope, too:

Many of these models will fail. No one experiment is going to replace what we are now losing with the demise of news on paper, but over time, the collection of new experiments that do work might give us the reporting we need.

Society needs reporting, not newspapers. That need is real, and worth restating:

Society doesn’t need newspapers. What we need is journalism. For a century, the imperatives to strengthen journalism and to strengthen newspapers have been so tightly wound as to be indistinguishable. That’s been a fine accident to have, but when that accident stops, as it is stopping before our eyes, we’re going to need lots of other ways to strengthen journalism instead.

When we shift our attention from ‘save newspapers’ to ‘save society’, the imperative changes from ‘preserve the current institutions’ to ‘do whatever works.’ And what works today isn’t the same as what used to work.

Go read the whole essay, then let it stew with other thoughts on the future of publishing.

*Circa 1993: “When a 14 year old kid can blow up your business in his spare time, not because he hates you but because he loves you, then you got a problem.”

Via John Dupuis’ post in Confessions of a Science Librarian.

Posted in future of publishing | Comments (0)

The News Ecosystem

March 14th, 2009

Yesterday, Steven Berlin Johnson spoke at SXSW about the information ecosystem and the future of news. Fortunately, for those of us playing at home, he blogged a transcript.

Johnson adds international and war reporting to investigative reporting as the areas at risk due to the implosion of news funding. Johnson envisions a bright future in other areas, citing a well-developed information ecosystem in technology, and comparing coverage of the 2008 and 1992 U.S. Presidential elections.

Extending his ecosystem metaphor, Johnson introduces technology journalism as the “old-growth forest” of web journalism. Ecologists use (real-world) old growth “to research natural ecosystems”, so by extension, Johnson says, “it’s much more instructive to anticipate the future of investigative journalism by looking at the past of technology journalism”. While this argument holds no water, it’s certainly suggestive.

in the long run, we’re going to look back at many facets of old media and realize that we were living in a desert disguised as a rain forest. … most of what we care about in our local experience lives in the long tail. We’ve never thought of it as a failing of the newspaper that its metro section didn’t report on a deli closing, because it wasn’t even conceivable that a big centralized paper could cover an event with such a small radius of interest.

But of course, that’s what the web can do. … As we get better at organizing all that content – both by selecting the best of it, and by sorting it geographically – our standards about what constitutes good local coverage are going to improve.

As Johnson envisions, “Five years from now, if someone gets mugged within a half mile of my house, and I don’t get an email alert about it within three hours, it will be a sign that something is broken.”
This is all by way of introduction to his new company, outside.in, which provides geographic search and alerting.
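
To make the alerting idea concrete, here is a toy sketch of the core geographic filter, in Python with invented coordinates and headlines (this is not outside.in’s actual method): flag any geocoded item within half a mile of a home location.

```python
from math import radians, sin, cos, asin, sqrt

def miles_between(lat1, lon1, lat2, lon2):
    """Great-circle distance in miles between two lat/lon points (haversine)."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 3958.8 * asin(sqrt(a))  # Earth's mean radius is roughly 3958.8 miles

HOME = (40.7128, -74.0060)   # hypothetical home location (lat, lon)
ALERT_RADIUS_MILES = 0.5

# Invented, pre-geocoded news items standing in for a local news feed.
news_items = [
    {"headline": "Mugging reported on Elm St", "lat": 40.7150, "lon": -74.0010},
    {"headline": "Deli closing across town",   "lat": 40.7800, "lon": -73.9500},
]

for item in news_items:
    if miles_between(HOME[0], HOME[1], item["lat"], item["lon"]) <= ALERT_RADIUS_MILES:
        print("ALERT:", item["headline"])  # only the nearby mugging report triggers
```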

Johnson concludes, in part, by examining the filtering problem, and turning it into an opportunity:

Now there’s one objection to this ecosystems view of news that I take very seriously. It is far more complicated to navigate this new world than it is to sit down with your morning paper. There are vastly more options to choose from, and of course, there’s more noise now. For every Ars Technica there are a dozen lame rumor sites that just make things up with no accountability whatsoever. I’m confident that I get far more useful information from the new ecosystem than I did from traditional media alone fifteen years ago, but I pride myself on being a very savvy information navigator. Can we expect the general public to navigate the new ecosystem with the same skill and discretion?

Johnson expects (future) newspapers to function as filters, aiding the public in getting the news:

Information Ecosystem, as envisioned by Steven Berlin Johnson


Johnson does not address who’s going to pay for the filtering. He’s ready for a new model, but leaves that to the industry to discover for itself. “Measured by pure audience interest, newspapers have never been more relevant.” While he acknowledges the short-term pain of the newspaper industry today, he worries:

we’re going to spend so much time trying to figure out how to keep the old model on life support that we won’t be able to help invent a new model that actually might work better for everyone. The old growth forest won’t just magically grow on its own, of course, and no doubt there will be false starts and complications along the way.

The entire transcript is well worth a read.

Via Steven Johnson on twitter.

Posted in future of publishing, information ecosystem | Comments (1)

Somebody’s Got to Pay (for Investigative Reporting)

March 7th, 2009

Timothy Burke is my new hero. The death* of newspapers, he says, is a problem mainly because somebody’s got to pay for investigative reporting:

We don’t need newspapers to have film criticism or editorial commentary or consumer analysis of automobiles or comic strips or want ads or public records. It might be that existing online provision of those kinds of information could use serious improvement or has issues of its own. It might be that older audiences don’t know where to find some of that information, or have trouble consuming it in its online form. But there’s nothing that makes published newspapers or radio programming inherently superior at providing any of those functions, and arguably many things that make them quite inferior to the potential usefulness of online media. So throw the columnists and the reviewers and the lifestyle reporters off the newspaper liferaft.

So it comes down to independent, sustained investigation of public affairs. The argument that online media cannot provide this function comes down to money.

Burke gives more details and examples, and calls for new funding models, including philanthropic and/or foundation money. He concludes that “The end of the newspaper model of the last century doesn’t have to be the end of independent investigative reporting.”

Go read the whole thing.
*It seems like death and rebirth to me, especially with some major newspapers reinventing themselves online. But that’s another matter.

Burke first came to my attention last year, from a talk he gave to the LC Working Group on the Future of Bibliographic Control at March’s meeting on the Users and Uses of Bibliographic Data. Burke represented and reflected upon the user perspective, as an academic who searches catalogs outside his area of expertise.

Via John Dupuis’s friendfeed.

Posted in future of publishing, intellectual freedom | Comments (1)