Publishers from HighWire Press are experimenting with a plugin called SocialCite, which is intended to rate the evidence behind each citation. Like this:
So far a few publishers (including PNAS) have implemented it as a pilot. The Journal of Bone and Joint Surgery is apparently leading this effort; I’d be really interested in speaking with them further:
Find out more about SocialCite from their website or the slidedeck from their debut at the HighWire Press meeting.
I’m *very* curious to hear what people think of this — it really surprised me.
Posted in argumentative discussions, future of publishing, information ecosystem | Comments (0)
Jason Priem has a wonderful slidedeck on how to smoothly transition from today’s practices in scientific communication to the future. Here is my reading of the argument given in Jason’s slides:
Communicating science is a central and essential part of doing science, and we have always used the best technology available.
Yet currently, there are several problems with journals, the primary form of scholarly communication.
Journal publication is
- Hard to innovate
- Restrictive format: function follows form
- Inconsistent quality control
These problems are fixable, if we realize that journals serve four traditional functions: registration, certification, dissemination, and archiving.
By decoupling these functions into an à la carte publishing menu, we can fix the scholarly communication system. Decoupled scholarly outlets already exist. Jason mentions some outlets (I would say these mainly serve registration functions, maybe also dissemination ones):
- Math Overflow
- Faculty of 1000 Research
- the blag-o-sphere
Jason doesn’t mention here — but we could add to this list — systems for data publishing, e-science workflow, and open notebook science; these may fulfil registration and archiving functions. Among existing journal archiving systems, LOCKSS is the main player I’m familiar with.
To help with the certification function, we have altmetrics tools like Impact Story (Jason’s Sloan-funded project with Heather Piwowar).
Jason’s argument is well worth reading in full; it’s a well-articulated case for decoupling journal functions, with some detailed descriptions of altmetrics. The core argument is very solid and of wide interest: unlike previous critiques of “pre-publication peer review”, this argument will make sense to everyone who believes in big data, I think. There are other formats: a video of the talk and a draft article called “Decoupling the scholarly journal”.
Briefly noted in some of my earlier tweets.
Tags: altmetrics, decoupled journal, journal publishing, prepublication peer review
Posted in future of publishing, information ecosystem, scholarly communication | Comments (0)
Increasingly, I’m using Google Docs with collaborators. Yesterday, one of them pointed out the new “Research” search tab within Google Docs. (Tools->Research). I’m a bit surprised that your searches don’t show up on your collaborators’ screen. I’m particularly surprised that sharing searches doesn’t seem possible.
Google Docs' new 'Research' tab promotes search within Google Docs.
Apparently, it is pretty new. More at the Google Docs blog.
Tags: Google Docs, search
Posted in information ecosystem, random thoughts, scholarly communication | Comments (0)
One thing I can say about Kindle: error reporting is easier.
You report problems in context, by selecting the offending text. No need to explain where — just what the problem is.
Feedback receipt is confirmed, along with the next steps for how it will be used.
By contrast, to report problems to academic publishers, you often must fill out an elaborate form (e.g. Springer or Elsevier). Digging up contact information often requires going to another page (e.g. ACM). Some make you *both* go to another page to leave feedback and then fill out a form (e.g. EBSCO). Do any academic publishers keep the context of what journal article or book chapter you’re reporting a problem with? (If so, I’ve never noticed!)
Tags: crowdsourcing, error reporting, kindle, publishing, typos
Posted in future of publishing, information ecosystem, library and information science | Comments (0)
Altmetrics is hitting its stride: 30 months after the Altmetrics manifesto, there are 6 tools listed. This is great news!
I tried out the beta of a new commercial tool, the Altmetric Explorer, from Altmetric.com. They are building on the success and ideas of the academic and non-profit altmetrics community (but are not formally associated with Altmetrics.org). The Altmetric Explorer gives overviews of articles and journals by their social media mentions. You can filter by publisher, journal, subject, source, etc. The Altmetric Explorer is in closed beta, but you can try the basic functionality on articles with their open tool, the PLoS Impact Explorer.
The default view shows the articles mentioned most frequently in all sources, from all journals. Various filters are available.
Rolling over the donut shows which sources (Twitter, blogs, ...) an article was mentioned in.
Sparklines can be used to compare journals.
A 'people' tab lets you look at individual messages. Rolling over the photo or avatar shows the poster's profile.
Altmetric.com seems largely aimed at publishers. This may add promotional noise, not unlike coercive citation, if it is used as an evaluation metric as they suggest:
Want to see which journals have improved their profile in social media or with a particular news outlet?
Their API is currently free for non-commercial use. Altmetric.com has been crawling Twitter since July 2011, focusing on papers with PubMed, arXiv, and DOI identifiers. They also get data from Facebook, Google+, and blogs, but they don’t disclose how. (I assume that blogs using ResearchBlogging code are crawled, for instance.)
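For the curious, here is a minimal sketch of what a lookup against their API might look like in Python. The `api.altmetric.com/v1/{id_type}/{identifier}` URL shape is my assumption from their public documentation at the time, and the DOI below is arbitrary, chosen just for illustration — check Altmetric.com’s own docs before relying on any of this:

```python
# Sketch: build a lookup URL for the Altmetric.com v1 API.
# Assumptions: endpoint shape api.altmetric.com/v1/{id_type}/{identifier};
# supported id_type values include "doi", "pmid", and "arxiv".
from urllib.parse import quote

API_BASE = "https://api.altmetric.com/v1"

def altmetric_url(identifier: str, id_type: str = "doi") -> str:
    """Build the lookup URL for a given identifier (doi, pmid, or arxiv).

    DOIs contain '/', which the API expects unescaped, so keep it safe.
    """
    return f"{API_BASE}/{id_type}/{quote(identifier, safe='/')}"

# Example: look up attention data for a paper by DOI (illustrative DOI).
print(altmetric_url("10.1038/480426a"))
# Fetching that URL (e.g. with urllib.request) would return JSON with
# counts of tweets, blog posts, and other mentions, if the paper is tracked.
```

The response (when a paper is tracked) is a single JSON object per article, which makes it easy to script the kind of journal-level comparisons the Explorer offers interactively.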
Tags: Altmetric.com, altmetrics, Altmetrics.org
Posted in future of publishing, information ecosystem, random thoughts, scholarly communication, social web | Comments (0)
“Wikipedia discussions can thus be seen as a mirror of a stream of public consciousness, where those elements which are still not part of a shared consolidated heritage are object of a continuous negotiation among different points of view.”
There is No Deadline – Time Evolution of Wikipedia Discussions. (2012) Andreas Kaltenbrunner, David Laniado. arXiv:1204.3453v1
Via my summary of it for the Wikipedia Signpost; a longer summary is on AcaWiki.
Tags: points of view, Talk pages, Wikipedia
Posted in argumentative discussions, information ecosystem, PhD diary, social web | Comments (0)
>anyone with experiences and opinions about it?
Definitely worth trying — it focuses on your network in order to pull more interesting stuff to the fore. I put it in my bookmark bar when I first encountered it — it was briefly useful (it slowed down the stream and found things my network had heavily retweeted, making interesting suggestions of the few things I should read).
Its classification is okay — the genre classification (news/videos/pictures) seems decent — the message-type classification (Question/Opinion/Notification/Check-In/How-To/etc.) seems less exact, but may still be useful.
It kept suggesting the same things so I stopped checking it regularly — but I just checked it and am intrigued since they’ve added some features. In particular, they seem to be pulling out keywords (you can visualize one/all of people, topics, hashtags, message types–see screenshot). That might be especially interesting when doing exploratory searches.
There’s also a lot of customization possible — you can make your own rules for what to put in streams, and they have a wizard (screenshot below):
If there were a marketplace for sharing rules, that might be good — I’m not likely to spend time on customizing my own, so I’m just relying on the defaults (‘suggested for you’ and ‘popular’).
I’d be cautious of posting from Bottlenose without first checking the documentation — they accept posts of any length, but may also modify them (add hashtags, say).
I suppose for some people, the ability to pull in from multiple networks (for now Twitter & Facebook) could be useful, though there are lots of tools that do that.
I’d be curious to hear what other people think–have you found uses for Bottlenose?
PS: They seem to be going by Klout score for invites for now; if you can’t get in that way, give me a shout (I’ve got 10 invites if you want one).
I’m taking a listserv post as the source of a blog post again; channeling jrochkind I suppose.
Tags: Bottlenose, filtering, Klout, social media analysis, twitter
Posted in argumentative discussions, information ecosystem, social web | Comments (1)