A genre comprises a class of communicative events, the members of which share some set of communicative purposes. These purposes are recognized by the expert members of the parent discourse community and thereby constitute the rationale for the genre. This rationale shapes the schematic structure of the discourse and influences and constrains choice of content and style. Communicative purpose is both a privileged criterion and one that operates to keep the scope of a genre as here conceived narrowly focused on comparable rhetorical action. In addition to purpose, exemplars of a genre exhibit various patterns of similarity in terms of structure, style, content and intended audience. If all high probability expectations are realized, the exemplar will be viewed as prototypical by the parent discourse community. The genre names inherited and produced by discourse communities and imported by others constitute valuable ethnographic communication, but typically need further validation.
Tags: communication theory, discourse communities, genre
Posted in argumentative discussions, books and reading, information ecosystem | Comments (0)
Today I’m presenting a talk at the ISWC 2014 Workshop on Linked Science 2014—Making Sense Out of Data (LISC2014). The LISC2014 paper is joint work with Paolo Ciccarese, Tim Clark and Richard D. Boyce. Our goal is to make the evidence in a scientific knowledge base easier to access and audit — to make the knowledge base easier to maintain as scientific knowledge and drug safety regulations change. We are modeling evidence (data, methods, materials) from biomedical communications in the medication safety domain (drug-drug interactions).
The new architecture for the drug-drug interaction knowledge base is based on:
- the Micropublication Ontology, a new, best-in-class argumentation ontology for scientific publishing, described in Tim, Paolo, and Carole Goble‘s Journal of Biomedical Semantics article in July 2014 (doi:10.1186/2041-1480-5-28); or see the ontology itself
- the Open Annotation Data Model for linking quotes (in our case to link claims, data, methods, and materials in Micropublications directly to the source text in biomedical papers and drug labels)
- the Drug Interaction Knowledge Base (DIKB), an existing, hand-constructed knowledge base of evidence about drug-drug interactions, including its evidence taxonomy
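To give a flavor of how the Open Annotation Data Model links a claim to source text, here is a minimal sketch in Python that builds an annotation as JSON-LD. The TextQuoteSelector pattern (exact quote plus disambiguating prefix/suffix) is from the Open Annotation spec; the IRIs and the quoted sentence are invented for illustration, not taken from our knowledge base.

```python
import json

def make_annotation(body_iri, source_iri, exact, prefix="", suffix=""):
    """Build a minimal Open Annotation (as JSON-LD) whose target pins
    down an exact quote in a source document via a TextQuoteSelector."""
    return {
        "@context": "http://www.w3.org/ns/oa-context-20130208.json",
        "@type": "oa:Annotation",
        "hasBody": body_iri,  # e.g. a micropublication claim
        "hasTarget": {
            "@type": "oa:SpecificResource",
            "hasSource": source_iri,  # the paper or drug product label
            "hasSelector": {
                "@type": "oa:TextQuoteSelector",
                "exact": exact,    # the quoted evidence statement
                "prefix": prefix,  # context immediately before the quote
                "suffix": suffix,  # context immediately after it
            },
        },
    }

# Hypothetical identifiers, for illustration only
anno = make_annotation(
    body_iri="http://example.org/claims/ketoconazole-inhibits-cyp3a4",
    source_iri="http://example.org/labels/ketoconazole-2013",
    exact="Ketoconazole is a potent inhibitor of CYP3A4.",
)
print(json.dumps(anno, indent=2))
```

Anchoring to the quote itself, rather than to a page or character offset, is what keeps the link auditable even when the source document is repaginated or republished.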
This is part of a 4-year National Library of Medicine project, “Addressing gaps in clinically useful evidence on drug-drug interactions” (1R01LM011838-01).
Abstract of our paper, “Using the Micropublications ontology and the Open Annotation Data Model to represent evidence within a drug-drug interaction knowledge base.”:
Semantic web technologies can support the rapid and transparent validation of scientific claims by interconnecting the assumptions and evidence used to support or challenge assertions. One important application domain is medication safety, where more efficient acquisition, representation, and synthesis of evidence about potential drug-drug interactions is needed. Exposure to potential drug-drug interactions (PDDIs), defined as two or more drugs for which an interaction is known to be possible, is a significant source of preventable drug-related harm. The combination of poor quality evidence on PDDIs, and a general lack of PDDI knowledge by prescribers, results in many thousands of preventable medication errors each year. While many sources of PDDI evidence exist to help improve prescriber knowledge, they are not concordant in their coverage, accuracy, and agreement. The goal of this project is to research and develop core components of a new model that supports more efficient acquisition, representation, and synthesis of evidence about potential drug-drug interactions. Two Semantic Web models—the Micropublications Ontology and the Open Annotation Data Model—have great potential to provide linkages from PDDI assertions to their supporting evidence: statements in source documents that mention data, materials, and methods. In this paper, we describe the context and goals of our work, propose competency questions for a dynamic PDDI evidence base, outline our new knowledge representation model for PDDIs, and discuss the challenges and potential of our approach.
Citation: Schneider, Jodi, Paolo Ciccarese, Tim Clark, and Richard D. Boyce. “Using the Micropublications ontology and the Open Annotation Data Model to represent evidence within a drug-drug interaction knowledge base.” Linked Science 2014 at ISWC 2014.
Tags: argumentation ontologies, evidence informatics, LISC2014 at ISWC2014, micropublications, ontologies, Open Annotation Data Model
Posted in information ecosystem, scholarly communication, semantic web | Comments (0)
Publishers from HighWire Press are experimenting with a plugin called SocialCite. This is intended to rate the evidence, citation by citation. Like this:
So far a few publishers (including PNAS) have implemented it as a pilot. The Journal of Bone and Joint Surgery is apparently leading this effort; I’d be really interested in speaking with them further:
Find out more about SocialCite from their website or the slidedeck from their debut at the HighWire Press meeting.
I’m *very* curious to hear what people think of this — it really surprised me.
Posted in argumentative discussions, future of publishing, information ecosystem | Comments (0)
Jason Priem has a wonderful slidedeck on how to smoothly transition from today’s practices in scientific communication to the future. Here is my reading of the argument given in Jason’s slides:
Communicating science is a central and essential part of doing science, and we have always used the best technology available.
Yet currently, there are several problems with journals, the primary form of scholarly communication.
Journal publication is
- Hard to innovate
- Restrictive format: function follows form
- Inconsistent quality control
These problems are fixable, if we realize that journals serve four traditional functions: registration, certification, dissemination, and archiving.
By decoupling these functions into an a la carte publishing menu, we can fix the scholarly communication system. Decoupled scholarly outlets already exist. Jason mentions some outlets (I would say these mainly serve registration functions, maybe also dissemination ones):
- Math Overflow
- Faculty of 1000 Research
- the blag-o-sphere
Jason doesn’t mention here — but we could add to this list — systems for data publishing, e-science workflow, and open notebook science; these may fulfil registration and archiving functions. Among existing journal archiving systems, LOCKSS is the main player I’m familiar with.
To help with the certification functions, we have altmetrics tools like Impact Story (Jason’s Sloan-funded project with Heather Piwowar).
Jason’s argument is well worth reading in full; it’s a well-articulated argument for decoupling journal functions, with some detailed descriptions of altmetrics. The core argument is very solid, and of wide interest: unlike previous critiques of “pre-publication peer review”, this argument will make sense to everyone who believes in big data, I think. There are other formats: video of the talk and a draft article called “Decoupling the scholarly journal”.
Briefly noted in some of my earlier tweets.
Tags: altmetrics, decoupled journal, journal publishing, prepublication peer review
Posted in future of publishing, information ecosystem, scholarly communication | Comments (0)
Increasingly, I’m using Google Docs with collaborators. Yesterday, one of them pointed out the new “Research” search tab within Google Docs. (Tools->Research). I’m a bit surprised that your searches don’t show up on your collaborators’ screen. I’m particularly surprised that sharing searches doesn’t seem possible.
Google Docs' new 'Research' tab promotes search within Google Docs.
Apparently, it is pretty new. More at the Google Docs blog.
Tags: Google Docs, search
Posted in information ecosystem, random thoughts, scholarly communication | Comments (0)
One thing I can say about Kindle: error reporting is easier.
You report problems in context, by selecting the offending text. No need to explain where - just what the problem is.
Feedback receipt is confirmed, along with the next steps for how it will be used.
By contrast, to report problems to academic publishers, you often must fill out an elaborate form (e.g. Springer or Elsevier). Digging up contact information often requires going to another page (e.g. ACM). Some make you *both* go to another page to leave feedback and then fill out a form (e.g. EBSCO). Do any academic publishers keep the context of what journal article or book chapter you’re reporting a problem with? (If so, I’ve never noticed!)
Tags: crowdsourcing, error reporting, kindle, publishing, typos
Posted in future of publishing, information ecosystem, library and information science | Comments (0)
Altmetrics is hitting its stride: 30 months after the Altmetrics manifesto, there are 6 tools listed. This is great news!
I tried out the beta of a new commercial tool, The Altmetric Explorer, from Altmetric.com. They are building on the success and ideas of the academic and non-profit community (but are not formally associated with Altmetrics.org). The Altmetric Explorer gives overviews of articles and journals by their social media mentions. You can filter by publisher, journal, subject, source, etc. Altmetric Explorer is in a closed beta, but you can try the basic functionality on articles with their open tool, the PLoS Impact Explorer.
The default view shows the articles mentioned most frequently in all sources, from all journals. Various filters are available.
Rolling over the donut shows which sources (Twitter, blogs, ...) an article was mentioned in.
Sparklines can be used to compare journals.
A 'people' tab lets you look at individual messages. Rolling over the photo or avatar shows the poster's profile.
Altmetric.com seems largely aimed at publishers. This may add promotional noise, not unlike coercive citation, if it is used as an evaluation metric as they suggest:
Want to see which journals have improved their profile in social media or with a particular news outlet?
Their API is currently free for non-commercial use. Altmetric.com has been crawling Twitter since July 2011, focusing on papers with PubMed, arXiv, and DOI identifiers. They also get data from Facebook, Google+, and blogs, but they don’t disclose how. (I assume that blogs using ResearchBlogging code are crawled, for instance.)
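If you want to poke at the API yourself, here is a minimal sketch of a DOI lookup against Altmetric.com’s public v1 endpoint. I’m assuming the `api.altmetric.com/v1/doi/{doi}` path as documented on their site at the time of writing; the example DOI and the 404 handling are illustrative, and the free tier’s terms may change.

```python
import json
import urllib.error
import urllib.request

API_ROOT = "https://api.altmetric.com/v1"  # public v1 endpoint

def altmetric_url(doi):
    """Build the DOI lookup URL for the Altmetric API."""
    return "{}/doi/{}".format(API_ROOT, doi)

def fetch_attention(doi):
    """Return the attention record for a DOI as a dict, or None if
    Altmetric.com is not tracking that paper (the API answers 404)."""
    try:
        with urllib.request.urlopen(altmetric_url(doi)) as resp:
            return json.loads(resp.read().decode("utf-8"))
    except urllib.error.HTTPError as err:
        if err.code == 404:
            return None
        raise
```

Note that untracked papers come back as an HTTP 404 rather than an empty record, so a `None` result means “not tracked”, not “zero mentions”.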
Tags: Altmetric.com, altmetrics, Altmetrics.org
Posted in future of publishing, information ecosystem, random thoughts, scholarly communication, social web | Comments (0)
“Wikipedia discussions can thus be seen as a mirror of a stream of public consciousness, where those elements which are still not part of a shared consolidated heritage are object of a continuous negotiation among different points of view.”
There is No Deadline – Time Evolution of Wikipedia Discussions. (2012) Andreas Kaltenbrunner, David Laniado. arXiv:1204.3453v1
via my summary of it for the Wikipedia Signpost; there is a longer summary on AcaWiki
Tags: points of view, Talk pages, Wikipedia
Posted in argumentative discussions, information ecosystem, PhD diary, social web | Comments (0)
>anyone with experiences and opinions about it?
Definitely worth trying–it focuses on your network in order to pull more interesting stuff to the fore. I put it in my bookmark bar when I first encountered it — it was briefly useful (it slowed down the stream and surfaced things my network had heavily retweeted, making interesting suggestions of the few things I should read).
Its classification is ok — the genre classification seems decent (news/videos/pictures) — the message type classification (Question/Opinion/Notification/Check-In/How-To/etc) seems less exact, but may still be useful.
It kept suggesting the same things so I stopped checking it regularly — but I just checked it and am intrigued since they’ve added some features. In particular, they seem to be pulling out keywords (you can visualize one/all of people, topics, hashtags, message types–see screenshot). That might be especially interesting when doing exploratory searches.
There’s also a lot of customization possible — you can make your own rules for what to put in streams, and they have a wizard (screenshot below):
If there were a marketplace for sharing rules, that might be good — I’m not likely to spend time on customizing my own, so I’m just relying on the defaults (‘suggested for you’ and ‘popular’).
I’d be cautious of posting from Bottlenose without first checking the documentation — they accept posts of any length, but may also modify them (add hashtags, say).
I suppose for some people, the ability to pull in from multiple networks (for now Twitter & Facebook) could be useful, though there are lots of tools that do that.
I’d be curious to hear what other people think–have you found uses for Bottlenose?
PS: They seem to be going by Klout score for invites for now; if you can’t get in that way, give me a shout (I have 10 invites if you want one).
I’m taking a listserv post as the source of a blog post again; channeling jrochkind I suppose.
Tags: Bottlenose, filtering, Klout, social media analysis, twitter
Posted in argumentative discussions, information ecosystem, social web | Comments (1)