Archive for the ‘scholarly communication’ Category

Evidence Informatics

January 20th, 2015

I sent off my revised abstract to ECA Lisbon 2015, the European Conference on Argumentation. Evidence informatics, in 75 words:

Reasoning and decision-making are common throughout human activity. Increasingly, human reasoning is mediated by information technology, either to support collective action at a distance, or to support individual decision-making and sense-making.

We will describe the nascent field of “evidence informatics”, which considers how to structure reasoning and evidence. Comparing and contrasting evidence support tools in different disciplines will help determine reusable underlying principles, shared between fields such as legal informatics, evidence-based policy, and cognitive ergonomics.

Posted in argumentative discussions, information ecosystem, random thoughts, scholarly communication | Comments (0)

Linked Science 2014 paper: Using the Micropublications ontology and the Open Annotation Data Model to represent evidence within a drug-drug interaction knowledge base

October 19th, 2014

Today I’m presenting a talk in the ISWC 2014 Workshop on Linked Science 2014—Making Sense Out of Data (LISC2014). The LISC2014 paper is joint work with Paolo Ciccarese, Tim Clark and Richard D. Boyce. Our goal is to make the evidence in a scientific knowledge base easier to access and audit — to make the knowledge base easier to maintain as scientific knowledge and drug safety regulations change. We are modeling evidence (data, methods, materials) from biomedical communications in the medication safety domain (drug-drug interactions).

The new architecture for the drug-drug interaction knowledge base is based on two Semantic Web models: the Micropublications Ontology and the Open Annotation Data Model.

This is part of a 4-year National Library of Medicine project, “Addressing gaps in clinically useful evidence on drug-drug interactions” (1R01LM011838-01).

Abstract of our paper, “Using the Micropublications ontology and the Open Annotation Data Model to represent evidence within a drug-drug interaction knowledge base”:

Semantic web technologies can support the rapid and transparent validation of scientific claims by interconnecting the assumptions and evidence used to support or challenge assertions. One important application domain is medication safety, where more efficient acquisition, representation, and synthesis of evidence about potential drug-drug interactions is needed. Exposure to potential drug-drug interactions (PDDIs), defined as two or more drugs for which an interaction is known to be possible, is a significant source of preventable drug-related harm. The combination of poor quality evidence on PDDIs, and a general lack of PDDI knowledge by prescribers, results in many thousands of preventable medication errors each year. While many sources of PDDI evidence exist to help improve prescriber knowledge, they are not concordant in their coverage, accuracy, and agreement. The goal of this project is to research and develop core components of a new model that supports more efficient acquisition, representation, and synthesis of evidence about potential drug-drug interactions. Two Semantic Web models—the Micropublications Ontology and the Open Annotation Data Model—have great potential to provide linkages from PDDI assertions to their supporting evidence: statements in source documents that mention data, materials, and methods. In this paper, we describe the context and goals of our work, propose competency questions for a dynamic PDDI evidence base, outline our new knowledge representation model for PDDIs, and discuss the challenges and potential of our approach.

Citation: Schneider, Jodi, Paolo Ciccarese, Tim Clark, and Richard D. Boyce. “Using the Micropublications ontology and the Open Annotation Data Model to represent evidence within a drug-drug interaction knowledge base.” Linked Science 2014 at ISWC 2014.
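
To make this concrete, here is a minimal sketch (Python, rdflib) of the kind of linkage the paper describes: a PDDI claim, a statement that supports it, and an Open Annotation anchoring that statement to a source document. The mp: and oa: terms follow my reading of the two models; the drugs, URIs, and document are invented for illustration.

# A minimal sketch of the evidence linkage described above. The mp: and oa:
# terms follow the Micropublications ontology and the Open Annotation Data
# Model; everything under example.org is invented for illustration.
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF, RDFS

MP = Namespace("http://purl.org/mp/")        # Micropublications ontology
OA = Namespace("http://www.w3.org/ns/oa#")   # Open Annotation Data Model
EX = Namespace("http://example.org/pddi/")   # hypothetical knowledge base

g = Graph()
g.bind("mp", MP)
g.bind("oa", OA)

claim = EX["claim/simvastatin-amiodarone"]
stmt = EX["statement/42"]
ann = EX["annotation/1"]

# The PDDI assertion held in the knowledge base.
g.add((claim, RDF.type, MP.Claim))
g.add((claim, RDFS.label, Literal(
    "Amiodarone increases the risk of simvastatin-induced myopathy.")))

# A statement from a source document that supports the claim.
g.add((stmt, RDF.type, MP.Statement))
g.add((stmt, MP.supports, claim))

# An annotation anchoring the statement to the document it came from.
g.add((ann, RDF.type, OA.Annotation))
g.add((ann, OA.hasBody, stmt))
g.add((ann, OA.hasTarget, URIRef("http://example.org/docs/drug-label")))

print(g.serialize(format="turtle"))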

Posted in information ecosystem, scholarly communication, semantic web | Comments (0)

Altmetrics can help surface quality content: Jason Priem on the Decoupled Journal as the achievable future of scholarly communication

November 4th, 2012

Jason Priem has a wonderful slidedeck on how to smoothly transition from today’s practices in scientific communication to the future. Here is my reading of the argument given in Jason’s slides:

Communicating science is a central and essential part of doing science, and we have always used the best technology available.
Yet currently, there are several problems with journals, the primary form of scholarly communication.

Journal publication is

  • Slow
  • Closed
  • Hard to innovate

and has

  • Restrictive format: function follows form
  • Inconsistent quality control

These problems are fixable, if we realize that journals serve four traditional functions:

  1. Registration
  2. Archiving
  3. Dissemination
  4. Certification

By decoupling these functions into an à la carte publishing menu, we can fix the scholarly communication system. Decoupled scholarly outlets already exist. Jason mentions some outlets (I would say these mainly serve registration functions, maybe also dissemination ones):

  • ArXiv
  • Math Overflow
  • SSRN
  • Faculty of 1000 Research
  • the blag-o-sphere

Jason doesn’t mention here (but we could add to this list) systems for data publishing, e-science workflow, and open notebook science; these may fulfil registration and archiving functions. Among existing archiving systems, we could also add the journal archiving functions of LOCKSS, the main player I’m familiar with.

To help with the certification function, we have altmetrics tools like Impact Story (Jason’s Sloan-funded project with Heather Piwowar).

Jason’s argument is well worth reading in full; it’s a well-articulated case for decoupling journal functions, with some detailed descriptions of altmetrics. The core argument is very solid, and of wide interest: unlike previous arguments over “pre-publication peer review”, this one will make sense to everyone who believes in big data, I think. There are other formats: video of the talk ((Thanks to Siegfried Handschuh, who suggested the video of Jason giving this talk at Purdue.)) and a draft article called “Decoupling the scholarly journal” ((by Jason Priem and Bradley M. Hemminger, under review for the Frontiers in Computational Neuroscience special issue “Beyond open access: visions for open evaluation of scientific papers by post-publication peer review”)).


Briefly noted in some of my earlier tweets.

Posted in future of publishing, information ecosystem, scholarly communication | Comments (0)

Google Docs ‘research’ tab

May 19th, 2012

Increasingly, I’m using Google Docs with collaborators. Yesterday, one of them pointed out the new “Research” search tab within Google Docs. (Tools->Research). I’m a bit surprised that your searches don’t show up on your collaborators’ screen. I’m particularly surprised that sharing searches doesn’t seem possible.

Google Docs' new 'Research' tab promotes search within Google Docs.

Apparently, it is pretty new. More at the Google Docs blog.

Posted in information ecosystem, random thoughts, scholarly communication | Comments (0)

Commercial Altmetric Explorer aimed at publishers

May 7th, 2012

Altmetrics is hitting its stride: 18 months after the Altmetrics manifesto ((J. Priem, D. Taraborelli, P. Groth, C. Neylon (2010), Altmetrics: A manifesto, (v.1.0), 26 October 2010. http://altmetrics.org/manifesto)), there are now six tools listed. This is great news!

I tried out the beta of a new commercial tool, The Altmetric Explorer, from Altmetric.com. They are building on the success and ideas of the academic and non-profit community (but are not formally associated with Altmetrics.org). The Altmetric Explorer gives overviews of articles and journals by their social media mentions. You can filter by publisher, journal, subject, source, etc. The Altmetric Explorer is in closed beta, but you can try the basic functionality on articles with their open tool, the PLoS Impact Explorer.

"The default view shows the articles mentioned most frequently in all sources, from all journals. Various filters are available.


Rolling over the donut shows which sources (Twitter, blogs, ...) an article was mentioned in.


Sparklines can be used to compare journals.


A 'people' tab lets you look at individual messages. Rolling over the photo or avatar shows the poster's profile.

Altmetric.com seems largely aimed at publishers ((“Altmetric sustains itself by selling more detailed data and analysis tools to publishers, institutions and academic societies.”, says the bookmarklet page, to explain why that is free)). If it is used as an evaluation metric, as they suggest, this may add promotional noise, not unlike coercive citation. ((‘This quote from an editor as a condition for publication highlights the problem: “you cite Leukemia [once in 42 references]. Consequently, we kindly ask you to add references of articles published in Leukemia to your present article”’, from the abstract of Wilhite AW & Fong EA, “Coercive citation in academic publishing,” Science. 2012 Feb 3;335(6068):542-3. Summary on Science Daily.))

Want to see which journals have improved their profile in social media or with a particular news outlet? That’s the kind of question the Explorer is built to answer.

Their API is currently free for non-commercial use. Altmetric.com has been crawling Twitter since July 2011, focusing on papers with PubMed, arXiv, and DOI identifiers. They also get data from Facebook, Google+, and blogs, but they don’t disclose how. (I assume that blogs using ResearchBlogging code are crawled, for instance.)
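
For the curious, the free API is easy to try from Python. A quick sketch with requests; the DOI is Altmetric’s own example, and the response field names are my recollection of the v1 API, so treat them as assumptions:

# A quick sketch against the free (non-commercial) Altmetric API.
# The response field names are assumptions; check the API docs.
import requests

doi = "10.1038/news.2011.490"   # Altmetric's example DOI
resp = requests.get("http://api.altmetric.com/v1/doi/" + doi)
if resp.ok:
    data = resp.json()
    print(data.get("title"))
    print("score:", data.get("score"))
    print("tweets:", data.get("cited_by_tweeters_count"))
else:
    print("no Altmetric data for", doi)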

Posted in future of publishing, information ecosystem, random thoughts, scholarly communication, social web | Comments (0)

Code4Lib 2012 talk proposals are out

November 21st, 2011

Code4Lib2012 talk proposals are now on the wiki. This year there are 72 proposals for 20-25 slots. I pulled out the talks mentioning semantics (linked data, semantic web, microdata, RDF) for my own convenience (and maybe yours).

Property Graphs And TinkerPop Applications in Digital Libraries

  • Brian Tingle, California Digital Library

TinkerPop is an open source software development group focusing on technologies in the graph database space.
This talk will provide a general introduction to the TinkerPop Graph Stack and the property graph model it uses. The introduction will include code examples and explanations of the property graph models used by the Social Networks in Archival Context project, and show how the historical social graph is exposed as a JSON/REST API implemented by a TinkerPop Rexster Kibble that contains the application’s graph theory logic. Other graph database applications possible with TinkerPop, such as RDF support and citation analysis, will also be discussed.
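
To give a flavor of what “exposed as a JSON/REST API” means here, a hedged sketch of walking a property graph through Rexster’s REST interface; the host, graph name, and vertex id are placeholders, not SNAC’s actual endpoint:

# A hedged sketch of Rexster's JSON/REST API: fetch one vertex, then the
# vertices its outgoing edges point to. All names are placeholders.
import requests

BASE = "http://localhost:8182/graphs/snac"   # hypothetical Rexster graph

vertex = requests.get(BASE + "/vertices/1").json()["results"]
print(vertex.get("name"))

# Neighbors reachable by outgoing edges from vertex 1.
for v in requests.get(BASE + "/vertices/1/out").json()["results"]:
    print(" ->", v.get("name"))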

HTML5 Microdata and Schema.org

  • Jason Ronallo, North Carolina State University Libraries

When the big search engines announced support for HTML5 microdata and the schema.org vocabularies, the balance of power for semantic markup in HTML shifted.

  • What is microdata?
  • Where does microdata fit with regards to other approaches like RDFa and microformats?
  • Where do libraries stand in the worldview of Schema.org and what can they do about it?
  • How can implementing microdata and schema.org optimize your sites for search engines?
  • What tools are available?
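
To make “what is microdata?” concrete, here is a toy example (mine, not the talk’s): a schema.org Book marked up with microdata, and a stdlib-only Python parser pulling out the itemprop values.

# A toy schema.org Book in microdata, extracted with only the stdlib.
from html.parser import HTMLParser

SNIPPET = """
<div itemscope itemtype="http://schema.org/Book">
  <span itemprop="name">Weaving the Web</span>
  by <span itemprop="author">Tim Berners-Lee</span>
</div>
"""

class ItempropExtractor(HTMLParser):
    def __init__(self):
        super().__init__()
        self.current = None   # itemprop we are inside, if any
        self.props = {}

    def handle_starttag(self, tag, attrs):
        itemprop = dict(attrs).get("itemprop")
        if itemprop:
            self.current = itemprop

    def handle_data(self, data):
        if self.current and data.strip():
            self.props[self.current] = data.strip()
            self.current = None

p = ItempropExtractor()
p.feed(SNIPPET)
print(p.props)   # {'name': 'Weaving the Web', 'author': 'Tim Berners-Lee'}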

“Linked-Data-Ready” Software for Libraries

  • Jennifer Bowen, University of Rochester River Campus Libraries

Linked data is poised to replace MARC as the basis for the new library bibliographic framework. For libraries to benefit from linked data, they must learn about it, experiment with it, demonstrate its usefulness, and take a leadership role in its deployment.

The eXtensible Catalog Organization (XCO) offers open-source software for libraries that is “linked-data-ready.” XC software prepares MARC and Dublin Core metadata for exposure to the semantic web, incorporating FRBR Group 1 entities and registered vocabularies for RDA elements and roles. This presentation will include a software demonstration, proposed software architecture for creation and management of linked data, a vision for how libraries can migrate from MARC to linked data, and an update on XCO progress toward linked data goals.

Your Catalog in Linked Data

  • Tom Johnson, Oregon State University Libraries

Linked Library Data activity over the last year has seen bibliographic data sets and vocabularies proliferating from traditional library sources. We’ve reached a point where regular libraries don’t have to go it alone to be on the Semantic Web. There is a quickly growing pool of things we can actually “link to”, and everyone’s existing data can be immediately enriched by participating.

This is a quick and dirty road to getting your catalog onto the Linked Data web. The talk will take you from start to finish, using Free Software tools to establish a namespace, put up a SPARQL endpoint, make a simple data model, convert MARC records to RDF, and link the results to major existing data sets (skipping conveniently over pesky processing time). A small amount of “why linked data?” content will be covered, but the primary goal is to leave you able to reproduce the process and start linking your catalog into the web of data. Appropriate documentation will be on the web.
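
The “convert MARC records to RDF” step can indeed be just a few lines. A sketch assuming the pymarc and rdflib libraries, with an invented namespace and a deliberately simplistic mapping of the 245$a title statement:

# A sketch of MARC-to-RDF conversion, assuming pymarc and rdflib. The
# namespace and the single-field mapping are invented for illustration.
from pymarc import MARCReader
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF

EX = Namespace("http://catalog.example.org/")    # your namespace here
DCT = Namespace("http://purl.org/dc/terms/")

g = Graph()
with open("records.mrc", "rb") as fh:
    for i, record in enumerate(MARCReader(fh)):
        item = EX["record/%d" % i]
        g.add((item, RDF.type, DCT.BibliographicResource))
        f245 = record["245"]                     # title statement
        if f245 is not None and f245["a"]:
            g.add((item, DCT.title, Literal(f245["a"])))

g.serialize("catalog.ttl", format="turtle")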

NoSQL Bibliographic Records: Implementing a Native FRBR Datastore with Redis

  • Jeremy Nelson, Colorado College, jeremy.nelson@coloradocollege.edu

In October, the Library of Congress issued a news release, “A Bibliographic Framework for the Digital Age” outlining a list of requirements for a New Bibliographic Framework Environment. Responding to this challenge, this talk will demonstrate a Redis (http://redis.io) FRBR datastore proof-of-concept that, with a lightweight python-based interface, can meet these requirements.

Because FRBR is an entity-relationship model, it is easily implemented as key-value pairs within the primitive data structures provided by Redis. Redis’ flexibility makes it easy to associate arbitrary metadata and vocabularies, like MARC, METS, VRA or MODS, with FRBR entities and interoperate with legacy and emerging standards and practices like RDA Vocabularies and Linked Data.
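
To make the key-value point concrete, a toy sketch with the redis-py client: FRBR entities as Redis hashes, relationships as key references. The key layout and field names are invented, not the talk’s actual schema.

# A toy sketch of FRBR entities as Redis hashes (redis-py). The key
# layout and field names are invented, not the talk's actual schema.
import redis

r = redis.Redis(decode_responses=True)

# A Work and an Expression that realizes it.
r.hset("frbr:work:1", mapping={"title": "Hamlet",
                               "creator": "Shakespeare, William"})
r.hset("frbr:expression:1", mapping={"language": "eng",
                                     "realizationOf": "frbr:work:1"})

# Arbitrary extra vocabularies can hang off the same entity.
r.hset("frbr:work:1:marc", mapping={"245a": "Hamlet"})

# Follow the relationship back to the Work.
work_key = r.hget("frbr:expression:1", "realizationOf")
print(r.hgetall(work_key))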

ALL TEH METADATAS! or How we use RDF to keep all of the digital object metadata formats thrown at us.

  • Declan Fleming, University of California, San Diego

What’s the right metadata standard to use for a digital repository? There isn’t just one standard that fits documents, videos, newspapers, audio files, local data, etc. And there is no standard to rule them all. So what do you do? At UC San Diego Libraries, we went down a conceptual level and attempted to hold every piece of metadata and give each holding place some context, hopefully in a common namespace. RDF has proven to be the ideal solution, and allows us to work with MODS, PREMIS, MIX, and just about anything else we’ve tried. It also opens up the potential for data re-use and authority control as other metadata owners start thinking about and expressing their data in the same way. I’ll talk about our workflow, which takes metadata from a stew of various sources (CSV dumps, spreadsheet data of varying richness, MARC data, and MODS data), normalizes it into METS (our Metadata Specialists create an assembly plan), and then ingests it into our digital asset management system. The result is available as HTML, RSS, and METS XML, and opens linked data possibilities that we are just starting to explore.
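
As a toy sketch of that “common namespace” idea (invented names, not UCSD’s actual model): a title arriving from a CSV dump and one arriving from MODS both land on the same RDF predicate.

# A toy sketch of normalizing a "stew" of sources into one RDF graph.
# The namespace and predicate are invented for illustration.
import csv, io
from rdflib import Graph, Literal, Namespace, URIRef

LIB = Namespace("http://lib.example.edu/terms/")
g = Graph()

# Metadata from a CSV dump...
csv_dump = io.StringIO("id,title\nobj1,Shoreline photographs\n")
for row in csv.DictReader(csv_dump):
    obj = URIRef("http://lib.example.edu/objects/" + row["id"])
    g.add((obj, LIB.title, Literal(row["title"])))

# ...and a value that arrived as a MODS <title> lands on the same predicate.
g.add((URIRef("http://lib.example.edu/objects/obj2"),
       LIB.title, Literal("Field recordings, 1971")))

print(g.serialize(format="turtle"))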

UDFR: Building a Registry using Open-Source Semantic Software

  • Stephen Abrams, Associate Director, UC3, California Digital Library
  • Lisa Dawn Colvin, UDFR Project Manager, California Digital Library

Fundamental to effective long-term preservation analysis, planning, and intervention is the deep understanding of the diverse digital formats used to represent content. The Unified Digital Format Registry project (UDFR, https://bitbucket.org/udfr/main/wiki/Home) will provide an open source platform for an online, semantically-enabled registry of significant format representation information.

We will give an introduction to the UDFR tool and its use within a preservation process.

We will also discuss our experiences of integrating disparate data sources and models into RDF: describing our iterative data modeling process and decisions around integrating vocabularies, data sources and provenance representation.

Finally, we will share how we extended an existing open-source semantic wiki tool, OntoWiki, to create the registry.

saveMLAK: How Librarians, Curators, Archivists and Library Engineers Work Together with Semantic MediaWiki after the Great Earthquake of Japan

  • Yuka Egusa, Senior Researcher of National Institute of Educational Policy Research
  • Makoto Okamoto, Chief Editor of Academic Resource Guide (ARG)

On March 11th, 2011, the biggest earthquake and tsunami in Japan’s history struck a large area of the northeastern region of Japan. Many people worked together to save people in the area. For the library community, a wiki named “savelibrary” was launched the day after the earthquake to share information on damage and rescues. Museum curators, archivists and community learning centers later started similar projects. In April we joined the “saveMLAK” project and launched a wiki site using Semantic MediaWiki at http://savemlak.jp/.

As of November 2011, information on over 13,000 cultural organizations has been posted on the site by 269 contributors since the launch. The gathered information is organized by wiki categories for each type of facility, such as library, museum, and school. We have held eight edit-a-thons to encourage people to contribute to the wiki.

We will report on our activity: how the libraries and museums were damaged and have been recovered with much effort, and how a new style of collaboration with the MLAK community, wiki volunteers, and other voluntary communities can work in a crisis.
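
Semantic MediaWiki also makes the gathered information queryable. A heavily hedged Python sketch of SMW’s “ask” API; the endpoint path, category, and printout property are guesses, not saveMLAK’s actual schema:

# A hedged sketch of Semantic MediaWiki's "ask" API. The endpoint path,
# category name, and printout property are guesses, not saveMLAK's schema.
import requests

API = "https://savemlak.jp/w/api.php"    # assumed api.php location
params = {
    "action": "ask",
    "query": "[[Category:Library]]|?Status|limit=5",   # hypothetical query
    "format": "json",
}
data = requests.get(API, params=params).json()
for title, result in data["query"]["results"].items():
    print(title, result.get("printouts"))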


Conversion by Wikibox, tweaked in TextWrangler. Trimmed email addresses; otherwise these are essentially as written. Did I miss one? Let me know!

Posted in computer science, library and information science, scholarly communication, semantic web | Comments (0)

Citation management means different things to different people

August 3rd, 2011

I got to talking with a mathematician friend about citation management. We came to the conclusion that “manage PDFs” is my primary goal while “get out good citations” is his primary goal. I thought it would be interesting to look at his requirements.

His ideal program would

  1. Organize the PDFs (Papers does this, when it doesn’t botch the author names and the title), preferably in the file system, so I can use Dropbox
  2. Get BibTeX entries from MathSciNet, ACM, etc. EXACTLY AS THEY ARE
  3. Have some decent way to organize notes by “project” or something

He doesn’t care about:

  1. Typing \cite
  2. A “unified” bibliographic database
  3. Social bibliographies (though I am not against them; it is just not a burning issue)

He says:

I guess the point is that, if I am writing something and I know I want to cite it, and I know there is a “official” BibTeX for it, I just need a way to get that more quickly than:

  1. Type the URL
  2. Click on “Proxy this” in my bookmarks bar
  3. Search for the paper
  4. Copy/paste the BibTeX
  5. Edit the cite key to something mnemonic

He followed up with an example of the awful, lossy markup Papers produces, which loses information including the ISSN and DOI; he prefers minimalist BibTeX. (Oops! He adds: “I understated how bad Papers is. The real Papers entry (top) not only has screwy names, but junk instead of the full journal name. The Papers cite key is meaningless noise too (but the MathSciNet one is meaningful noise).”) To get around this, he does the same search/download “a million times”.

Papers2 BibTeX:
@article{AR78,
author = {L Asimow and B Roth},
journal = {Trans. Amer. Math. Soc.},
title = {The rigidity of graphs},
pages = {279--289},
volume = {245},
year = {1978},
}

The AMS version of the same BibTeX:
@article {AR78,
    AUTHOR = {Asimow, L. and Roth, B.},
     TITLE = {The rigidity of graphs},
   JOURNAL = {Trans. Amer. Math. Soc.},
  FJOURNAL = {Transactions of the American Mathematical Society},
    VOLUME = {245},
      YEAR = {1978},
     PAGES = {279--289},
      ISSN = {0002-9947},
     CODEN = {TAMTAM},
   MRCLASS = {57M15 (05C10 52A40 53B50 73K05)},
  MRNUMBER = {511410 (80i:57004a)},
MRREVIEWER = {G. Laman},
       DOI = {10.2307/1998867},
       URL = {http://dx.doi.org/10.2307/1998867},
}

I’ve just discovered that BibDesk‘s ((See also A short review of BibDesk from MacResearch)) ‘minimize’ does what he wants: its output is quite close to the Papers2 version above, but with the correct field values:

@article{AR78,
	Author = {Asimow, L. and Roth, B.},
	Journal = {Trans. Amer. Math. Soc.},
	Pages = {279--289},
	Title = {The rigidity of graphs},
	Volume = {245},
	Year = {1978}}

I’d still like to understand the impact the non-minimal BibTeX is having; it could be that bad citation styles are causing part of the problem.
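
If you’re not a BibDesk user, the same “minimize” step is easy to script. A sketch assuming the bibtexparser package (v1 API); the field whitelist mirrors the minimal entries above:

# A sketch of a BibDesk-style "minimize" with bibtexparser (v1 API):
# keep only the core fields, dropping ISSN, CODEN, MR numbers, etc.
import bibtexparser

KEEP = {"ENTRYTYPE", "ID", "author", "title", "journal",
        "volume", "pages", "year"}

with open("refs.bib") as fh:
    db = bibtexparser.load(fh)

for entry in db.entries:
    for field in set(entry) - KEEP:
        del entry[field]

print(bibtexparser.dumps(db))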

While we have different needs for citation management, we’re both annoyed by the default filenames many publishers use, like fulltext.pdf and sdarticle.pdf. But I’ll tolerate these, as long as I can get to them from a database index with a nice frontend.

We of course moved on to discussing how research needs an iTunes or, as Geoff Bilder has called it, an iPapers.

This blog post brought to you by Google chat and the number 3.

Posted in books and reading, information ecosystem, library and information science, scholarly communication | Comments (0)

Sente, a first look

August 1st, 2011

Today I’ve been testing out Sente, on the theory that it might help me organize the PDFs I’m annotating on my iPad.

The desktop application is geared to Mac users who really care about bibliographies, and it has several fantastic features.

I like Sente’s statuses: read/unread, Recently Modified, and Recently Added are automatically tracked, and you can rate items. I especially like the workflow statuses, which match some of my common tasks:

  • Get Full Text
  • Discuss Further
  • Cite
  • Do Not Cite

“Sort by citation” is surprisingly illuminating: I didn’t realize how many papers from “Discourse Studies” I’d been looking at recently.

Another great feature that could be easily and fruitfully added to most other bibliographic managers: title case and exact case lists (I am *so* sick of seeing lowercased ‘wikipedia’ in bibliographies!), which you can very easily customize.
Sente also has a journal dictionary: you can assign the abbreviations and ISSNs (authority control, yippee!).

Their visual display could use an update (thankfully it’s on the way), and I find their icons confusing (maybe ‘pencil’ for ‘note’ is sensible, but what in the world about ‘four dots in a diamond shape’ says ‘abstract’ to you?).

I tested the Zotero import. As I wrote Sente’s developers, there are some issues:

In testing it out on my large (5000+ item) Zotero library I see that:

  1. HTML attachments are not copied into the Sente library
  2. Image attachments are not copied into the Sente library
  3. Text note attachments are not copied into the Sente library
  4. Subcollections are not preserved

Since then, I’ve noticed that keywords don’t get imported. Further, the “date added” and “date modified” fields are not preserved, but instead now reflect the import date and time (as I noted on twitter). But I do like their duplicate detection. Along with promising to consolidate matched items, they provide a report about the discarded matches. For instance:

Rule “DOI rule” flagged these two references as possible duplicates:
Vilar, P., & Žumer, M. (2008). Perceptions and importance of user friendliness of IR systems according to users’ individual characteristics and academic discipline. Journal of the American Society for Information Science & Technology, 59(12), 1995-2007. doi:Article
Quick-Response Barcodes. (2008). Library Technology Reports, 44(5), 46-47. doi:Article
However, the match was rejected because the references differ in: Article Title, pages, Publication Title, URL, Volume, Issue.

I have played briefly with Sente’s free iPad viewer, but not yet with their paid ($19.99) app, which allows annotation. Based on reviews (why no permalinks, Apple?), “Export seems to be an option but crucially, import is not.” However, if Sente’s annotation is enough, there’s hope, since the description of Sync for the planned 6.5 release (via this) is *very* promising: “As you read a PDF on your iPad on the bus ride home, highlighting passages and taking notes, the highlighting and notes appear in all copies by the time you arrive home.”

By Sente user standards, I am far from a power user: the biggest databases seem to be about 10 times the size of mine. Sente could be an improvement over Zotero, which can’t quite keep up with my library some days. I’d be *very* interested to hear from enthusiastic Sente users. Switching seems quite feasible, and it’s probably worth checking out their iPad app.

The main obvious concerns I have are about notetaking and portability. Notetaking of offline/non-fulltext items is important but doesn’t seem to have been a particular focus of development. Portability is incredibly important: I need to ensure that export (and ideally import) brings along files and notes as well as PDFs.

I’ve been thinking of direct, in-file PDF annotation as the best possible way to ensure that my annotations outlive my reference manager. Should I rethink that? So far (according to their draft manual as above): “Highlighting created in Sente 6.2 is not stored in the PDF itself — it is stored in the library database. This change has several very positive effects, notably on syncing.” Let me know what you think in the comments!

Posted in books and reading, library and information science, reviews, scholarly communication | Comments (2)

Extended deadline for STLR 2011

April 29th, 2011

We’ve extended the STLR 2011 deadline due to several requests; submissions are now due May 8th.

JCDL workshops are split over two half-days, and we are lucky enough to have *two* keynote speakers: Bernhard Haslhofer of the University of Vienna and Cathy Marshall of Microsoft Research.

Consider submitting!

CALL FOR PARTICIPATION
The 1st Workshop on Semantic Web Technologies for Libraries and Readers

STLR 2011

June 16 (PM) & 17 (AM) 2011

http://stlr2011.weebly.com/
Co-located with the ACM/IEEE Joint Conference on Digital Libraries (JCDL) 2011 Ottawa, Canada

While Semantic Web technologies are successfully being applied to library catalogs and digital libraries, the semantic enhancement of books and other electronic media is ripe for further exploration. Connections between envisioned and emerging scholarly objects (which are doubtless social and semantic) and the digital libraries in which these items will be housed, encountered, and explored have yet to be made and implemented. Likewise, mobile reading brings new opportunities for personalized, context-aware interactions between reader and material, enriched by information such as location, time of day and access history.

This full-day workshop, motivated by the idea that reading is mobile, interactive, social, and material, will be focused on semantically enhancing electronic media as well as on the mobile and social aspects of the Semantic Web for electronic media, libraries and their users. It aims to bring together practitioners and developers involved in semantically enhancing electronic media (including documents, books, research objects, multimedia materials and digital libraries) as well as academics researching more formal aspects of the interactions between such resources and their users. We also particularly invite entrepreneurs and developers interested in enhancing electronic media using Semantic Web technologies with a user-centered approach.

We invite the submission of papers, demonstrations and posters which describe implementations or original research that are related (but are not limited) to the following areas of interest:

  • Strategies for semantic publishing (technical, social, and economic)
  • Approaches for consuming semantic representations of digital documents and electronic media
  • Open and shared semantic bookmarks and annotations for mobile and device-independent use
  • User-centered approaches for semantically annotating reading lists and/or library catalogues
  • Applications of Semantic Web technologies for building personal or context-aware media libraries
  • Approaches for interacting with context-aware electronic media (e.g. location-aware storytelling, context-sensitive mobile applications, use of geolocation, personalization, etc.)
  • Applications for media recommendations and filtering using Semantic Web technologies
  • Applications integrating natural language processing with approaches for semantic annotation of reading materials
  • Applications leveraging the interoperability of semantic annotations for aggregation and crowd-sourcing
  • Approaches for discipline-specific or task-specific information sharing and collaboration
  • Social semantic approaches for using, publishing, and filtering scholarly objects and personal electronic media

IMPORTANT DATES

*EXTENDED* Paper submission deadline: May 8th 2011
Acceptance notification: June 1st 2011
Camera-ready version: June 8th 2011

KEYNOTE SPEAKERS

  • Bernhard Haslhofer, University of Vienna
  • Cathy Marshall, Microsoft Research

PROGRAM COMMITTEE

Each submission will be independently reviewed by 2-3 program committee members.

ORGANIZING COMMITTEE

  • Alison Callahan, Dept of Biology, Carleton University, Ottawa, Canada
  • Dr. Michel Dumontier, Dept of Biology, Carleton University, Ottawa, Canada
  • Jodi Schneider, DERI, NUI Galway, Ireland
  • Dr. Lars Svensson, German National Library

SUBMISSION INSTRUCTIONS

Please use PDF format for all submissions. Semantically annotated versions of submissions, and submissions in novel digital formats, are encouraged and will be accepted in addition to a PDF version.
All submissions must adhere to the following page limits:
Full length papers: maximum 8 pages
Demonstrations: 2 pages
Posters: 1 page
Use the ACM template for formatting: http://www.acm.org/sigs/pubs/proceed/template.html
Submit using EasyChair: https://www.easychair.org/conferences/?conf=stlr2011

Posted in future of publishing, library and information science, PhD diary, scholarly communication, semantic web, social semantic web | Comments (2)

Reading styles

March 2nd, 2011

To support reading, think about diversity of reading styles.

A study of “How examiners assess research theses” mentions the diversity:

[F]our examples give a good indication of the range of ‘reading styles’:

  • A (Hum/Male/17) sets aside time to read the thesis. He checks who is in the references to see that the writers are there who should be there. Then he reads slowly, from the beginning like a book, but taking copious notes.
  • B (Sc/Male/22) reads the thesis from cover to cover first without doing anything else. For the first read he is just trying to gain a general impression of what the thesis is about and whether it is a good thesis—that is, are the results worthwhile. He can also tell how much work has actually been done. After the first read he then ‘sits on it’ for a while. During the second reading he starts making notes and reading more critically. If it is an area with which he is not very familiar, he might read some of the references. He marks typographical errors, mistakes in calculations, etc., and makes a list of them. He also checks several of the references just to be sure they have been used appropriately.
  • C (SocSc/Female/27) reads the abstract first and then the introduction and the conclusion, as well as the table of contents to see how the thesis is structured; and she familiarises herself with appendices so that she knows where everything is. Then she starts reading through; generally the literature review, and methodology, in the first weekend, and the findings, analysis and conclusions in the second weekend. The intervening week allows time for ideas to mull over in her mind. On the third weekend she writes the report.
  • D (SocSc/Male/15) reads the thesis from cover to cover without marking it. He then schedules time to mark it, in about three sittings, again working from beginning to end. At this stage he ‘takes it apart’. Then he reads the whole thesis again.

from Mullins, G. & Kiley, M. (2002). “It’s a PhD, not a Nobel Prize”: how experienced examiners assess research theses. Studies in Higher Education, 27(4), 369-386. DOI:10.1080/0307507022000011507

Parenthetical comments are (discipline/gender/interview number). Thanks to the NUIG Postgrad Research Society for suggesting this paper.

Posted in books and reading, higher education, PhD diary, scholarly communication | Comments (0)