Archive for the ‘semantic web’ Category

Knowledge Graphs: An Aggregation of Definitions

March 3rd, 2019

I am not aware of a consensus definition of knowledge graph. I’ve been discussing this for a while with Liliana Giusti Serra, and the topic came up again with my fellow organizers of the knowledge graph session at US2TS as we prepare for a panel.

I’ve proposed the following main features:

  • RDF-compatible, has a defined schema (usually an OWL ontology)
  • items are linked internally
  • may be a private enterprise dataset (e.g. not necessarily openly available for external linking) or publicly available
  • covers one or more domains

Below are some quotes.

I’d be curious to hear of other definitions, especially if you think there’s a consensus definition I’m just not aware of.

“A knowledge graph consists of a set of interconnected typed entities and their attributes.”
Jose Manuel Gomez-Perez, Jeff Z. Pan, Guido Vetere and Honghan Wu. “Enterprise Knowledge Graph: An Introduction.” In Exploiting Linked Data and Knowledge Graphs in Large Organisations. Springer. Part of the whole book: http://link.springer.com/10.1007/978-3-319-45654-6

“A knowledge graph is a structured dataset that is compatible with the RDF data model and has an (OWL) ontology as its schema. A knowledge graph is not necessarily linked to external knowledge graphs; however, entities in the knowledge graph usually have type information, defined in its ontology, which is useful for providing contextual information about such entities. Knowledge graphs are expected to be reliable, of high quality, of high accessibility and providing end user oriented information services.”

Boris Villazon-Terrazas, Nuria Garcia-Santa, Yuan Ren, Alessandro Faraotti, Honghan Wu, Yuting Zhao, Guido Vetere and Jeff Z. Pan. “Knowledge graphs: Foundations.” In Exploiting Linked Data and Knowledge Graphs in Large Organisations. Springer. Part of the whole book: http://link.springer.com/10.1007/978-3-319-45654-6


“The term Knowledge Graph was coined by Google in 2012, referring to their use of semantic knowledge in Web Search (“Things, not strings”), and is recently also used to refer to Semantic Web knowledge bases such as DBpedia or YAGO. From a broader perspective, any graph-based representation of some knowledge could be considered a knowledge graph (this would include any kind of RDF dataset, as well as description logic ontologies). However, there is no common definition about what a knowledge graph is and what it is not. Instead of attempting a formal definition of what a knowledge graph is, we restrict ourselves to a minimum set of characteristics of knowledge graphs, which we use to tell knowledge graphs from other collections of knowledge which we would not consider as knowledge graphs. A knowledge graph

  1. mainly describes real world entities and their interrelations, organized in a graph.

  2. defines possible classes and relations of entities in a schema.

  3. allows for potentially interrelating arbitrary entities with each other.

  4. covers various topical domains.”

Paulheim, H. (2017). Knowledge graph refinement: A survey of approaches and evaluation methods. Semantic Web, 8(3), 489–508.

“ISI’s Center on Knowledge Graphs research group combines artificial intelligence, the semantic web, and database integration techniques to solve complex information integration problems. We leverage general research techniques across information-intensive disciplines, including medical informatics, geospatial data integration and the social Web.”

Just as I was “finalizing” my list to send to colleagues, I found a poster all about definitions:
Ehrlinger, L., & Wöß, W. (2016). Towards a Definition of Knowledge Graphs. SEMANTiCS (Posters, Demos, SuCCESS), 48. http://ceur-ws.org/Vol-1695/paper4.pdf
Its Table 1, “Selected definitions of knowledge graph,” has the following definitions (for citations see that paper):

“A knowledge graph (i) mainly describes real world entities and their interrelations, organized in a graph, (ii) defines possible classes and relations of entities in a schema, (iii) allows for potentially interrelating arbitrary entities with each other and (iv) covers various topical domains.” Paulheim [16]

“Knowledge graphs are large networks of entities, their semantic types, properties, and relationships between entities.” Journal of Web Semantics [12]

“Knowledge graphs could be envisaged as a network of all kind of things which are relevant to a specific domain or to an organization. They are not limited to abstract concepts and relations but can also contain instances of things like documents and datasets.” Semantic Web Company [3]

“We define a Knowledge Graph as an RDF graph. An RDF graph consists of a set of RDF triples where each RDF triple (s, p, o) is an ordered set of the following RDF terms: a subject s ∈ U ∪ B, a predicate p ∈ U, and an object o ∈ U ∪ B ∪ L. An RDF term is either a URI u ∈ U, a blank node b ∈ B, or a literal l ∈ L.” Färber et al. [7]

“[…] systems exist, […], which use a variety of techniques to extract new knowledge, in the form of facts, from the web. These facts are interrelated, and hence, recently this extracted knowledge has been referred to as a knowledge graph.” Pujara et al. [17]


“A knowledge graph is a graph that models semantic knowledge, where each node is a real-world concept, and each edge represents a relationship between two concepts.”

Fang, Y., Kuan, K., Lin, J., Tan, C., & Chandrasekhar, V. (2017). Object detection meets knowledge graphs.
https://oar.a-star.edu.sg/jspui/handle/123456789/2147


“things, not strings” – Google
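
Several of these definitions converge on “a set of RDF triples plus a schema.” To make Färber et al.’s formal definition concrete, here is a minimal sketch in Python using rdflib; the drug-interaction facts and the example.org namespace are mine, purely for illustration:

    from rdflib import Graph, Literal, Namespace
    from rdflib.namespace import RDF, RDFS

    EX = Namespace("http://example.org/")  # hypothetical namespace

    g = Graph()

    # Terminological knowledge (the schema): a class and a relation.
    g.add((EX.Drug, RDF.type, RDFS.Class))
    g.add((EX.interactsWith, RDFS.domain, EX.Drug))
    g.add((EX.interactsWith, RDFS.range, EX.Drug))

    # Factual knowledge: typed entities and their interrelations,
    # i.e. triples (s, p, o) with s and p URIs, o a URI or literal.
    g.add((EX.warfarin, RDF.type, EX.Drug))
    g.add((EX.aspirin, RDF.type, EX.Drug))
    g.add((EX.warfarin, EX.interactsWith, EX.aspirin))
    g.add((EX.warfarin, RDFS.label, Literal("warfarin")))

    print(g.serialize(format="turtle"))

Every triple here fits Färber et al.’s (s, p, o) pattern, and the first three triples are a miniature version of the “(OWL) ontology as its schema” that Villazon-Terrazas et al. require.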

Tags: , ,
Posted in information ecosystem, semantic web | Comments (0)

Linked Science 2014 paper: Using the Micropublications ontology and the Open Annotation Data Model to represent evidence within a drug-drug interaction knowledge base

October 19th, 2014

Today I’m presenting a talk in the ISWC 2014 Workshop on Linked Science 2014—Making Sense Out of Data (LISC2014). The LISC2014 paper is joint work with Paolo Ciccarese, Tim Clark and Richard D. Boyce. Our goal is to make the evidence in a scientific knowledge base easier to access and audit — to make the knowledge base easier to maintain as scientific knowledge and drug safety regulations change. We are modeling evidence (data, methods, materials) from biomedical communications in the medication safety domain (drug-drug interactions).

The new architecture for the drug-drug interaction knowledge base is based on:

  • the Micropublications ontology
  • the Open Annotation Data Model

This is part of a 4-year National Library of Medicine project, “Addressing gaps in clinically useful evidence on drug-drug interactions” (1R01LM011838-01).

Abstract of our paper, “Using the Micropublications ontology and the Open Annotation Data Model to represent evidence within a drug-drug interaction knowledge base.”:

Semantic web technologies can support the rapid and transparent validation of scientific claims by interconnecting the assumptions and evidence used to support or challenge assertions. One important application domain is medication safety, where more efficient acquisition, representation, and synthesis of evidence about potential drug-drug interactions is needed. Exposure to potential drug-drug interactions (PDDIs), defined as two or more drugs for which an interaction is known to be possible, is a significant source of preventable drug-related harm. The combination of poor quality evidence on PDDIs, and a general lack of PDDI knowledge by prescribers, results in many thousands of preventable medication errors each year. While many sources of PDDI evidence exist to help improve prescriber knowledge, they are not concordant in their coverage, accuracy, and agreement. The goal of this project is to research and develop core components of a new model that supports more efficient acquisition, representation, and synthesis of evidence about potential drug-drug interactions. Two Semantic Web models—the Micropublications Ontology and the Open Annotation Data Model—have great potential to provide linkages from PDDI assertions to their supporting evidence: statements in source documents that mention data, materials, and methods. In this paper, we describe the context and goals of our work, propose competency questions for a dynamic PDDI evidence base, outline our new knowledge representation model for PDDIs, and discuss the challenges and potential of our approach.

Citation: Schneider, Jodi, Paolo Ciccarese, Tim Clark, and Richard D. Boyce. “Using the Micropublications ontology and the Open Annotation Data Model to represent evidence within a drug-drug interaction knowledge base.” Linked Science 2014 at ISWC 2014.
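
To give a feel for how the two models fit together, here is an illustrative sketch in Python with rdflib. The Open Annotation namespace is the W3C one; the micropublications namespace and the class and property names are simplified from my memory of the vocabularies, and the claim and entity URIs are invented for this sketch (see the paper for the real model):

    from rdflib import Graph, Literal, Namespace
    from rdflib.namespace import RDF, RDFS

    OA = Namespace("http://www.w3.org/ns/oa#")  # Open Annotation Data Model
    MP = Namespace("http://purl.org/mp/")       # Micropublications (as I recall it)
    EX = Namespace("http://example.org/ddi/")   # hypothetical data namespace

    g = Graph()

    # A potential drug-drug interaction assertion, typed as a
    # micropublication claim (the specific claim is just an example).
    g.add((EX.claim1, RDF.type, MP.Claim))
    g.add((EX.claim1, RDFS.label,
           Literal("Ketoconazole increases simvastatin exposure")))

    # An annotation anchors the claim (body) to the evidence (target):
    # a statement in a source document that mentions data, materials,
    # or methods.
    g.add((EX.ann1, RDF.type, OA.Annotation))
    g.add((EX.ann1, OA.hasBody, EX.claim1))
    g.add((EX.ann1, OA.hasTarget, EX.sourcePassage))

    print(g.serialize(format="turtle"))

The point of the design is auditability: when the evidence or the regulations change, you can follow the oa:hasTarget links back to the source statements and re-evaluate the claim.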

Tags: , , , , ,
Posted in information ecosystem, scholarly communication, semantic web | Comments (0)

Library Linked Data at ALA 2014

June 6th, 2014

Linked Data is big at the 2014 American Library Association meeting! All day Friday & Saturday, plus Sunday morning, you can get your recommended dose of Library Linked Data. See you in Las Vegas?

Friday June 27
I’ll be speaking and moderating a question session in this full-day preconference.
Practical Linked Data with Open Source (separate ticket needed)
Friday, June 27, 2014 – 8:30am to 4:00pm
N258, Las Vegas Convention Center
This pre-conference combines theory and practice by giving participants a working knowledge of the creation and use of linked data and linked data applications. This session will ground participants in linked data models and patterns through hands-on exercises. Participants will go home with a working knowledge of the state of the art of linked data in open source library systems and the use of linked data to solve metadata problems across libraries, archives, and museums.

Saturday June 28
I will be speaking about international developments in LLD in Part I:
International Developments in Library Linked Data: Think Globally, Act Globally (Part One)
Saturday, June 28, 2014 – 8:30am to 10:00am
N264, Las Vegas Convention Center

International Developments in Library Linked Data: Think Globally, Act Globally – Part Two
Saturday, June 28, 2014 – 10:30am to 11:30am
S230, Las Vegas Convention Center
Libraries have the potential to make major contributions to the Semantic Web, but are still emerging as global participants. RDA implementation and the BibFrame initiative have drawn fresh attention to the promise and potential of linked data. What are the international developments in linked data, emerging from libraries and other memory institutions? Come hear our speakers address current projects, opportunities and challenges.

Taking action: Linked data for digital collection managers
Saturday, June 28, 2014 – 1:00pm to 2:30pm
S222, Las Vegas Convention Center

The linked data movement has gained momentum. But how does this paradigm shift affect digital collection workflows? This workshop will provide key theoretical concepts of linked data and engaging hands-on activities demonstrating how CONTENTdm metadata can be transformed into linked data. The workshop will also provide a forum to discuss how linked data might alter our current practices and workflows. This workshop is geared toward beginners and is designed for curious exploration and active learning.

OCLC The Power of Shared Data: What’s New and What’s Next?
Saturday, June 28, 2014 – 3:00pm to 4:00pm
N116, Las Vegas Convention Center
Join OCLC’s Ted Fons and Richard Wallis to understand how OCLC is leveraging your WorldCat holdings to give your institution broader visibility on the Web. In this session, we will detail current features, planned enhancements and new developments related to linked data.

Sunday June 29
Linked Library Data Interest Group
Sunday, June 29, 2014 – 8:30am to 10:00am
N237, Las Vegas Convention Center
Talk by Jon Phipps & discussion to follow. (Sunday, sadly, I’m on a plane to another meeting.)

Jon Phipps, of Metadata Management Associates, will present a talk on:

RDA and LOD — FTW or WTF?: A Fair and Balanced Point of View.

Is RDA just “the rules” or is it a robust bibliographic metadata model designed specifically to support rich, FRBRized, distributed LOD that just happens to come with several thousand “pages” of rules? What’s this “unconstrained” stuff? Why does RDA RDF have URIs I can’t “read” and will never remember (and what are lexical aliases)? Why are there so many definitions for “Work” anyway? How is RDA handling versioning and releases? How is RDA using Git and GitHub? Why does any of this matter to my data and, more importantly, me?

You’ve got questions? Maybe Jon Phipps has some answers (except for that last one). Jon is a partner in Metadata Management Associates, a consultancy specializing in, wait for it … metadata management, and has been collaborating with various groups of well-intentioned folks trying to define RDA as a data model for what seems like centuries, and thinks that quite recently the JSC has pretty much nailed it.

A question and answer period and a lively managed discussion will follow the presentation. More info & speaker biography.

Understanding Schema.org
Sunday, June 29, 2014 – 10:30am to 11:30am
S230, Las Vegas Convention Center
Jason Clark and Dan Scott

Schema.org is an effort among major search engines to promote better linking of Web content through the use of metadata attributes in HTML markup, allowing for improved access to digital objects. The ALCTS/LITA Metadata Standards Committee invites you to hear speakers who are active in schema.org development in libraries, and who will discuss initiatives in this area within the GLAM community which promote a broader understanding of the development of bibliographic information among these communities.


Kudos to the LITA / ALCTS Linked Library Data Interest Group and ALCTS/LITA Metadata Standards Committee for facilitating a great program!

Above information from the American Library Association and its Linked Library Data Interest Group (updated June 17): double-check room numbers at the conference website, and add sessions to your conference scheduler.

Tags: , , ,
Posted in library and information science, semantic web | Comments (0)

Ontology Evaluation – an Essential Part of Ontology Engineering

July 26th, 2012

James Malone reflects on a panel discussion on evaluation and reuse of ontologies. He wants there to be “a formal, objective and quantifiable process” for “making public judgements on ontologies”. Towards that, he suggests that we need:

  1. A formal set of engineering principles for systematic, disciplined, quantifiable approach to the design, development, operation, and maintenance of ontologies
  2. The use of test driven development, in particular using sets of (if appropriate, user collected) competency questions which an ontology guarantees to answer, with examples of those answers – think of this as similar to unit testing (see the sketch below)
  3. Cost benefit analysis for adopting frameworks such as upper ontologies, this includes aspects such as cost of training for use in development, cost to end users in understanding ontologies built using such frameworks, cost benefits measured as per metrics such as those above (e.g. answering competency questions) and risk of adoption (such as significant changes or longer term support).

– James Malone, in Why choosing ontologies should not be like choosing Pepsi or Coke, about his International Conference on Biomedical Ontology panel, ‘How to deal with sectarianism in biomedical ontology.’
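
Point 2 maps directly onto familiar tooling: a competency question can be expressed as a SPARQL ASK query and run against the ontology in a test suite. Here is a minimal sketch in Python with rdflib and unittest; the ontology file, the namespace, and the question itself are hypothetical:

    import unittest
    from rdflib import Graph

    class CompetencyQuestionTests(unittest.TestCase):
        """Each competency question becomes an executable test case."""

        @classmethod
        def setUpClass(cls):
            cls.g = Graph()
            # Hypothetical ontology file, assumed to be RDF/XML.
            cls.g.parse("my-ontology.owl", format="xml")

        def test_assays_measure_glucose(self):
            # Competency question (invented): "Is there an assay that
            # measures glucose concentration?"
            q = """
                PREFIX ex: <http://example.org/onto#>
                ASK { ?assay ex:measures ex:GlucoseConcentration . }
            """
            self.assertTrue(self.g.query(q).askAnswer)

    if __name__ == "__main__":
        unittest.main()

Run it with any test runner; a failing ASK means the ontology no longer guarantees an answer to that question.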

Tags: , , ,
Posted in semantic web | Comments (0)

Karen Coyle on Library Linked Data: let’s create data not records

January 12th, 2012

There have been some interesting posts on BIBFRAME recently (I’ve noted a few of them).

Karen Coyle also pointed to her recent blog post on transforming bibliographic data into RDF. As she says, for a real library linked data environment,

we need to be creating data, not records, and that we need to create the data first, then build records with it for those applications where records are needed.

Tags: , , , , ,
Posted in information ecosystem, library and information science, semantic web | Comments (1)

A Review of Argumentation for the Social Semantic Web

December 6th, 2011

I’m very pleased to share our “A Review of Argumentation for the Social Semantic Web”.

You are very warmly invited to review this paper. You can post a review publicly as a comment on the manuscript page at SWJ’s website. Informal comments by email are also welcome.

Open review

I adore SWJ’s open review process: publicly available manuscripts are useful. In 11 months the landing page has had “1208 reads” and I’m sure that not all of those are mine! Further, knowing who reviewed a paper can add credibility to the process. (It means quite a lot to me when Simon Buckingham Shum says “I anticipate that this will become a standard reference for the field.”!)

Two earlier versions

The paper evolved from my first-year Ph.D. report. In the process of defining my Ph.D. topic, I reviewed the state of the art of argumentation for the Social Semantic Web. This was further developed in conversations with my coauthors, my colleague Tudor Groza and my advisor Alexandre Passant.

The outdated first journal submission and second journal submission are available; May’s reviews refer to the first version. A cover letter responding to the reviews summarizes what has changed. I share these since I am always encouraged by seeing how others’ work and ideas have developed over time!

So read the most recent version, and let us know what you think!

Updated 2012-08-09 to update links to the “final” version.

Tags: , , , ,
Posted in argumentative discussions, PhD diary, semantic web, social semantic web, social web | Comments (0)

Code4Lib 2012 talk proposals are out

November 21st, 2011

Code4Lib 2012 talk proposals are now on the wiki. This year there are 72 proposals for 20-25 slots. I pulled out the talks mentioning semantics (linked data, semantic web, microdata, RDF) for my own convenience (and maybe yours).

Property Graphs And TinkerPop Applications in Digital Libraries

  • Brian Tingle, California Digital Library

TinkerPop is an open source software development group focusing on technologies in the graph database space.
This talk will provide a general introduction to the TinkerPop Graph Stack and the property graph model it uses. The introduction will include code examples and explanations of the property graph models used by the Social Networks in Archival Context project and show how the historical social graph is exposed as a JSON/REST API implemented by a TinkerPop Rexster Kibble that contains the application’s graph theory logic. Other graph database applications possible with TinkerPop, such as RDF support and citation analysis, will also be discussed.
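
Since the property graph model comes up repeatedly below, a toy rendering may help: unlike plain RDF triples, both vertices and edges carry arbitrary key-value properties. This is plain Python, not TinkerPop’s actual API, and the data is invented:

    # Vertices and edges, each with arbitrary key-value properties.
    vertices = {
        1: {"type": "person", "name": "Ada Lovelace"},
        2: {"type": "collection", "name": "Example Papers"},  # invented
    }
    edges = [
        # (out_vertex, in_vertex, label, edge properties)
        (1, 2, "associatedWith", {"role": "creator", "confidence": 0.9}),
    ]

    # A traversal follows edges while filtering on labels or properties.
    def neighbors(v, label):
        return [dst for src, dst, lbl, props in edges
                if src == v and lbl == label]

    print(neighbors(1, "associatedWith"))  # -> [2]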

HTML5 Microdata and Schema.org

  • Jason Ronallo, North Carolina State University Libraries

When the big search engines announced support for HTML5 microdata and the schema.org vocabularies, the balance of power for semantic markup in HTML shifted.

  • What is microdata?
  • Where does microdata fit with regards to other approaches like RDFa and microformats?
  • Where do libraries stand in the worldview of Schema.org and what can they do about it?
  • How can implementing microdata and schema.org optimize your sites for search engines?
  • What tools are available?

“Linked-Data-Ready” Software for Libraries

  • Jennifer Bowen, University of Rochester River Campus Libraries

Linked data is poised to replace MARC as the basis for the new library bibliographic framework. For libraries to benefit from linked data, they must learn about it, experiment with it, demonstrate its usefulness, and take a leadership role in its deployment.

The eXtensible Catalog Organization (XCO) offers open-source software for libraries that is “linked-data-ready.” XC software prepares MARC and Dublin Core metadata for exposure to the semantic web, incorporating FRBR Group 1 entities and registered vocabularies for RDA elements and roles. This presentation will include a software demonstration, proposed software architecture for creation and management of linked data, a vision for how libraries can migrate from MARC to linked data, and an update on XCO progress toward linked data goals.

Your Catalog in Linked Data

  • Tom Johnson, Oregon State University Libraries

Linked Library Data activity over the last year has seen bibliographic data sets and vocabularies proliferating from traditional library sources. We’ve reached a point where regular libraries don’t have to go it alone to be on the Semantic Web. There is a quickly growing pool of things we can actually “link to”, and everyone’s existing data can be immediately enriched by participating.

This is a quick and dirty road to getting your catalog onto the Linked Data web. The talk will take you from start to finish, using Free Software tools to establish a namespace, put up a SPARQL endpoint, make a simple data model, convert MARC records to RDF, and link the results to major existing data sets (skipping conveniently over pesky processing time). A small amount of “why linked data?” content will be covered, but the primary goal is to leave you able to reproduce the process and start linking your catalog into the web of data. Appropriate documentation will be on the web.
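
The MARC-to-RDF step is often the sticking point, so here is a sketch of one way it can look in Python, assuming pymarc and rdflib; the filenames, the example.org namespace, and the choice of Dublin Core terms are mine, not the talk’s:

    from pymarc import MARCReader
    from rdflib import Graph, Literal, Namespace
    from rdflib.namespace import RDF

    DCT = Namespace("http://purl.org/dc/terms/")
    EX = Namespace("http://example.org/catalog/")  # your namespace goes here

    g = Graph()
    with open("records.mrc", "rb") as fh:          # hypothetical input file
        for record in MARCReader(fh):
            # Mint a URI from the control number (001); real records
            # need guards for missing or duplicate fields.
            uri = EX[record["001"].data.strip()]
            g.add((uri, RDF.type, DCT.BibliographicResource))
            if record["245"] is not None:
                g.add((uri, DCT.title, Literal(record["245"]["a"])))

    g.serialize("catalog.ttl", format="turtle")

The linking step would then replace literals like names and subjects with URIs from the major data sets the talk mentions.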

NoSQL Bibliographic Records: Implementing a Native FRBR Datastore with Redis

  • Jeremy Nelson, Colorado College, jeremy.nelson@coloradocollege.edu

In October, the Library of Congress issued a news release, “A Bibliographic Framework for the Digital Age” outlining a list of requirements for a New Bibliographic Framework Environment. Responding to this challenge, this talk will demonstrate a Redis (http://redis.io) FRBR datastore proof-of-concept that, with a lightweight python-based interface, can meet these requirements.

Because FRBR is an entity-relationship model, it is easily implemented as key-value pairs within the primitive data structures provided by Redis. Redis’ flexibility makes it easy to associate arbitrary metadata and vocabularies, like MARC, METS, VRA or MODS, with FRBR entities and interoperate with legacy and emerging standards and practices like RDA Vocabularies and Linked Data.
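
As a rough illustration of that claim, here is a sketch with redis-py; the key-naming convention and the fields are invented for this post, not the talk’s actual schema:

    import redis

    r = redis.Redis()  # assumes a local Redis server

    # FRBR entities as Redis hashes, one hash per entity.
    r.hset("frbr:work:1", mapping={"title": "Moby-Dick", "creator": "Melville"})
    r.hset("frbr:expression:1", mapping={"language": "eng", "work": "frbr:work:1"})
    r.hset("frbr:manifestation:1",
           mapping={"format": "print", "expression": "frbr:expression:1"})

    # Entity relationships as sets, so a work can point to all its expressions.
    r.sadd("frbr:work:1:expressions", "frbr:expression:1")

    print(r.hgetall("frbr:work:1"))

Arbitrary vocabularies (MARC, METS, VRA, MODS) then become extra fields on the same hashes.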

ALL TEH METADATAS! or How we use RDF to keep all of the digital object metadata formats thrown at us.

  • Declan Fleming, University of California, San Diego

What’s the right metadata standard to use for a digital repository? There isn’t just one standard that fits documents, videos, newspapers, audio files, local data, etc. And there is no standard to rule them all. So what do you do? At UC San Diego Libraries, we went down a conceptual level and attempted to hold every piece of metadata and give each holding place some context, hopefully in a common namespace. RDF has proven to be the ideal solution, and allows us to work with MODS, PREMIS, MIX, and just about anything else we’ve tried. It also opens up the potential for data re-use and authority control as other metadata owners start thinking about and expressing their data in the same way. I’ll talk about our workflow, which takes metadata from a stew of various sources (CSV dumps, spreadsheet data of varying richness, MARC data, and MODS data), normalizes it into METS (our Metadata Specialists create an assembly plan), and then ingests it into our digital asset management system. The result is HTML, RSS, METS, and XML output, and it opens linked data possibilities that we are just starting to explore.

UDFR: Building a Registry using Open-Source Semantic Software

  • Stephen Abrams, Associate Director, UC3, California Digital Library
  • Lisa Dawn Colvin, UDFR Project Manager, California Digital Library

Fundamental to effective long-term preservation analysis, planning, and intervention is the deep understanding of the diverse digital formats used to represent content. The Unified Digital Format Registry project (UDFR, https://bitbucket.org/udfr/main/wiki/Home) will provide an open source platform for an online, semantically-enabled registry of significant format representation information.

We will give an introduction to the UDFR tool and its use within a preservation process.

We will also discuss our experiences of integrating disparate data sources and models into RDF: describing our iterative data modeling process and decisions around integrating vocabularies, data sources and provenance representation.

Finally, we will share how we extended an existing open-source semantic wiki tool, OntoWiki, to create the registry.

saveMLAK: How Librarians, Curators, Archivists and Library Engineers Work Together with Semantic MediaWiki after the Great Earthquake of Japan

  • Yuka Egusa, Senior Researcher of National Institute of Educational Policy Research
  • Makoto Okamoto, Chief Editor of Academic Resource Guide (ARG)

On March 11th, 2011, the biggest earthquake and tsunami in Japanese history struck a large area of northeastern Japan. Many people worked together to rescue those in the area. For the library community, a wiki named “savelibrary” was launched the day after the earthquake to share information on damage and rescue efforts. Later, museum curators, archivists, and people from community learning centers started similar projects. In April we joined the saveMLAK project and launched a wiki site using Semantic MediaWiki at http://savemlak.jp/.

As of November 2011, information on over 13,000 cultural organizations has been posted to the site by 269 contributors. The gathered information is organized by wiki categories for each type of facility, such as libraries, museums, and schools. We have held eight edit-a-thons to encourage people to contribute to the wiki.

We will report on our activity: how the libraries and museums were damaged and have been recovered through great effort, and how a new style of collaboration among the MLAK community, wikis, and other volunteer communities can work in a crisis.


Conversion by Wikibox, tweaked in TextWrangler. Trimmed email addresses; otherwise these are as-written. Did I miss one? Let me know!

Tags: , , , , , ,
Posted in computer science, library and information science, scholarly communication, semantic web | Comments (0)

Web of data for books?

November 5th, 2011

If you were building a user interface for the Web of data, for books, it just might look like Small Demons.

Unfortunately you can’t see much without logging in, so go get yourself a beta account. (I’ve already complained about asking for a birthday. My new one is 29 Feb 1904; you can help me celebrate in 2012!)

Their data on Ireland is pretty sketchy so far. They do offer to help you buy Guinness on Amazon though. :)

Tags: ,
Posted in books and reading, library and information science, semantic web, social semantic web | Comments (0)

Frank van Harmelen’s laws of information

November 1st, 2011

What are the laws of information? Frank van Harmelen proposes seven laws of information science in his keynote to the Semantic Web community at ISWC 2011. ((He presents them as “computer science laws” underlying the Semantic Web; yet they are laws about knowledge. This makes them candidate laws of information science, in my terminology.))

  1. Factual knowledge is a graph. ((“The vast majority of our factual knowledge consists of simple relationships between things, represented as a ground instance of a binary predicate. And lots of these relations between things together form a giant graph.”))
  2. Terminological knowledge is a hierarchy.
  3. Terminological knowledge is much smaller ((by 1-2 orders of magnitude)) than the factual knowledge.
  4. Terminological knowledge is of low complexity. ((This is seen in “the unreasonable effectiveness of low-expressive KR”: “the information universe is apparently structured in such a way that the double exponential worse case complexity bounds don’t hit us in practice.”))
  5. Heterogeneity is unavoidable. ((But heterogeneity is solvable through mostly social, cultural, and economic means (algorithms contribute a little bit). ))
  6. Publication should be distributed; computation should be centralized, for speed: “The Web is not a database, and I don’t think it ever will be.”
  7. Knowledge is layered.

What do you think? If they are laws, can they be proven/disproven?
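
As a toy rendering of the first three laws (in Python with rdflib; all the names are invented): the terminological part is a small hierarchy, the factual part is a graph of ground binary facts, and in any real knowledge base the second set dwarfs the first:

    from rdflib import Graph, Namespace
    from rdflib.namespace import RDF, RDFS

    EX = Namespace("http://example.org/")  # illustrative namespace
    g = Graph()

    # Law 2: terminological knowledge is a hierarchy.
    g.add((EX.Scientist, RDFS.subClassOf, EX.Person))
    g.add((EX.Person, RDFS.subClassOf, EX.Agent))

    # Law 1: factual knowledge is a graph of binary ground facts
    # (and, law 3, there is far more of it than of the hierarchy).
    g.add((EX.frank, RDF.type, EX.Scientist))
    g.add((EX.frank, EX.gaveKeynoteAt, EX.ISWC2011))

    tbox = [t for t in g if t[1] == RDFS.subClassOf]  # terminological triples
    abox = [t for t in g if t[1] != RDFS.subClassOf]  # factual triples
    print(len(tbox), len(abox))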

Semantic Web vocabularies in the Tower of Babel

I wish every presentation came with this sort of summary: slides and transcript, presented in a linear fashion. But these laws deserve more attention and discussion, especially from information scientists. So I needed something even punchier to share (prioritized thanks to Karen).

Tags: , , ,
Posted in computer science, information ecosystem, library and information science, PhD diary, semantic web | Comments (0)

Quantified Self Europe, two talks proposed

October 12th, 2011

Thanksgiving weekend doesn’t really register in Europe. But this year it will for me: I’m going to Amsterdam for Quantified Self Europe, since I’m lucky enough to have a scholarship covering conference fees.

Today I proposed two talks:

  1. Weight and exercise tracking (which I’ve been doing in various forms for 19 months, currently using a Philips DirectLife exercise monitor and a normal scale, collected with The Hacker’s Diet). Mainly, these are less integrated than they could be, and I’d like to advocate interoperability, APIs, and uniform formats — while hopefully getting some ideas from the audience about quick hacks to improve my current system.
  2. Lifetracking, privacy & the surveillance society. This brings together two themes: First, how individuals’ lifetracking can be seen as a re-enactment of privacy, with changed ideas of what that means (e.g. panopticon, sousveillance, etc.). Second, the increased awareness about the wealth of personal data held by corporations (e.g. German politician Malte Spitz sued to get 6 months of his telecom data). The boundary between public life and private life is continually shifting as communication technology and social norms evolve; this talk investigates how lifetracking and the quantified self movement push the privacy/publicity boundaries in multiple ways. QS increases the public audience for data-driven stories of private lives while also highlighting the need for individuals to control access to and the disposition of their own personal data.

Ironically, self-surveillance was an academic interest of mine before it became a personal one: back in 2009, Nathan Yau and I wrote a paper for the ASIST Bulletin about self-surveillance (PDF) [less pretty in HTML]. It helped interest me in the Semantic Web, too: putting data in standard formats would make it easier to make data-driven visualizations, so lifetracking and the quantified self movement are a great use case for the (social) Semantic Web. QS also shows how privacy cuts both ways and could provide an early-adopter audience for the kind of fine-grained privacy tools a colleague is developing.

(A first reply to Nic’s encouragement)

Tags: , , , , , , ,
Posted in semantic web, social semantic web | Comments (0)