Looking out towards an ever widening horizon

September 22, 2011

Being asked to judge a competition is, to be frank, a terrifying experience and one for which I never quite feel up to the task. As the deadline for submitting my marks approaches my anxiety levels climb until I end up wondering whether asking my next-door neighbour to do it for me is a feasible and/or ethical option. And then, finally, I sit down with the judging criteria and the scoring sheet in front of me and everything starts to flow in an orderly and quietly magical way. I look at each entry in turn and give them marks and make supporting comments based on the judging criteria and my personal perspective. For a couple of hours I give my full attention to the competition entries in front of me and don’t let any thoughts about the rest of the world (particularly the rest of the judging panel or the competition entrants themselves) enter my head. Once I’ve scored every entry, I spend a very short amount of time checking that the scores I’ve given still make sense when I take a step back and look at them in relation to each other. Then I submit my scoring sheet and celebrate with a cup of tea. And then I start worrying again but only until I see the announcement of the winning entries and can feel reassured that my personal perspective was not so very far off the perspectives of the other judges.

Anyhow, this blogpost wasn’t meant to be a blow-by-blow account of the trials and tribulations of being a competition judge – what I really want to do is share some of my thoughts about the competition entries. What struck me straight off was the wide range of use cases, the deep originality demonstrated by the 11 entries and the potential many of them had for being turned into real-world applications with relatively little redevelopment work. In fact I gave six of the entries top marks for the ‘What potential does it have’ score because using them instantly sparked my imagination and I could see how the applications either opened up possibilities for other applications or had the potential to bring a new audience to a particular dataset. For instance, one of the winning entrants, ‘What’s About’, uses your current location to reveal nearby English Heritage ‘nationally important places’ by visualising them on a Google Map. Straight away I could see that this could be useful for helping individual users discover places of interest on their doorstep that they might not be aware of. With very little further development I could imagine this being useful as a ‘virtual field trip’ tool in geography or history classes. Or as a pre- and/or post-school-trip tool that allows students to explore the actual site of the visit (via Google Streetview) and read related books via the link to the British Library dataset. Or as a leisure and tourism search tool that could draw tourists to lesser known sites of interest. The list goes on and on.
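The core of a location-aware tool like ‘What’s About’ is simply filtering a set of geocoded records by distance from the user. Here is a minimal sketch of that idea in Python, using the standard haversine formula and entirely made-up place records (the real application draws on the English Heritage dataset and plots results on a Google Map):

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points, in kilometres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    dlat, dlon = lat2 - lat1, lon2 - lon1
    a = sin(dlat / 2) ** 2 + cos(lat1) * cos(lat2) * sin(dlon / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))  # mean Earth radius ~6371 km

def nearby(places, lat, lon, radius_km=5):
    """Return the names of places within radius_km of (lat, lon), nearest first."""
    hits = [(haversine_km(lat, lon, p["lat"], p["lon"]), p["name"]) for p in places]
    return [name for dist, name in sorted(hits) if dist <= radius_km]

# Illustrative (invented) heritage records, not real dataset entries
places = [
    {"name": "Castle ruins", "lat": 51.507, "lon": -0.128},
    {"name": "Roman villa", "lat": 51.520, "lon": -0.100},
    {"name": "Distant abbey", "lat": 53.000, "lon": -1.500},
]
print(nearby(places, 51.51, -0.12))  # → ['Castle ruins', 'Roman villa']
```

In a real deployment the user’s coordinates would come from the browser’s geolocation API and the results would be rendered as map markers; the distance filter above is the piece that turns a flat dataset into a “what’s near me” view.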

One of my areas of expertise is usability so I was particularly interested in how easy each entry was to use. In some ways this was not a simple thing to judge because the entries ranged from those aimed squarely at folks with technical expertise, such as Mark van Harmelen’s ‘Command Line Ruby Database-free Processor’, through to applications aimed at end users with no technical or specialist knowledge, such as Alex Parker’s Timeline application, and others which had a foot in both camps, such as Thomas Meehan’s Lodopac. Also, the nature of the entries themselves varied greatly, from fully functioning user interfaces through to more conceptual demonstrator-style entries. The only way for me to score the entries fairly was to take each application on its own merits. Generally speaking, though, I gave entries a higher score if they were easy to use on first view, and lower marks if they couldn’t be used without referring to the supporting notes, or if an application was aimed at a non-technical end user but, in practice, that user would need some technical expertise in order to get to grips with it. I wasn’t worried or adversely influenced by the odd technical glitch as long as it wasn’t obviously linked to a deeper user experience problem.

As a member of the JISC Activity Data programme synthesis team it was gratifying to see the OpenURL router dataset being utilised by four of the entries (one of which was from the team at EDINA who actually originated the data as part of their JISC AD project).

- Command Line Ruby Database-free Processor
- OpenURL Router Data Prototype Recommender [EDINA entry]
- OpenURL.ac.uk Stat Explorer
- Using Gource to Visualise OpenURL Router Data

Two of the OpenURL entries were also notable for their potential as serendipity/distraction engines, namely the applications submitted by EDINA and Chris Keene, which very quickly led me to intriguing-looking articles within one or two clicks of their respective interfaces. [Can I interest anyone else in Vogel et al.’s Acoustic analysis of the effects of sustained wakefulness on speech, or maybe in Benatar’s Why it is Better Never to Come Into Existence?]

Looking back at the accompanying notes I submitted here’s a selection of my verbatim comments which highlight a few points of personal interest:

  • Composed impressed me with how tightly Owen Stephens had integrated the display of the MusicNet record within the relevant Copac record. My comment: “[...] the way the bookmarklet returns the results into the blank space on the right of the Copac record seems nothing short of magical. It would be good to explore how it could be expanded to work for non-music records.”
  • Timeline had me doing virtual cartwheels in the comment box. A subset of my comments: “A small amount of additional development would make it endlessly browseable, something I could get lost in for hours. [...] A really elegant interface which could be used as another route to discovering items of interest in any visual collection [...].” I could also see potential for Alex’s interface to be combined with Yogesh Patel’s Discovery Map to add a geographical element to the interface.
  • Using Gource to Visualise OpenURL Router Data stood out for me because Tony Hirst’s use of visualisations “[...] elevate[s] data to something that is potentially engaging for an audience.” Not only are tools like Gource helping us to, literally, see data in a new way but visualisations present data in a way that, I would argue, increases its shareability to a wider audience [see websites like http://infosthetics.com/ and http://dailyinfographic.com/ for example].

Reflecting on the entries as a whole I was impressed by the quality of lots of the entries’ supporting notes (which made my job as a judge much easier) and greatly inspired by the possibilities that these competition entries open up in their wake. It’s worth mentioning that all the entries are open source (as a central condition of the competition) so I’ll be watching with interest to see what new applications and use cases emerge in response to the competition entries in the forthcoming months and beyond.


Developers’ entries help us explore new possibilities in discovery

September 15, 2011

It really was a tough call to pinpoint a clear winner for the #discodev competition. After we gave people a bit more time, using some of the August lull to work on applications, we ended up with a really good array of entries, demonstrating a wide range of possibilities. A key judging criterion (obviously) concerns the usability of the application. But judging aside, I am personally less concerned with how usable a rapidly developed application is (and some of these applications have worked very effectively with complex and often dense datasets) than with how much they get me thinking about potential use cases and benefits.

To a large degree, the Discovery programme is about identifying the potential, and where appropriate finding ways to build on someone’s seed of an idea. Applications such as Yogesh Patel’s experiment with Archives Hub linked data might only scratch the surface of the dataset, but they still prompt us to think about some of the great potential that exists. Along with What’s About, it hints at the potential of combining historic and contemporary geospatial data to provide new routes through to content; to explore the world of ‘exploration’ spatially, as opposed to through the linear and hierarchical structure of the archival description. I think the archival community especially is hungry for examples to help us get past some of our entrenched thinking about what discovery interfaces look like. Along with initiatives such as HistoryPin and OCLC’s MapFAST, these applications give us something tangible to react to as we explore ideas around discovering library, archival, or museum data geospatially.

We’re also learning more about the potential of Linked Data. The entry from Mathieu D’Aquin, Discobro, complements the research and development activity of the JISC-funded LOCAH project perfectly in this regard. These are projects that enable the archival community to see how EAD rendered as linked data can become more embedded within the wider web of data; and instantly (it seems to me) we’re forced beyond the finding aid and document-centric mindset, and into thinking about our descriptions as data that needs to be interlinkable to be found and used. It is remarkable how well Discobro works. My own search for the Stanley Kubrick archives in the Archives Hub using the bookmarklet immediately provided multiple links out to DBpedia entries on Kubrick’s life, cinematography, and films. All this is achieved not through a manual mashing of data, but through an automatic ‘meshing’ that can scale (which is perhaps one of the most heady promises of Linked Data).

Will Linked Data be The Way Forward? The jury’s still out, but applications such as Discobro help us understand in much more tangible terms what benefits it might deliver.

And some applications demonstrated benefits that we can work on delivering much more immediately. For me the standout here is the OpenURL Router Recommender developed by Dimitrios Sferopoulos and Sheila Fraser at EDINA. My brain’s whirring with the possibilities of building this functionality into article search services at the local or national level (for example, embedding it into the newly designed Zetoc, which will be launched later this year). The use case for recommender functions is already proven, although we have more to learn about such functions in academic and teaching contexts, but what EDINA have demonstrated is what you can achieve through the network effect of gathering data centrally. Patterns and relationships between articles emerge that are not readily available through other means. It’s simple, and the data’s already there waiting to be exploited. As a result we can provide routes through to discovery based on communities of use and disciplinary context, not descriptive metadata alone.
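The “network effect” behind a recommender like EDINA’s can be illustrated with a very small co-occurrence sketch: articles requested in the same resolver session are counted as related, and the most frequently co-requested articles become the recommendations. This Python example uses invented session data purely for illustration; it is not EDINA’s implementation:

```python
from collections import defaultdict
from itertools import combinations

def build_cooccurrence(sessions):
    """Count how often each pair of articles appears in the same session."""
    counts = defaultdict(lambda: defaultdict(int))
    for session in sessions:
        for a, b in combinations(set(session), 2):
            counts[a][b] += 1
            counts[b][a] += 1
    return counts

def recommend(counts, article, n=3):
    """Articles most often requested alongside the given one, ties by name."""
    related = counts.get(article, {})
    return [a for a, _ in sorted(related.items(), key=lambda kv: (-kv[1], kv[0]))[:n]]

# Illustrative (made-up) resolver sessions: each list is one user's requests
sessions = [
    ["doi:A", "doi:B", "doi:C"],
    ["doi:A", "doi:B"],
    ["doi:B", "doi:C"],
    ["doi:A", "doi:D"],
]
counts = build_cooccurrence(sessions)
print(recommend(counts, "doi:A"))  # → ['doi:B', 'doi:C', 'doi:D']
```

The point is exactly the one made above: none of these relationships is recorded in any descriptive metadata; they emerge only when usage data is gathered centrally across many users.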

Neeta Patel’s simple visualisation of the MOSAIC circulation data demonstrates something similar. Through my involvement with the SALT and Copac Collections Management projects, we know that libraries are already using their circulation data (if they collect it) to inform collection management decisions, but that this work often involves scrutinising spreadsheets and figures. Visual views of the data can really help support such analysis, and give that at-a-glance overview that can often tell a whole story.

There’s obviously a lot more that could be said about these entries (I wish I could touch on them all), and hopefully we’ll hear some views from my Discovery cohorts. I’m now interested in seeing what conversations open up as a result, and what practical work we can carry forward through new collaborations.

