Top 5 Articles from the #LearnxAPI MOOC

As we look back on the MOOC that was #LearnxAPI, we begin to understand which content was most useful. If you haven’t already joined, it’s not too late: you can still sign up for the Learn xAPI MOOC.

But, without any further delay, we present the top 5 articles (from the 75+ on offer):

5. xAPI in Action

I get carried away in this video; at one point my hands appear to act independently of my body. If you can get over that (and apparently most people did) then you’ll get to watch a quick introduction to xAPI for newcomers.

4. Tin Can: It’s not really an API then?

An oldie, but a goodie. This post from Andrew Downes dates back a few years but it still proved of significant interest to the audience. Turns out it’s not an API after all…

3. Five things a web developer needs to know about the xAPI

Delving into the ins and outs of xAPI, this article from industry bigwigs working with ADL had folks screaming for more. It turns out that was often because it still went over people’s heads, but with a bit of conversation and explanation the key messages came through.

2. Why do we need xAPI?

Aaron has a way of putting the tech perspective into a practical context. This video proved hugely popular as a primer on why we might bother using xAPI in the first place. And then he jumped onto the MOOC to provide answers to everyone’s questions and comments. Which was awesome.

1. #LearnxAPI Twitter List

And at number one… Not a dramatically over-produced blockbuster video. Not a drag and drop interaction. But a list. A very good list. If you want to learn more about xAPI, these are the people you need to stalk and talk with on Twitter.

Open Badges; issuing with the xAPI

Online learning is becoming more complex. It’s social, gamified, blended, active, synchronous, informal and all the rest of it. Simply getting a certificate for turning up and consuming content seems somewhat inappropriate as a measure of learning.

Bryan Mathers; CC BY-SA 2.0

How you participate in the learning process is worthy of recognition; your input could be vastly different to the next person. We know we can use the xAPI to start tracking all of these different forms of activity. But this data is often too granular to be meaningful on a wider scale. Statements of activity alone are meaningless. What we need is a modern recognition system for learning that can tap into the evidence provided by the xAPI and provide context for all that activity. Enter Open Badges.

Open Badges are an attempt to create just such a recognition system. Conceived as a means for anyone to issue badges of recognition, Open Badges can be used for endorsements of knowledge, skill and character by individuals and communities just as easily as qualification bodies and organisations can use them.

The semantics of a ‘badge’ might not seem immediately suited to the workplace. However, the word has become the de facto term in educational circles. Students will enter the workplace with badges earned through their education, and we will need to decide how we accept them. It makes sense to adopt the practice as we look to close the gap between education and employment. We want to inherit badges that are meaningful, authenticated and part of a bigger plan. To do this, we need to adopt a standard for how we issue and display our badges.

Digital Badges vs Open Badges

Badges are popping up all over the place, often as a by-product of gamification. Khan Academy, Coursera, Code School: you name it, they’ve badged it. However, not all badges are created equal. For badges to retain value, they must be verifiable and portable. Unfortunately, that’s not the case with many of the mainstream websites issuing badges. It’s pretty easy to fake them, or to claim others’ badges as your own. We’re starting to see some portability (LinkedIn and Coursera, for example), but it’s also tricky to know what a learner had to do to earn a badge, beyond completing ‘some course’. Fortunately, these issues have already been addressed by the Open Badge specification.

Open Badges specify that badges be open, portable and evidence-based. By ‘looking inside’ an Open Badge, we get to see a range of data that tells us who issued the badge, why they issued it and to whom. All of this metadata is stored alongside the actual badge graphic.
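To make that ‘looking inside’ concrete, here is a minimal sketch of the metadata a badge carries, loosely following the Mozilla Open Badges 1.x assertion format. All of the names, URLs and values below are hypothetical, for illustration only:

```python
# Sketch of the metadata "inside" an Open Badge, loosely following the
# Mozilla Open Badges 1.x format. All names and URLs are illustrative.

badge_class = {
    "name": "xAPI Explorer",
    "description": "Completed the #LearnxAPI MOOC discussion activities",
    "image": "https://example.org/badges/xapi-explorer.png",
    "criteria": "https://example.org/badges/xapi-explorer/criteria",  # why it was issued
    "issuer": "https://example.org/issuer.json",                      # who issued it
}

assertion = {
    "uid": "abc123",
    "recipient": {                                                    # to whom
        "type": "email",
        "hashed": False,
        "identity": "learner@example.org",
    },
    "badge": "https://example.org/badges/xapi-explorer.json",
    "issuedOn": "2014-09-01T12:00:00Z",
    "verify": {"type": "hosted",
               "url": "https://example.org/assertions/abc123.json"},  # verifiable
}
```

Because the assertion is hosted at a public URL, a third party can fetch it and check that the badge really was issued, by that issuer, to that person.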

The Anatomy of an Open Badge (CC BY-SA 2.0; used with permission).

There are three parties in an Open Badge ecosystem: the Issuer, the Earner and the Displayer.

  1. The Issuer is the person or organisation who created the badge. They set the criteria (what you must do to earn the badge) and make the final decision as to whether those criteria have been met. This decision should be backed by evidence that the Earner has indeed met the criteria, along with the other metadata mentioned in the ‘Badge Anatomy’.
  2. The Earner is the recipient of the badge and the person to whom the evidence pertains. In the Mozilla Open Badge system, the Earner claims the Open Badge from the Issuer and stores it in their Badge Backpack.
  3. The Displayer is a shop window for all the Earner’s badges, one that can be shared with other people and other systems. Where other systems adopt the Open Badge specification for showing Open Badges, they are termed Displayers.

It’s not always possible to give an Open Badge directly to a participant for them to put in their Backpack. First of all, the specification currently all but mandates use of the Mozilla Backpack. This is a third-party system, hosted free of charge by Mozilla, but it involves the Earner signing up for a new identity (called a Persona) and takes your learners outside of your platform environment. Not ideal from a user-experience standpoint. The Mozilla Backpack isn’t without its flaws either: it doesn’t work in older versions of Internet Explorer, for example.

In reality, many Issuers will also be Displayers; it seems natural that, having issued the badge, you’d give the learner a place to display it. That Earners can port the badge is, however, crucial. Without this, the value of the badge is severely diminished: it’s only useful whilst the learner is still enrolled in that one environment. But portability also brings other challenges. How will you create compelling evidence that backs up the badge’s value? Will the badge always be valid? Will third parties understand the value of the badge that has been issued?

Building the evidence

Bryan Mathers; CC BY-SA 2.0

Badges are only as valuable as the criteria that inform them, the evidence that describes the achievement and the authority that issues them. And that means issuing badges can become a time-consuming process; it can be difficult to get ‘all the pieces’ in order. Most of the examples in production either showcase a fairly simple ‘complete the course, get the badge’ process, or rely on a person, typically a tutor, who must ultimately issue the badge. The latter offers corporate learning some reassurance as to the value of badges via proctoring, but it’s not scalable. We’ve got enough admin work already.

The spirit of Open Badges would tell us not to worry too much about cheating; the evidence presented should speak for itself (and should be hard to fake). But again, where we lack automated systems for collecting evidence, we find ourselves with an insurmountable admin problem. We need to see the evidence.

Fortunately, we have already found the answer: the Experience API. The xAPI allows us to collect learning and experience data at a very granular level, understanding exactly what a learner consumed and contributed during a learning experience. Whilst this sounds similar to Open Badges in principle, if you’ve spent much time around xAPI data you’ll have figured out that xAPI statements in their standalone form are relatively meaningless. Aggregated together, though, xAPI data can form a compelling record of immutable evidence to be used as the basis for issuing an Open Badge. This gives us everything we look for in a high-quality badge: meaning, evidence and authority.
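That aggregation step can be sketched in a few lines. In this illustration the statements, verb IRIs and the criterion itself are all made up; the point is simply that individually meaningless statements combine into badge-worthy evidence:

```python
# Sketch: aggregate raw xAPI statements into evidence for a badge.
# The statements, verb IRIs and the criterion are all hypothetical.

statements = [
    {"actor": "mailto:jo@example.org",
     "verb": "http://adlnet.gov/expapi/verbs/commented", "object": "activity/1"},
    {"actor": "mailto:jo@example.org",
     "verb": "http://adlnet.gov/expapi/verbs/commented", "object": "activity/2"},
    {"actor": "mailto:jo@example.org",
     "verb": "http://adlnet.gov/expapi/verbs/completed", "object": "course/xapi-101"},
]

def meets_criteria(stmts, actor):
    """Example criterion: completed the course AND commented at least twice."""
    mine = [s for s in stmts if s["actor"] == actor]
    completed = any(s["verb"].endswith("/completed") for s in mine)
    comments = sum(1 for s in mine if s["verb"].endswith("/commented"))
    return completed and comments >= 2

print(meets_criteria(statements, "mailto:jo@example.org"))  # True
```

The filtered statements themselves then become the evidence package attached to the badge, rather than a bare ‘course complete’ flag.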

xAPI and Open Badges

As a community, xAPI specialists and Badge Alliance members are working on the ability to use xAPI data as the evidence for issuing Open Badges. We’ve started out simple, specifying a recipe by which an Issuer can assert, via an xAPI statement, that a user has earned an Open Badge. The next step is articulating a standard way in which xAPI statements could be used as a set of criteria which then leads to the issuing of an Open Badge. This work has already begun in prototypes across the world.
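A hedged illustration of what such an assertion might look like as an xAPI statement. The verb IRI and activity type shown here are indicative only, not the published recipe:

```python
# Sketch of an xAPI statement asserting that a learner earned an Open
# Badge. The IRIs below are illustrative, not the exact published recipe.

statement = {
    "actor": {"mbox": "mailto:learner@example.org", "name": "Jo Learner"},
    "verb": {
        "id": "http://specification.openbadges.org/xapi/verbs/earned",  # assumed IRI
        "display": {"en-GB": "earned"},
    },
    "object": {
        "id": "https://example.org/badges/xapi-explorer",
        "definition": {
            "name": {"en-GB": "xAPI Explorer"},
            # Activity type marking the object as a badge (illustrative IRI)
            "type": "http://activitystrea.ms/schema/1.0/badge",
        },
    },
}
```

Stored in an LRS, a statement like this sits alongside the activity data that justified the badge, so the assertion and its evidence live in the same record.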

This work will not only bring the workplace learning closer to the Open Badge initiative, but will actively shape some of the discussions on how the Open Badge standard needs to evolve to fit the needs of business. If we can make this work, then a path to evidence-based, scalable credentialing is open for business.


Our next version of Curatr

We’re just about to start beta testing the next version of Curatr, our Social Learning platform. We update the platform throughout the year (this is the 17th release since September 2013), but this update is more significant. With this release, we’ll slay a few sacred cows and introduce a few new features which are the first of their kind to hit the Learning Technology market…

Evolving the User Interface

Curatr’s UI has always been one of our most distinctive features. It’s also been divisive. Our trademark ‘circular’ layout has somewhat come to define the platform. But Curatr is more than any particular layout. Earlier in the year we got our first opportunity to customise the interface thanks to TES Global. This has led to the development of a new way to view content in Curatr that we’re calling the ‘Playlist’ view:


Playlist view

As a course administrator you can now choose between ‘Playlist’ view and ‘Classic’ Curatr. It’s the flick of a switch to change between interfaces. You can still use the ‘Classic’ view, if you prefer, or mix and match between the two. We also introduced a modified navigation bar, with a breadcrumb, to help you find your way around a little easier.

We wanted to give course creators the opportunity to separate out each level into a page of its own, via the ‘Level view’:


Level View

When choosing this option, learners enter the course directly to this page. From here they can visit a particular level of content, be that in the Playlist or ‘Classic’ view.

The levels page gave us some more opportunities, like giving administrators the ability to lock each level by time/date, as well as points, or to not lock the levels at all. Curatr fully unlocked makes for a great social knowledge base or an archive view of a previously run course.

Introducing real-time chat

A much under-used element of Curatr has been group chat, which allows for threaded discussion outside of a course. But the threaded nature of the discussion never gained much traction. Inspired by real-time chat tools like Slack (which we use extensively), we wanted to take a new approach to chat in a Social Learning environment.


Real-time chat

Each course now has its own conversation channel (and you can make new conversations outside of courses too), accessed from a tab at the side of the page. These channels can aggregate conversations from many different sources. In the screenshot above you can see the chat following a Twitter hashtag in real time. It can integrate comments going on elsewhere within the course. In time we will pull in more social channels, including enterprise platforms like Yammer. You can also chat directly in the programme. We’ve got lots more to come with this module (XP for tweets, anyone?), but we’ll start off simple and build from there.

A completely new user profile

User profiles in Learning Management Systems are, generally, rubbish. We wanted to do more. Not only did we want our user profiles to really show off a learner’s capability, but we wanted to start people down the path towards owning their learning data. That’s why our profiles come complete with a timeline, powered by the Learning Locker LRS and xAPI.



This timeline will now supersede the ‘portfolio’ tab within our courses. And with chat taking over from ‘peers’, you’ll generally see fewer buttons in the next version of Curatr. Finally, getting your profile set up takes just a couple of clicks, as we’re now fully integrated with LinkedIn.


LinkedIn integration

On top of all this, we’ve overhauled most of the user interface. If you want to be amongst the first to get hands-on with our work, simply sign up to our latest MOOC, Exploring Social Learning. Starting Feb 2nd, it will run entirely on our new update. And if you’re coming along to Learning Technologies this week, come visit us on Stand #125 to find out more.



5 learning tech buzzwords worth defending

This post first appeared in the October edition of Inside Learning Technologies.

It’s easy being a cynic. I’ve found that the more time I spend in the learning industry, the easier it becomes to dismiss new ideas as fads. A new piece of hardware comes out; it will never take off. A new platform emerges; it will never work in my organization. Perhaps it’s time to change our framing. We’re very good at saying what won’t work, but we’re less good at highlighting what might. Buzzwords have come to represent our world-weariness. They are often guilty until proven innocent, and that’s a tough stance from which to change the status quo. After all, we’re becoming better educated when it comes to spotting ‘snake oil’. We understand that no solution is ever a panacea.

In our rush to call out marketing hype, we’re increasingly dismissive of trends and innovations. As some of these trends settle in, maybe we should stop treating ‘buzzword’ as a dirty word. We’re so good at resisting temptation that we run the risk of missing the changes we should be embracing. Here are five big buzzwords that don’t deserve their place on the naughty step.

Big Data

The first thing to know about Big Data is that it’s big. I mean really big. Chances are you do not generate anywhere near the amount of data needed to qualify as ‘big’. I was recently at a recruiting event where a Big Data expert was asked why his company’s Big Data platform was yielding insights, whereas a competitor with a similar product had failed a few years previously. “Well”, said the expert, “they only had 4 million records, so it was never going to work”. His data set? 450 million individuals, each with hundreds of data points. Analysing this scale of data takes a whole new stack of hardware and software services that are unlikely to be at the disposal of the learning department. Big data is big. What you’re more likely to deal with is just ‘data’.

That isn’t to say you don’t have a reasonable amount of data at your disposal. With new standards like xAPI and the increasing use of tools like Google Analytics within learning, there has been a surge in the amount of data available to the learning department. But it’s rare that this would qualify as ‘big data’. Big data specialists are happy to set analysts running wild through data sets, looking for correlations and connections that appear as a result of the scale of the data. Where your data set is smaller, these patterns will be less apparent and certainly less reliable (although scale alone doesn’t mean a data set is valid). In these circumstances, you must be very targeted in your analysis. You must design for data: understanding the data set you will need to collect in order to answer the hypotheses you set for yourself. Without this rigour, it’s almost inevitable you will fail to gather the data you need to do the analysis.

Not a week goes by that I don’t get asked to build a dashboard for data. Generically, the term represents a page full of graphs and numbers that will impress the boss with stats about learning, performance and other good stuff. The importance of shiny should not be underestimated in gaining friends and influencing people.  Nothing wrong with a dashboard in principle; my Google Analytics dashboard shows me exactly what I want to see in real-time for instance. But it works because the data underpinning it is reliable and fit for purpose. The data sets will grow as more devices and applications start churning out data (see: The Internet of Things). This is a trend that is here to stay.


Gamification

Gamification has been around the block for the last couple of years and has moved out of the ‘innovative’ column and towards the ‘tick box’ area of procurement. Most up-to-date LMSs pay homage to basic game-like features. Most authoring tools have introduced more game-like interactions. A lot miss the nuances of a sustainable program of engagement. Very few people seem to understand the behaviourist nature of most gamification. Behaviourist is not a dirty word, by the way; it’s just that most basic gamification is geared towards influencing behaviour.

There are increasing numbers of case studies suggesting that gamification can be used in education to some effect. Gamification is a tool; a means to an end. Sometimes it will be the right tool for the job. In these cases, we shouldn’t be turned off using the techniques because we think it’s a buzzword.


Badges

Badges have seen something of a renaissance in recent times. Code Academy, Khan Academy, Team Treehouse and a whole slew of other consumer-focused learning platforms have embraced badges as a means of informal certification. Seen at times as childish, badging in all its forms has certainly been embraced by the wider web. We should embrace it too. Badges are a trend with some momentum and some purpose; any reliable method of highlighting your abilities and experience that is digital and portable will be a real boost to our industry.

Two problems exist: right now they aren’t that reliable and they aren’t that portable. Many badges are proprietary in nature; you can’t really ‘port’ them anywhere other than the website that issued them. Mozilla’s Open Badge specification is the leading way to make badges portable, but even then, badges can really only be exported to the Mozilla Backpack when implementing the specification. Mozilla has now somewhat stepped back from the initiative, allowing the Badge Alliance to drive things forward. Don’t take this as a sign of project death; every open source project needs to fly the nest in order to truly succeed, and Mozilla presumably believes that time is now.

Reliability and the intrinsic value of a badge is a trickier proposition. Many badges lack inherent value. They aren’t hard enough to achieve. We’re often in such a rush to reward people in our gamified solutions that we devalue badges (I’m at fault here as much as anyone!). In order to fulfil the promise, badges must evolve towards holding inherent value. They should be hard to achieve. Where badges can go beyond certification is in providing the evidence for why a badge was issued. Whether the badge is issued by an automated system or a named individual, providing a set of criteria (against which the badge was issued) and a set of evidence (proving that the criteria were met), forever linked to the badge graphic itself, goes some distance towards proving value and being seen as a reliable measure of ability. Again, Mozilla Open Badges provide the framework. Even though it has its flaws, it is certainly the specification to follow when implementing in your organisation.

xAPI / Tin Can API

The trough of disillusionment looms large on the horizon for xAPI / Tin Can. The standard is now 18 months old and, whilst adoption at face value has been fast, implementations are still thin on the ground. This isn’t because it’s a bad idea; far from it, it’s a groundbreaking idea that we need to happen. But, like all change, it’s tough. The devil is in the detail. To many, it looks like a solution in search of a problem. We know the problems exist in a macro sense: SCORM is inappropriate for tracking experiences in a distributed learning environment. But until enough solution designers understand the opportunity, they won’t solve problems using the methodology. And solution designers won’t understand the opportunity until they are shown best practices and use cases. And round and round we go…

The name is a pain. Tin Can was the project code name before the specification had an official one; xAPI is the formal term. You can use both interchangeably. Rustici, who did much of the early work as a partner of ADL, have a vested interest in maintaining the Tin Can brand, and they have made a huge effort in producing content and libraries to support it. ADL will back its own naming convention and will not change it for a commercial entity. So here we are. Don’t dismiss this as a fad or as something that has been slow to emerge; standards generally move at a snail’s pace. Despite the naming issues, xAPI has made its way onto countless product feature pages. It just needs to be used properly.


MOOCs

Massive Open Online Courses (MOOCs) continue to grow in popularity, and the corporate world is increasingly engaging in the conversation. For some universities, there is a clear strategy underpinning what we see. Overseas students are, and will continue to be, a significant source of income for UK universities. At worst, MOOCs serve as a marketing tool aimed at these students, as well as those at home. At best, they foreshadow the university business model of the future: one that is global in nature and constantly pushing for higher standards of content and conversation. Again, we’re quick to cast aspersions on the pedagogy and quality of MOOCs. But they are competing against the worst form of teaching: dull, hour-long lectures with PowerPoint presentations a decade or more old. This is progress. Most short MOOCs have had more money poured into the content creation process than an undergraduate on-campus program ever has. MOOCs are front-page news. What we’ve overlooked is how far we have come. Online learning used to be seen as suspicious, untrustworthy. MOOCs are far from perfect, but they show us that online learning is increasingly accepted as a means of lifelong education.

If you happen to be at Online Educa, Berlin, 3rd – 5th December, stop by the Curatr stand and say hello!



What Social Learning can Learn from Open Source Software

Over the last six months I’ve been immersed in the world of Open Source Software. At first glance, giving stuff away would seem easy. You package things up, make them available and the world will come knocking for your product, right?  Wrong! Turns out, giving stuff away is hard work. Really hard work. The level of thought, depth and intention that sits behind every successful open source project is huge. Having reflected on my learning as an open source project owner, I actually think that the development of successful OSS (that’s Open Source Software) is the killer case study of how Social Learning in the connected age is a total revolution for business and learning. Here’s the first part of my thinking…

The Lonely Coder

Programming is, at first glance, a very solitary activity. Ever see the film The Social Network? There’s a sequence where one character can’t disturb another because he’s ‘plugged in’: so focused on what he’s doing that he can’t be interrupted. This period happens for all programmers when they start to commit the code they’ve envisaged in their heads. But before this time, and immediately after it, they’ll be learning from their peers.

I see p2p learning happening every day in my organization. We’re a small company: 12 people, based across the UK, USA and Canada. And what we focus on every day is writing computer code that no one has ever written before. That’s not because we’re hugely innovative (well, we are, but that’s a different story). It’s because computer code requires unique sequences of syntax on almost every occasion. The words we use are the same; the order they appear in is most often different. This means that my guys and girls are solving a problem for the first, and potentially only, time, every time they open their computers.

But it’s a huge burden to try and come up with a unique solution every five minutes. And whilst a particular context may seem unique, it turns out that a very similar context has probably occurred hundreds of times before. Chances are, other programmers have solved your problem for you. And they’ve shared their solutions. This might be in direct response to a question posed on a website like Stack Overflow, or it might be more developed: an open library of code which you can download, tweak and use yourself. Every programmer does it; no one can be entirely original.

Almost any programmer will spend more time in Google than she will in code. Any time there is a new challenge, a problem to be overcome, it is to her peers that the programmer turns. Often it’s done in a lurking manner: they arrive, take the advice given, and leave. Every now and again the given solution won’t work, or otherwise isn’t optimal. This is where dialogue often starts: when there’s a problem. If it’s relatively easy to take from your peers, then the next simplest step is to raise a problem with something they’ve done.

I came up with this little participation hierarchy: how people interact with each other in a p2p learning environment. It starts with sponging; lurking; call it what you will. The fact is that in most p2p environments, most people are takers…


Next come the problem-generators. The complaints. The errors. This is where you start to generate a two-way dialogue between peers. Of course, some people who raise issues will also recommend solutions, and so the relationship grows. But recommending a solution is a far cry from actually doing something about it. That’s a special place, where peers actually start producing solutions, ideas or even work on behalf of other people. And where this occurs, or where this is the ultimate goal, it will take someone to manage that relationship, which I’ll come onto next…



Learning Locker wants to store your learning data

Today we announce the version 1 release of Learning Locker, the open source Learning Record Store. Designed to help organisations and individuals store, sort and share learning experience data, Learning Locker is available as a free download from GitHub, or as a hosted service for $199 per month.

Using Learning Locker, organisations can store data from a wide range of platforms and applications using the xAPI (or Tin Can) standard. As xAPI starts to replace SCORM as the key activity-tracking standard in e-learning, the Learning Record Store is becoming a ‘must have’ piece of software for education providers and organisations. Early adopters are already using this ability to track learners’ performance within diverse learning systems, using data from mobile apps, Learning Management Systems, video streaming services, websites and more to analyse performance, appraise students and customise learning experiences.
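As an illustration of how an application pushes data into an LRS such as Learning Locker, here is a minimal sketch using Python’s standard library. The endpoint and credentials are placeholders for your own installation; the /statements resource and the version header come from the xAPI specification:

```python
import json
import base64
import urllib.request

# Sketch: send a single xAPI statement to an LRS over HTTP Basic auth.
# ENDPOINT, KEY and SECRET are placeholders for your own LRS credentials.
ENDPOINT = "https://lrs.example.org/data/xAPI"
KEY, SECRET = "my-key", "my-secret"

statement = {
    "actor": {"mbox": "mailto:learner@example.org"},
    "verb": {"id": "http://adlnet.gov/expapi/verbs/experienced",
             "display": {"en-GB": "experienced"}},
    "object": {"id": "https://example.org/activities/intro-video"},
}

auth = base64.b64encode(f"{KEY}:{SECRET}".encode()).decode()
req = urllib.request.Request(
    ENDPOINT + "/statements",                 # the xAPI statements resource
    data=json.dumps(statement).encode(),
    headers={
        "Content-Type": "application/json",
        "X-Experience-API-Version": "1.0.1",  # required by the xAPI spec
        "Authorization": "Basic " + auth,
    },
    method="POST",
)
# urllib.request.urlopen(req) would submit it; on success the LRS
# responds with the stored statement ID(s).
```

Any tool that can make an HTTP request like this, from a mobile app to a video player, can report activity into the same store.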

Visit the Learning Locker website for details and to sign up for instant access. If you want to inspect LL for yourself before downloading or signing up, you can visit our demo installation to have a play. Over the coming weeks we will be working on our roadmap as we progress to the next stage of development. If you’ve any requests for what you’d like to see next, please do let us know.


Learning Locker: The Open Source Learning Record Store

Today we announce the new version of Learning Locker. If you’ve looked at Learning Locker before you will notice that we’ve changed. We haven’t lost sight of our original ideas, we’ve just changed our approach. Learning Locker will become the first enterprise-ready Learning Record Store to be available completely open source. Here’s what we’re thinking…

Turns out, owning your learning data is dull

We’re passionate about promoting individual ownership of learning data. This is the beta product we originally built: a way to capture and present your learning data using xAPI. The idea was, and remains, a cool one. The problem comes in the capture of this data. If you look at really successful personal data solutions, like Nike+, you see how much work they put into making data capture seamless. Hence the FuelBand. This is a smart solution: slap it on your wrist and all the stats are captured for you.

With our original Learning Locker we couldn’t ever quite crack this problem. It was fundamentally too much work to manually capture your data, or to try and re-direct an xAPI endpoint to your own learning locker. Most products aren’t yet built in a way that is compatible with this approach. The payoffs of owning your learning data are long term, so putting in a lot of short term work just wasn’t compelling as an experience. It was a bit dull.

Enter the Learning Record Store

Previously, whilst we adopted the xAPI, we weren’t conforming to the standard for a Learning Record Store. To be honest we didn’t need to do it because we were a consumer facing product that was designed to be a step removed from an LRS. But in our reflection on the personal data problem we came to the conclusion that the most obvious way to automate the process was to also tackle the ownership problem at the organisation level. If we could build a platform that was good for companies to manage their learning data, it would be a lot easier to configure individual user services off the back of that. If a company was doing the leg-work in collecting the data, all the individual would need is a pass-through on the feed. Kinda how you might connect Twitter and WordPress together.
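That pass-through idea can be sketched in a few lines. Everything here (the statements, mailboxes and verb IDs) is made up for illustration; the organisation’s LRS collects everything, and an individual’s personal store receives only the statements that belong to them:

```python
# Sketch of the "pass-through" feed: filter the organisation's statement
# stream down to one individual's statements. Actor matching is simplified
# to mbox comparison; all data below is hypothetical.

org_stream = [
    {"actor": {"mbox": "mailto:ann@example.org"}, "verb": {"id": "v/completed"}},
    {"actor": {"mbox": "mailto:bob@example.org"}, "verb": {"id": "v/commented"}},
    {"actor": {"mbox": "mailto:ann@example.org"}, "verb": {"id": "v/commented"}},
]

def personal_feed(stream, mbox):
    """Return only the statements whose actor matches the given mbox."""
    return [s for s in stream if s["actor"].get("mbox") == mbox]

print(len(personal_feed(org_stream, "mailto:ann@example.org")))  # 2
```

In practice the filtered statements would be forwarded on to the individual’s own store, much as a service like IFTTT pipes one feed into another.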

This would mean a turnaround in our strategy; no organisation would adopt an xAPI solution that wasn’t 100% compliant with the standard. We would have to become an LRS. This wasn’t a trivial decision; writing a fully conformant LRS is a big undertaking. At this time I only know of two other commercially available LRS platforms: Watershed and Wax. But after a lot of thought, Dave and I decided it was the best way forward.

Learning Locker – for real this time

Which brings us to today. Dave has been plowing through the xAPI specification for the last two months, refactoring our code to bring it up to scratch. We’ve faced a lot of tough decisions along the way. One of those was about focus. We couldn’t bring both the organisation and the individual parts to market at the same time. We chose to concentrate on the organisational elements first, figuring that if we could get data collection going here first, we will have a nice base of statements to push out to individuals when the time is right.

Of course, we had previously committed to open sourcing our work. This is an approach we wanted to keep. As such, I’m delighted to announce that Learning Locker will be the first open source (GPL 3.0), enterprise-ready Learning Record Store to make it to the marketplace. In fact, we’re already in place with four pilot organisations.

Now the really hard work starts. Open source isn’t just about giving away code. It’s about fostering a community that can build on our base and support growth. That’s why we’ve set out the building blocks of a governance plan and been around the world recruiting the best and the brightest to come help us out. We’re delighted to have people like Megan Bowe and Aaron Silvers on board to help shape our community approach. And we’ve got great support from the UK in the form of Jason McGonigle and Bryan Mathers. We’ll be adding to our board in the coming weeks, so keep an eye out.

We are completely committed to making learning data work for individuals. This means ownership as well as services. It might take us a little longer to get there, but now I’m completely confident we’ll arrive at our destination. In anticipation of launching Learning Locker, please sign up to our mailing list. We’ll use this not only to announce the launch but also to keep people up to date with the project on a monthly basis. Check out our first meetup times and, if you are in the area, please do come and join us.

This is going to be fun.


4 Ways to use Curation in Learning

I’m getting very excited about the possibilities of using more digital curation in learning.  The trouble with curation is that I’m seeing it everywhere. As such I wanted to come up with a short framework that I could use to talk about how I see curation in learning being used, both at the organisation level and for individuals.  So, go easy on me; here’s what I’m proposing…

We can think of digital curation as being useful to us in four broad roles that I’m calling Inspiration, Aggregation, Integration and Application:

- Inspiration is curation done by other people on your behalf, outside of a formal learning environment.
- Aggregation is the same thing, but done within a formal learning context.
- Integration is a more personal curation process: how individuals blend new learning experiences with existing thoughts.
- Application is how individuals apply new insights in the real world: how we individually manage knowledge on a day-to-day basis.

I capture this flow in a simple matrix that shows how the four types of curation can flow into each other in a continuous learning cycle:

[Matrix: Inspiration, Aggregation, Application, Integration]


Inspiration

With the proliferation of content on the Web, it should come as no surprise that we are in increasing need of systems to sort, maintain and re-purpose content in a systematic manner. For a while now we’ve been making do with search as a primary means of sifting through the pile. But increasingly we are turning to named experts to act as our filter to content. Where these experts spend time storing, transforming and sharing resources with the world, they are in fact playing the role of the curator on our behalf. These experts have appeared in every industry. In our own industry content curators are plentiful and many have become well known for their curation efforts. Chances are that if you’ve attended a conference in the last 3 years, you’ve benefitted from the backchannel curation skills of David ‘LnDDave’ Kelly. Kelly stores an event’s tweets, blogs and presentations and brings them together on one webpage for easy reference. By following curators like Kelly we can draw inspiration from a set of content that we know is going to be relevant to our work. It’s like being hand-delivered the best insights into an industry, straight to your inbox.

Organizations can of course benefit from this approach.  Here, the role of the digital curator is that of a guardian of resources; someone who stores, transforms and shares within the context of the strategic needs of the company.  Some companies do this internally; curating insights onto social intranet pages or moderating communities of practice for the best thoughts.  Others do it externally, for the benefit of their customers.  Companies like Spiceworks, the IT support company, base their business model around their community, from which they curate the best questions and answers to help promote a collaborative and consistently helpful service.


Aggregation

Increasingly we are being challenged to deliver ‘more with less’ in the learning department. Curation potentially holds an interesting answer to some of the constraints we’re facing in time and cost. Why build new content when you can curate?

In the context of a formal learning intervention, organizations can use curation to aggregate content as part of the learning design process.  This can mean using insights gathered from both inside and outside the organization as a baseline of content from which to develop new courses. Sometimes these resources will be re-written and transformed; other times it is enough to use the resources in their original form.  With the increasing quality of online educational content, it is becoming somewhat redundant to always make new material.  You aren’t going to make a ‘better’ TED video than the real deal. It is no longer necessary to create new learning content each time a demand passes down the line.  Blending resources from the outside world with a selection of resources from within the firewall can increase your speed to delivery and cut costs dramatically for the L&D department.

Taking this further, some organizations are beginning to advocate a ‘resources not courses’ strategy. Here L&D looks beyond providing highly structured courses and towards individual resources. BP adopts this approach. Led by Nick Shackleton-Jones, Director of Online and Informal Learning, BP focuses on producing high-quality performance support tools, videos and infographics, delivered through simple but effective portal-type websites. They do not develop traditional courses at all; the aggregation and presentation of resources has proven to be far more successful than any previous course ever was.


Integration

Curation can and should be used as a tool of teaching and learning itself. It is not enough for us to simply present content and imagine that it will be so compelling that our audience will instantly change their behavior. The learning process is a complex one that, especially in experienced learners, requires a process of integration between new and old experiences. In many ways, when we seek to ‘teach’ people, what we are really seeking to achieve is ‘integration’ between old and new experiences. For most individuals, this will be a process of curation: storing ideas, transforming them to fit with existing experiences and mental models, and, at some point in the future, sharing them through behavior. Thought of in this light, we can suggest that curation is a key part of the learning process, a key digital literacy that will be required of all current and future knowledge workers. It is not enough to be told; that’s grade school stuff. In the current working landscape it is constantly necessary to problem-solve and innovate. That requires critical thought.

Taking this approach, we can seek to produce pedagogical frameworks in our formal learning activities that encourage individuals to cast a critical eye over knowledge, and to be more reflective in their approach to learning. In these circumstances, learners articulate their grasp of a subject area by storing, transforming and sharing their understanding.  If we don’t allow for these processes, we are short-changing learners.  Static, anti-social online learning activities are repeat offenders here; presenting an experienced learner with an online PowerPoint presentation and expecting them to have a meaningful, lasting learning experience simply isn’t going to cut it.  Learners have to be able to curate formal learning to integrate new insights with existing experiences, and to demonstrate back to you, the teacher, how they are going to change.


Application

Moving beyond the classroom and into the world of day-to-day work, we can envisage curation as a tool of continuous personal learning. Here curation helps individuals to capture information that is important to them and to wrap it in a context that gives more meaning than the message alone would impart. Many of us do this in blogs, in tweets and in other collections of knowledge that we share with the world. Increasingly we are seeing the rise of this concept in the form of Personal Knowledge Management (PKM; see Harold Jarche’s website for more information). According to Jarche, individuals seek, sense and share as they work to explicitly state their understanding of the world. This process could be seen as the fundamental driver of user-generated content: more and more people are willing to share content to inspire others. This process is more than just bookmarking or collection building. For many individuals, their curated insights represent a ‘learning locker’ which allows for reflection as well as a demonstration of what they know. It is these individuals that seed the world of content that organizations often seek to curate. And so as we encourage the adoption of PKM tools and techniques, so we see a rise in the overall amount of content available for curation. The cycle begins again.

Summing up

Curation comes in many forms, even within a small niche like learning and development. We can use it at an organizational level to help inspire our employees and our customers, or to help us design and deliver more formal learning experiences using a wide range of content. We can use curation at a personal level too: to help us develop our understanding in a formal learning process and to help us demonstrate our knowledge and insight from our day-to-day work. Truth be told, it is early days for curation in our world. Whilst the practices are old, the technologies are often new, and as we get to grips with the possibilities that new technologies bring us, it’s easy to see that more opportunities for storing, transforming and sharing resources will become apparent. Curation, as a skill, is on the verge of becoming a key differentiator for employees; knowledge workers could well be expected to bring their curated insights with them to their next job role. People are making names for themselves as industry experts by the ways in which they curate other people’s work. Telling a story like those I find so interesting at the Natural History Museum is most certainly a skill, but increasingly, it is becoming easier for each of us to become the curator.

Extracts of this article are included in the forthcoming ASTD Handbook (2nd edition); “Curation of Content”.

Want to know more about digital curation? Sign up for our mini-MOOC “How to be an effective digital curator” here.


FutureLearn MOOCs; the future of learning?

This week saw the launch of FutureLearn’s first course, The Power of Brands.  FutureLearn is the UK’s answer to Coursera, EdX and the like.  It is good to finally see the UK coming to the party with MOOCs in a coherent manner, something that has been sorely lacking over the last 12 months.  I’d signed up a while ago, so I was keen to get stuck in…

The experience was very straightforward; nicely presented content, mostly video, some text, completely inoffensive in usability terms.  Discussion points were occasionally raised but more often than not the comment stream alongside a video was left up to the user to interpret.  There’s no voting system on the comments, so it’s easy to imagine that the sheer weight of commentary will become difficult to sift through quickly.

After I marked each piece of content as ‘complete’ I was invited to try the end-of-week quiz. A few multiple-choice questions of very dubious quality (did Don Draper make an advert for VW?) and I got 12/15. I hadn’t actually spent more than a minute on any of the content pages, so the rigour will need to increase as the weeks go by if this is ever to be taken as a vehicle for higher learning.

I’m not sure if anything is going on under the hood in terms of data or personalization, but it doesn’t look like it so far. What we have here is a nicely presented set of videos, text and multiple-choice questions. It is not revolutionary, save for the fact that they are giving it away for free. I’ve searched for a legitimate reason for a university to do this beyond marketing and CSR, but honestly, I can’t find one. The genius has been in persuading some of the UK’s top institutions that they need to give stuff away for free with no promise of returns.

MOOCs in this form aren’t really MOOCs at all in the traditional sense.  This is very much a bastardised way of interpreting the original theory.  It couldn’t be more linear for the learner if it tried.  I don’t blame them; Connectivist MOOCs (as the theory originators term their own) are an exercise in chaos; connecting folk with a common interest around a core idea and exploring it organically over time.  The FutureLearn way of doing things is the antithesis of this idea, an xMOOC.  It is linear, highly structured into sequential weeks (which you can’t skip ahead on) and very much the same as the last 15 years of online learning.  The discourse is an after-thought.  The educators rely on the content to impart learning to passive students.  FutureLearn looks like it knows this; if it set out to deliver highly usable and highly accessible learning, it has.  If it set out to provide a revolution in online teaching, it has not.

If FutureLearn is not a revolution in pedagogic terms, is it revolutionary in other terms? A key feature of cMOOCs for me is the chance to create a learning experience very quickly, relying on the network as opposed to content to impart learning. But this course has taken FutureLearn a very long time to deliver – nearly a year. Whilst it is easy to see that much of that time was probably taken up with politics, the cost of delivering this sort of learning must be unsustainable. This is a missed opportunity. The future of learning online is not in the creation of perfect content, but in the curation of learning experiences. Much of the content is only a minute or so in length; this is right. My own experience suggests that the shorter the material, the better. But without a change in the pedagogy, this just leaves us with a series of very short videos. How will the course move beyond the facile?

The UK has made horrific missteps before.  As far as I can tell, the key difference now is timing.  There was a lack of a sustainable business model then, much as there is now.

In teaching terms, I want to see much more being made of the discussion areas.  I want to see peer contributed content alongside Subject Matter Experts.  I want to know that data is being collected throughout to help personalize the experience.  I want the platform to be xAPI compliant and I want Badges.

Commercially, I probably want to pay for all this. I’d much rather create a sustainable, affordable system for the long term than give it all away because Coursera does. I’m not averse to companies putting content up in this area; I’d much rather learn programming from Google than from any university in the UK. I’d also really like to see a shift to more practical, even vocational courses.

Of course, I’ve been quick to judge; I’m sure a huge amount of effort has gone into getting us this far. I like the fact that the platform is highly usable. I like the look of some of the courses coming; MOOCs have a real opportunity to fill niches that are hard to reach with other methodologies. I like the fact that some of the course tutors are outstanding individuals. We needed this in the UK, but it’s the first step on a very tall ladder. Go judge for yourself.

Shameless self promotion: I’ll be speaking at DevLearn next week on the subject of MOOCs with Professor Simon Croom. Come join in the conversation!



The trouble with owning your own learning data


In the next couple of weeks we’ll release our beta version of Learning Locker. With this product we’re going to give people an opportunity to take ownership of the data that is created by them when they interact with systems and people. Previously this data has always been locked away in an organisation’s Learning Management System, or is just lost to the ether. That perhaps wasn’t such a big deal when SCORM was the only standard in town; tracking completions and quiz scores wasn’t invading your privacy in a big way.

But two big changes have collided to change all that; xAPI (Tin Can) and the recognition of less formal learning. The amount of data we create about ourselves and our activities is shooting through the roof. Viewed an article? We’ll track that. Made a comment? We’ve got a record. Liked a post? We like that too!
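Each of those tracked events becomes an xAPI statement: an actor–verb–object triple serialised as JSON. A minimal sketch in Python of a ‘viewed an article’ statement; the helper name, email address and article URLs are invented for illustration.

```python
import json

def viewed_statement(email, name, article_url, article_title):
    """Build a minimal xAPI 'viewed' statement: actor, verb, object."""
    return {
        "actor": {"mbox": f"mailto:{email}", "name": name},
        "verb": {
            "id": "http://id.tincanapi.com/verb/viewed",
            "display": {"en-US": "viewed"},
        },
        "object": {
            "id": article_url,
            "objectType": "Activity",
            "definition": {"name": {"en-US": article_title}},
        },
    }

stmt = viewed_statement(
    "learner@example.com", "A. Learner",
    "http://example.com/articles/why-xapi", "Why do we need xAPI?",
)
print(json.dumps(stmt, indent=2))
```

Swap the verb and object and the same three-part shape covers the comment and the like too; that uniformity is exactly what makes the data so easy to accumulate.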

In the age of Snowden and PRISM, this starts to become somewhat alarming. Not having anything to hide is a naïve argument. Taken out of context, data can be used to construe all sorts of arguments and relationships. What’s more, personal data is a very tradable commodity. When Facebook’s value is tied to its marketing revenues, and its marketing revenues are tied to the quality of the data it holds on users, then it’s not really much of a leap to realise that your data directly equates to a dollar value. By giving away your personal data you are literally giving away money to other people. If you value the service you get in return then there is no problem. But we shouldn’t kid ourselves into thinking that Facebook and Twitter are free.

Whilst it is important that people start to realise the value of the data they create, it is even more important that people start to take control of their digital footprint. We all need to take steps towards owning our own data instead of entrusting it to other people and for-profit organisations. I’m very far from being the first person to suggest this, and various movements exist to propagate this message. But very few of these movements recognise a simple fact: owning your own data is boring. It is dull as ditchwater. Like watching paint dry.

Dave Tosh and I started thinking about the Learning Locker as a way for learners to take ownership of their learning data. This wasn’t so much powered by a fear of big brother, more by the notion that your data is valuable. Sure, you should be able to trust your data to your organization whilst you work for them, but they are going to get careless when you leave. Best-case scenario, it gets deleted. But then, what was all that effort about? As we create increasing amounts of learning data (be it formal or informal in nature), you should get to own it. This is your permanent record of working life and it is valuable. What’s more, the holy grail of truly personalised work and learning experiences depends on this record being accessible. Systems that you trust should be able to personalise your interactions based on your previous experiences. That’s not possible if the data isn’t yours.

So data ownership isn’t just desirable, it’s a necessary step for the personalisation of learning. That’s why we built Learning Locker. It didn’t take Dave long, a couple of weeks, to pull together a working prototype. And that’s when we realised. Owning your own data is dull. A never-ending stream of mildly interesting anecdotes and out-of-context strings. Sure you can extrapolate some nice stats and quantified-self type info (turns out I read mostly on a Tuesday, who knew?) but even this stuff isn’t interesting beyond the first few minutes. And that’s a real problem, because if owning your data isn’t a compelling and rewarding experience, it’s going to be very hard to persuade people of its necessity.
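The ‘I read mostly on a Tuesday’ kind of stat falls out of a trivial aggregation over statement timestamps. A sketch, assuming each statement carries a verb display and an ISO 8601 timestamp; the sample data is invented.

```python
from collections import Counter
from datetime import datetime

# Invented sample statements, shaped roughly as an LRS query might return them.
statements = [
    {"verb": {"display": {"en-US": "read"}}, "timestamp": "2013-10-01T08:30:00Z"},
    {"verb": {"display": {"en-US": "read"}}, "timestamp": "2013-10-04T12:00:00Z"},
    {"verb": {"display": {"en-US": "read"}}, "timestamp": "2013-10-08T19:05:00Z"},
]

def reads_by_weekday(stmts):
    """Count 'read' statements per weekday name."""
    days = Counter()
    for s in stmts:
        if s["verb"]["display"]["en-US"] != "read":
            continue
        when = datetime.strptime(s["timestamp"], "%Y-%m-%dT%H:%M:%SZ")
        days[when.strftime("%A")] += 1
    return days

print(reads_by_weekday(statements))  # Counter({'Tuesday': 2, 'Friday': 1})
```

Which is exactly the problem: the code is a dozen lines, the chart takes seconds to draw, and the insight wears off just as fast.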

We’ve spent the last month thinking and iterating on this idea. I’m a big fan of Self-Determination Theory, so I’ve been thinking in terms of Competence, Autonomy and Relatedness. How can we let you use your data to show your improving skill? How can we do this in a framework that is open enough to provide free choice, but scaffolded enough that you actually know how to begin? And how can we let you share your data, on your terms, with other people and systems?

This is what Learning Locker has evolved to become. It will be a long journey to make it a part of people’s everyday lives. But we’re really hoping we can take a small step towards making data ownership an intrinsically valuable experience.
