Understanding Tin Can API

Tin Can has been getting lots of people in a twist lately. Early adopters are tweeting and blogging about it and anyone who’s anyone seems to be dropping it into conversations to prove they’re at the cutting edge of learning technologies. It is certainly doing the rounds as the Next Big Thing. But ask anyone, “What is Tin Can? Explain it to me” and more often than not you’ll just get a shrug of the shoulders and a quizzical look.

This is because Tin Can API is pretty confusing to the newcomer. I can vouch for that because I am a newcomer to it myself. This blog post is my collected notes and thoughts from one day spent learning about Tin Can API. It has become clear to me over recent weeks that most people understand Tin Can API to be the next version of the SCORM standard, but few realise that it is still only at the DRAFT stage. There is a high level of 'early adopter' activity, with technology companies implementing the draft standard, but there is also a high level of vendor hype, with companies like Articulate touting Articulate Online as a "Tin Can API-supported learning management system". The hype makes it all sound more real than it is, and the race to innovate seems to have taken the standards definition squad by surprise. While they are still working on the final revisions to the draft specification, the vendor hype is leading e-learning practitioners to eagerly seek out, via press releases and marketing material, products that just aren't ready yet.

So what exactly IS Tin Can API? 

We have to start with SCORM. The SCORM standard is all about tracking the status of big, chunky e-learning modules, with the e-learning module and the learner record usually residing in a single LMS or Learning Management System. Tin Can API, however, rightly recognises that most learning happens away from the LMS. So the focus has moved away from e-learning modules towards learning activities, be these offline or online, tutor-led or collaborative, real-world or virtual. It doesn't matter where the activity takes place; what matters is that some remote system with knowledge of that activity can send a simple statement to a central Learner Record Store (LRS) containing some very basic details of what the learner did. That's why it's called Tin Can API – the API stands for Application Programming Interface. APIs are used everywhere in IT – they handily provide a common language to allow unrelated systems to talk to each other. For example, a library system could use Tin Can API to send a statement to an LRS to say that a learner borrowed a book or a journal. The statement itself is a very simple one in the form 'noun – verb – object', for example 'Mark borrowed Book X' – it's as simple as that.
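Under the hood, a statement is just a small piece of JSON. Here is a minimal sketch of 'Mark borrowed Book X' – field names follow the released spec (the 0.95 draft discussed here differed in detail, e.g. earlier drafts used plain string verbs), and all the URIs and the email address are made-up examples:

```python
import json

# A hedged sketch of a Tin Can statement for "Mark borrowed Book X".
# All identifiers below are hypothetical examples, not real verb/activity URIs.
statement = {
    "actor": {
        "objectType": "Agent",
        "name": "Mark",
        "mbox": "mailto:mark@example.com",          # example identifier
    },
    "verb": {
        "id": "http://example.com/verbs/borrowed",  # hypothetical verb URI
        "display": {"en-GB": "borrowed"},
    },
    "object": {
        "objectType": "Activity",
        "id": "http://example.com/library/book-x",  # hypothetical activity URI
        "definition": {"name": {"en-GB": "Book X"}},
    },
}

# Serialise it ready for delivery to an LRS
print(json.dumps(statement, indent=2))
```

That really is the whole idea: actor, verb, object, serialised as JSON and handed to the LRS.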

So what can Tin Can API be used for?

Think about the possibilities. You could create Tin Can API services for an almost endless list of tools such as Twitter, Facebook, Google+, event management systems, learning management systems, Slideshare, YouTube, Yammer, library management systems, blogs, social bookmarking tools, all sorts of learning and productivity tools. These would all send small activity records to the LRS such as:

  • I watched/uploaded/commented on Video A on YouTube
  • I borrowed Book B from the library
  • I attended Conference C
  • I posted Status D to Facebook
  • I tweeted Tweet E to Twitter
  • I scored 50% in an online quiz
  • I completed e-learning F in Moodle
  • I bookmarked Website G on Diigo
  • I joined Group H on Yammer
  • And so on…
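Each of those tools would deliver its record by POSTing the JSON statement to the LRS's statements endpoint over HTTP. A minimal sketch, assuming a hypothetical LRS URL and credentials (the `/statements` resource name is from the spec; the exact headers varied between draft versions):

```python
import json
import urllib.request

def build_statement_request(lrs_url, auth_token, statement):
    """Build (but don't send) an HTTP POST delivering one statement to an LRS."""
    body = json.dumps(statement).encode("utf-8")
    return urllib.request.Request(
        url=lrs_url + "/statements",            # statements resource per the spec
        data=body,
        method="POST",
        headers={
            "Content-Type": "application/json",
            "Authorization": auth_token,         # LRSs typically require auth
            "X-Experience-API-Version": "0.95",  # version header; shape varied by draft
        },
    )

# Hypothetical LRS, credentials and URIs, purely for illustration
req = build_statement_request(
    "https://lrs.example.com",
    "Basic dXNlcjpwYXNz",
    {
        "actor": {"mbox": "mailto:learner@example.com"},
        "verb": {"id": "http://example.com/verbs/attended"},
        "object": {"id": "http://example.com/conference-c"},
    },
)
# urllib.request.urlopen(req) would actually deliver it to the LRS
```

The point is how little ceremony is involved: any system that can make an HTTP request can report a learning activity.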

There will be plug-ins and apps galore to help learners record their learning – Rustici have already released an ‘I learned this’ browser bookmarklet and a book barcode scanner that both send Tin Can API statements to an LRS. Well, THEIR commercial LRS. But where they have started, many others will follow, hopefully allowing learners and IT administrators to point the statements towards an LRS of their choosing.

It sounds great, so who is behind Tin Can API?

In simple terms, the US Department of Defense is behind Tin Can API. They run the Advanced Distributed Learning (ADL) initiative, which aims to “standardize and modernize training and education management and delivery”. As guardians of the SCORM standard, ADL have had a huge impact on the e-learning industry, far beyond the confines of the DoD. ADL’s current focus is on personalised, just-in-time learning and part of their vision includes “greater communication between systems and content types and tracking learner activity including non-linear learning experiences and social media interactions.” To this end, the current SCORM standard simply doesn’t cut the mustard any more.

Enter ADL's new initiative: the 'Training and Learning Architecture'. This consists of a number of projects, including the Experience API (Tin Can API), Project Tin Can and the Learner Record Store (LRS). Forget about Project Tin Can; it has been superseded by the Experience API (Tin Can API) project. The API and the LRS are the two pieces we have already met. Bizarrely, the API project is very confusingly referred to by two names – Experience API and Tin Can API – heaven knows why, but let's just say no technology standards body has ever got an award for its marketing prowess.

So ADL are behind the draft standard, but they didn't write it. They went to the market and offered funding to define the new standard, issuing a BAA (Broad Agency Announcement), which is basically an invitation to tender for the work. The funding was won by a company called Rustici Software, who spent a year writing the draft specification. This was then delivered to ADL, who put it out to the open community for review and revision. And that's where it is now, at version 0.95.

Rustici remain closely involved in the process and are providing support to early adopters, particularly authoring tool and LMS vendors who are implementing the draft specification. I spoke to them last week myself and they were exceedingly helpful. They are also providing their own commercial LRS product and Tin Can API plug-ins, and are therefore investing in marketing effort around these, which has certainly contributed to the buzz and to the perception among my colleagues that Tin Can API was market ready. Rustici's overview of Tin Can API – they purchased http://tincanapi.com/ to spread the word – is miles better than the ADL site, so I would recommend it as the best place to learn about Tin Can API in depth, in layman's terms. Rustici's main commercial website is actually http://scorm.com/ – these folks really love their SCORM! But it's important to make the distinction: Rustici are a commercial venture involved in SCORM technologies who received funding to draft the Tin Can API standard – all completely fine and above board – but it is ADL who are managing the definition of the standard. A few people I spoke to were confused on this point, so it's worth clarifying.

What is the current state of Tin Can API?

So, as of October 2012, we are currently in the specification review and revision stage.

  • Tin Can API is a draft specification, currently at v0.95.
  • ADL should be publishing v0.98 by the end of 2012.
  • ADL should be publishing the final v1.0 specification by the end of Q1 2013.

A key reason we are hearing so much about the API already is that some vendors are already implementing the Draft Specification. Foolhardy? I don’t think so. These people are betting that Tin Can API is going to be a game changer in learning technologies, and I think they are right. These vendors are very firmly in the ‘early adopter’ category and are undertaking this work on the basis that there will be some movement in the Specification between v0.95 and v1. However, as significant efforts were already made to pull any major changes into v0.95, the hope is that any remaining movement will only be minor refinements.

Tin Can API: "tracking the bejesus out of everything!"

So that’s the lowdown on Tin Can API as far as I understand it. I invite comment and corrections and will amend my lowdown accordingly so please enlighten me if I missed anything. But before blindly accepting all this as the future of learning technologies, let’s spare a thought for @craigtaylor74 who tweets that he is “Hearing more & more about tracking the bejesus out of everything, wrapped up in the ‘Tin Can’ guise!”  The man has a point. You have to ask why we need to track all this data and what use does it serve. ADL are backing it because they think it will lead to a future of personalised, adaptive, just-in-time learning. Others will see a definite big brother angle to all this tracking.

There’s a possibility we are obsessing over the ability to track everything we learn, when what is more important is determining our learning NEEDS. I recently saw a video of work.com, the social performance management tool from Salesforce.com. It was pretty awesome, and totally focused on tracking employee GOALS and rewarding them for meeting those goals. Learning activities were not the focus, there was no ‘Jonny did this’ or ‘Mary did that’. It was all about ‘Mary met her goal’. A manager needs to know that an employee has met their goals and have visibility of failures so that learning needs can be established and met, and THAT’S where the focus on learning activities comes in. It’s not much use a manager knowing about every minor learning activity their team took if they don’t know how they are performing.

There’s a danger of being led by the technology here and that by focusing relentlessly on methods to track learning activities we will go down the wrong path as workplace learning practitioners. We need to make sure we are following best practice, user-centred design principles to stay on the right track when designing these new systems and architectures. When I first saw work.com it was clear that these people totally understood their users and hence their focus on goals not activities. So that’s my next big challenge. We are starting work at Epic already on implementing Tin Can API for GoMoLearning and maybe for Moodle too. So my challenge is to ensure we do this in the right way, led by the needs of our audience rather than just being wowed by the technology.


9 thoughts on "Understanding Tin Can API"

  1. Great blog post Mark. You have clearly encapsulated where the Tin Can API is at right now. I found your list of what Tin Can API can be used for especially helpful. You are right to say that Tin Can can be confusing to newcomers, but I think after all this time that SCORM is still pretty confusing to newcomers, possibly more so. When I was first learning SCORM, I found some great overviews of what SCORM is, but very few actual guides on how to 'do' SCORM. That's one of the reasons why I've tried to include some basic guides on http://www.tincanapi.co.uk.

    I think you're a little harsh on Rustici in emphasising that the bookmarklet and book scanner app send to their LRS. The default LRS is set to the Rustici public LRS, which is free to use and open to the public (everybody can see the data). This can easily be customised to send statements to any other LRS, including but not exclusively Rustici's SCORM Cloud. You are right to say that their support for early adopters is exceedingly helpful; I have experienced this myself.

    Your overview of the history of Tin Can API is spot on. You are right to say that the two-names thing is confusing. My recent blog post ( http://www.tincanapi.co.uk/wiki/Blogs:Tin_Can_versus_Experience_API ) covers this very issue and puts forward some options for how the two terms might co-exist in a slightly clearer manner. The definitions of Tin Can API and Experience API in the 0.95 spec represent one viewpoint (that Tin Can is the old name and Experience API is the new, correct name) but this is certainly not the only viewpoint. Responses to my blog from Aaron E. Silvers of ADL and Mike Rustici of Rustici Software are certainly worth a read too: http://www.tincanapi.co.uk/wiki/Talk:Blogs:Tin_Can_versus_Experience_API

    It's probably also worth noting that whilst ADL are managing the development of the Tin Can API specification now, the plan for the future is for a new standards body to be created specifically to look after the Tin Can API and possibly other standards within the Training and Learning Architecture.

    Your point about the need to track goals and the meeting of those goals is important; this is something I pushed hard to have included in the 0.95 spec. In the world of FE and HE, where I am at the moment, we set learners targets and then assess whether or not they have met those learning targets.

    Have a read of section 4.1.4.3 of the 0.95 spec (http://tincanapi.wikispaces.com/file/view/Tin+Can+API+v0.95.pdf/365362588/Tin%20Can%20API%20v0.95.pdf ), in particular the second paragraph of the section headed “sub-statements”. This may not seem like a particularly large and important part of the specification, but this is where Tin Can allows for the tracking of aims, goals, targets and aspirations. There is still some work to be done in prototyping this functionality and developing a standard way of implementing this part of the spec, and I expect refinements to this section in the next version, but the functionality is there right now to be used. I intend to develop a prototype and write a guide on this “soon”, though as I do Tin Can as a hobby outside of my day job I never know exactly when I’ll find time!
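    (To make the sub-statement idea concrete, here is a hedged sketch of a goal-setting statement. Field names follow later spec versions rather than the 0.95 draft, and every URI and address is a made-up example – treat it as illustrative only.)

```python
import json

# Hedged sketch: a tutor assigns a learner a goal, expressed as an outer
# statement whose object is a SubStatement describing the intended activity.
# All names and URIs below are hypothetical.
goal_statement = {
    "actor": {"name": "Tutor", "mbox": "mailto:tutor@example.com"},
    "verb": {"id": "http://example.com/verbs/assigned",
             "display": {"en-GB": "assigned"}},
    "object": {
        "objectType": "SubStatement",
        # the inner statement describes what the learner is *meant* to do
        "actor": {"name": "Learner", "mbox": "mailto:learner@example.com"},
        "verb": {"id": "http://example.com/verbs/completed",
                 "display": {"en-GB": "will complete"}},
        "object": {"id": "http://example.com/courses/unit-1"},
    },
}

# The goal statement serialises like any other statement
print(json.dumps(goal_statement, indent=2))
```

    Later, an ordinary 'Learner completed Unit 1' statement can be matched against the goal to assess whether the target was met.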

    Your point about not being led by the technology is another great reason to get involved in Tin Can sooner rather than later. As the specification is still under development, there is still time for future users of Tin Can to get their use cases heard and considered right now. I’d recommend that anybody with an interest in tracking learning start at least prototyping with Tin Can, finding out the issues and then raising them on ADL’s Google group sites or weekly webinar. (details here: http://www.adlnet.gov/capabilities/tla/experience-api )

    Thanks for writing such a great overview of Tin Can API. I’m sure it will be extremely helpful to other newcomers as they start to get to grips with what Tin Can means for them.

    @mrandrewdownes

  2. I really like your take on goals and understanding performance/attainment, and while it is not explicitly in the spec as Andrew has pointed out, it can be done!

    Look forward to multiple LRSs providing different layers of value beyond the basic requirements. FYI, just added to our feature list ;)

    Thanks for posting!
    Ali

  3. I’m in the process of learning about this and, so far, have many more questions than answers. Looking at the Rustici information and examples leads me to ask “so what?” Tin Can makes it easy to capture activities, but how many of these activities are worth capturing? “I read the xxx webpage… I tweeted… etc.” Suddenly we are creating a huge repository of data that is of questionable value. Like a lot of social media, it’s of limited value at best. Just because we can capture all this information doesn’t mean we should!
    For years managers in HRD have focused on collecting information about activities… how many classes did we offer, how many seats did we fill, etc. In terms of evaluation we stick with level one, maybe level two. So far Tin Can strikes me as another way of collecting information about more activities – information that has little practical value, at least at this point.
    The author says we need to focus on learner NEEDS. My focus is on learning OUTCOMES. Does the learner perform better because of the learning experience, and what aspects of the learning experience(s) contributed to the improved performance? I’m really not all that interested in collecting a bunch of data unless it provides answers to these questions.

  4. Hi Mark,

    I really appreciated your article about Tin Can API. Thank you!

    I was recently reading the SCORM docs and was hoping that things had advanced since 2004. Of course DoD has been quietly working away in the background. ADL is exactly what I’ve been looking for – thanks for the lead.

    A bit of background: In 2003, I worked for The Learning Group as a front end eLearning interactive developer. We were all frantically preparing for the ride of SCORM :) I was front end interactive so it was easy for me, just send the tracking cookies over. My server side buddies had their work cut out for them, as they were working on a compliance dashboard management system called LearningPath. It had to be SCORM-compliance ready. Fun fun :)

    I finished up at The Learning Group at the end of that year and since then have worked on different parts of the digital ecosystem, content management systems and mainly in mobile development for education and publishing in the last few years.

    When doing mobile dev, the most important other person I need to work with is an API designer. If the API design is good – then we can scale production around that API. So, with systems architecture – we have to have an API first design.

    The common example is how Twitter focuses on API first design, so that numerous client apps can then be built around it, bespoke to whatever platform and organisation is required. This kind of API first design approach is what I am looking forward to seeing in eLearning ecosystems.

    For my current season of work, I have to step back from hands-on development and switch back to eLearning architecture and advisory for education mobility. So when I reviewed the SCORM standard – it is, as you say, a standard that “doesn’t cut the mustard any more”. It is very helpful, but it is a technology frozen in 2004, as if eLearning had stayed exactly the same since I left The Learning Group. Current systems need more complex learning objects.

    One of my favorite highlights over the past decade is the adoption of JSON.
    http://www.json.org

    I enjoy waving goodbye to SOAP and XMLRPC whenever possible.

    To the Titans in the Moodle community – when the JSON API has better support, the mobile dev community will rejoice and we will see a lot more Moodle apps, because the JSON API will allow pivoting legacy LMS systems into the app space.

    My previous R&D for Moodle on iPad last year still needed to use the XMLRPC standard. Mobile devs love JSON.

    http://threethirds.com.au/moodle-ipad-app

    XMLRPC is still nice. But JSON – there’s the love!

    API first design has to be a priority for future eLearning systems if we are going to have a helpful separation of concerns, loosely coupled modularised systems, and a separation of content from presentation that allows scalable, device-agnostic production around the content accessed via APIs.

    Basically all the patterns that our CS lecturers said that we should be doing but on the ecosystem level. I love what ADL is doing with its Training and Learning Architecture. It is moving us in the direction we need to go.

    I think you are right when you say that descriptive objects are not the be-all and end-all. We have to track descriptions against goal completion. While outside of eLearning, I worked with a team creating a campaign analytics system focused on email campaigns but also tracking across the whole digital relationship.

    The design patterns in email campaign analytics and the technology behind it is easily pivoted into learning analytics. For donor campaign management to feed business intelligence decisions, we need to track activity, but goal completion and progress too.

    In a group of 100,000 emails, we can easily track the segment of high-value donors (e.g. 5,000 of 100,000), progress (e.g. 4 out of 10 in staged progress), and also loyalty and engagement (e.g. they keep coming back, spend a lot of time, and share). You can see that the technology and pattern for education is the same – it’s just a different genre.

    Learning analytics enables educators not just to see those that are scoring high, but also where people are stuck in progress but also whether there is high engagement with a particular topic and whether they are talking about it with their peers.

    The main advantage that email analytics developers have is that they already have the systems (Node.js architecture, API first designs and analytics) in place, plus newer dashboard technologies like HighchartsJS, to pivot into this space. Even the Flash dashboards have been slower to make the move across. SAP were strong with Flash dashboards, but they needed HTML5-friendly solutions for iPads sooner. Highcharts has been a winner because they could be more agile and pivot faster.

    http://evoyrrelate.com

    is the analytics system that

    http://brownbox.net.au

    created. I’d be watching API first designers, SaaS developers, Scalable architecture designers and Analytics developers like Brown Box to pivot into the Learning Analytics space.

    There’s an upcoming conference looking at Big data and learning analytics in Australia. The last conference was brilliant on Unbundling Education services.

    Details should be posted here…

    29 October 2013
    Big Data & Analytics & Future of Higher Ed, Training & Work
    Integrating Australia with Asia conference # 7

    http://www.unbundlingeducationservices.com/

    If you need more info Pradeep Khanna organises the event.
    http://au.linkedin.com/pub/pradeep-khanna/3/574/b44

    I’m looking forward to seeing more of the eLearning community there.

    Thanks again,
    Anthony

