I spent two days last week at the Learning Technologies 2017 exhibition, working on the LEO stand (below). This annual event is split over two floors, with a paid conference upstairs and free exhibition downstairs. The stand was really busy for both days and the whole team came away absolutely exhausted, but I did manage to wander around the exhibition looking to see what the trends were this year and seeking out interesting new products.
There has been a lot in the news this past year about social media bias and echo chambers, which started gaining prominence when algorithms started meddling in your news feed. The major web companies collect a huge amount of data about you, building a detailed profile from demographic data, likes, purchases and other data that has been captured or bought in. As you ‘like’ posts and pages, so the algorithm delivers similar content back to you. Your friends like certain things, or ‘people like you’ like certain things, and the algorithm delivers more of that content to you too. You search for and purchase certain things, and you get delivered content related to that. Maybe you even give away valuable data via an innocuous-looking Facebook quiz, which is then sold to the highest bidder and fed into yet more algorithms to target you with stuff you might ‘like’.
The resulting and widely-discussed ‘echo chamber’ means people see content that mostly just panders to their existing world view, whatever that may be. With increasing numbers of people now consuming news through social media alone, they are less challenged and less exposed to other opinions and events, and their views become ever more polarised and entrenched.
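The ‘people like you’ mechanic can be sketched in a few lines. This is a deliberately toy illustration (the item names and the single-neighbour approach are my own invention, not any platform's actual algorithm): represent each user as a vector of likes, find the most similar other user by cosine similarity, and recommend what they liked that you haven't seen.

```python
from math import sqrt

def cosine(u, v):
    """Cosine similarity between two like-vectors (1 = liked, 0 = not)."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

def recommend(target, others, catalogue):
    """Recommend items liked by the most similar other user."""
    best = max(others, key=lambda o: cosine(target, o))
    # Suggest items the neighbour liked that the target hasn't engaged with yet
    return [item for item, t, b in zip(catalogue, target, best) if b and not t]

# Invented catalogue of four pieces of content
catalogue = ["politics_a", "politics_b", "sport_a", "sport_b"]
alice = [1, 0, 0, 0]             # liked politics_a only
others = [[1, 1, 0, 0],          # a politics fan
          [0, 0, 1, 1]]          # a sport fan

print(recommend(alice, others, catalogue))  # ['politics_b']
```

Note what never happens here: Alice is never shown sport content, because her nearest neighbour is the politics fan. Scale that feedback loop up and you have the echo chamber.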
This post was edited on 09 May 2017 to add some clarity around authenticating with the Strava API.
In 2016 I made a commitment to myself to record every cycle ride I made. As both a leisure cyclist and cycle commuter, I was keen to know how far I rode in a year, how much distance my daily commute accumulated, and how far I covered on my leisure rides. I already recorded my weekend rides in a phone app called Strava, so it was pretty easy to get into the habit of clicking a button on my phone every time I set off on a cycle commute too.
So as we ended 2016, my thoughts turned to what my end of year results would be in Strava. However, as it turned out, the data presented to Strava users (shown below) is quite lightweight. It only provides a total distance cycled for the year, and while you can tag a ride as a ‘commute’, nothing is actually done with this data in the Strava interface: the end of year results are not split between commute and non-commute rides, for example.
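The commute split is easy enough to compute yourself once you have the activity data out of the API. In the Strava v3 API, `GET /athlete/activities` (authenticated with an OAuth access token) returns activities with a `distance` field in metres and a per-activity `commute` flag; the sample data below is invented to match that shape, so no token is needed to try it.

```python
def split_totals(activities):
    """Sum ride distances in km, split by Strava's per-activity 'commute' flag.

    Each activity is a dict shaped like the Strava v3 API's
    GET /athlete/activities response (distance is in metres).
    """
    totals = {"commute": 0.0, "leisure": 0.0}
    for a in activities:
        if a.get("type") != "Ride":          # skip runs, swims, etc.
            continue
        key = "commute" if a.get("commute") else "leisure"
        totals[key] += a["distance"] / 1000.0
    return totals

# Invented sample data shaped like the Strava v3 response
rides = [
    {"type": "Ride", "distance": 8200.0,  "commute": True},
    {"type": "Ride", "distance": 8200.0,  "commute": True},
    {"type": "Ride", "distance": 64500.0, "commute": False},
    {"type": "Run",  "distance": 5000.0,  "commute": False},  # ignored
]
print(split_totals(rides))  # {'commute': 16.4, 'leisure': 64.5}
```

Point the same function at a full year's worth of fetched activities and you get the commute/leisure breakdown that the Strava interface doesn't give you.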
MoodleMoot UK and Ireland 2016 showed yet again that the Moodle ecosystem is in good health, with lots of new community members attending for the first time, plenty of old timers coming back, major institutions reaffirming their faith and Moodle HQ showing how the product itself is adapting to the future with new features and new sectors in its sights.
There was far too much going on for a detailed write-up, but for me personally there were a few clear themes from the event this year:
- Moodle Mobile native app is coming of age
- Moodle ecosystem is as strong as ever
- Major institutions are reaffirming their support for Moodle
- Moodle is strengthening its position as a workplace LMS
As learning analytics continues to rise up the agenda in the corporate learning & development (L&D) sector, one thing is becoming glaringly apparent: we should not expect a one-size-fits-all, off-the-shelf approach to learning analytics. This is a specialist discipline that cannot be bottled up into a single product. Sure, there are products such as Knewton, a Product as a Service platform used to power other people’s tools. There are also LMS bolt-ons like Desire2Learn Insights or Blackboard Analytics, but even these are not sold as off-the-shelf products; for example, the Blackboard team “tailors each solution to your unique institutional profile”. There are simply too many organisational factors at play for an L&D practitioner to implement a learning analytics programme using an off-the-shelf tool.
What a learning analytics platform looks like
An example of one platform (not a commercially available product, but probably the most advanced learning analytics platform I’ve yet seen) is the Open University’s OU Analyse, which they demonstrated at MoodleMoot UK and Ireland recently. The platform is closely geared to the OU’s own Moodle-based VLE and is built to answer their own questions: it analyses demographic and course data from the VLE to predict which students are likely to fail. Tutors log in to the system and use its dashboard tools to decide which learning interventions to recommend to a student in order to get them back on a path to success.
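To make the idea of predictive analytics concrete, here is a minimal sketch of the general approach, not the OU's actual model: fit a logistic regression on engagement features (the feature names and training data below are entirely invented) and read the output as a probability of failure that a tutor could act on.

```python
import math

def train_logistic(xs, ys, lr=0.1, epochs=2000):
    """Fit a tiny logistic-regression model with per-sample gradient descent."""
    w = [0.0] * len(xs[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(xs, ys):
            p = 1 / (1 + math.exp(-(sum(wi * xi for wi, xi in zip(w, x)) + b)))
            err = p - y  # gradient of log-loss w.r.t. the linear output
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

def risk(w, b, x):
    """Predicted probability that this student fails."""
    return 1 / (1 + math.exp(-(sum(wi * xi for wi, xi in zip(w, x)) + b)))

# Invented training data: [weekly VLE logins, assignments submitted] -> 1 = failed
xs = [[0.5, 0], [1, 1], [2, 1], [4, 3], [5, 4], [6, 4]]
ys = [1, 1, 1, 0, 0, 0]
w, b = train_logistic(xs, ys)

at_risk = risk(w, b, [1, 0])   # disengaged student: high predicted risk
on_track = risk(w, b, [5, 4])  # active student: low predicted risk
print(at_risk > 0.5, on_track < 0.5)  # True True
```

A real system like OU Analyse uses far richer features and models, but the shape is the same: historical outcomes in, per-student risk scores out, with a human tutor deciding what to do about them.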
This year was my fourth Moot and it was another cracking event. Dublin is a welcoming and accessible location so it was good to be back here. I attended the two conference days on May 12-13, but the conference was topped and tailed by a workshops day on the 11th and a developer hackfest on the 14th.
First ever workplace learning stream
Of particular interest for me was that for the first time the conference featured a workplace learning stream. Despite Moodle topping multiple surveys of the most widely used workplace LMSes, previous Moots have typically been dominated by the education sector. It’s great to see the focus gradually shift and the conference become more representative of real world Moodle users. It was nice to hear Moodle HQ presenters reinforce that future Moots would be structured along similar lines.
The workplace stream consisted of case studies from:
- Civil Service Learning
- University Hospital Southampton
- Health and Safety Authority
- An Irish law firm
- A US Healthcare company
There was also a good analysis of the business impact of long term support vs yearly upgrades from the conference organiser, Gavin Henrick. The workplace stream was well attended, and I look forward to more of the same in future years!
The xAPI Barcamp at the end of the first day of the Learning Technologies conference attracted around fifty people, eager to talk xAPI over a few free drinks at the local pub! I was one of five invited experts alongside Andrew Downes from Rustici (@mrdownes), Mark Berthelemy from Wyver Solutions (@berthelemy), Ben Betts from Learning Locker (@bbetts) and Jonathan Archibald from Tesello (@jonarchibald). Moving around five tables in turn, each expert began by talking for a few minutes about what they were doing with xAPI, then the table held an open discussion.
I found the event fascinating. Having worked on a few xAPI projects for clients I had some solid work to discuss; however, I personally still have more questions than answers about xAPI, so this event was the perfect forum to pose some of those questions and find out what other practitioners were doing and thinking.
I attended DrupalCampBrighton today for their Business Day, the first day of a three day Drupal extravaganza! LEO were sponsoring the business day, which I was really pleased about: we do a fair amount of Drupal work and it’s great both to give something back to the open source community and to get involved in supporting local events. The event was attended by about 60 people at Brighton Media Centre, with the rest of the weekend focussed on more developer-oriented stuff. Today was all about case studies and keynotes though, a bit more at my level! It was a really great event and I came away enthused and energised for all things open source. A great way to end the week!
Before continuing, thanks to all the organisers for putting this on. It’s no small feat putting together an event like this. On Saturday and Sunday there are three speaker streams plus all-day developer sprints. It’s a hugely impressive setup, supported by a team of 8 staff volunteering their time and 15 local sponsors. Well done to one and all for a superb event!
Using open source to drive change
The day started with Jeffrey “Jam” McGuire from Acquia (@HornCologne), a man with the rather natty job title of Open Source Evangelist and a pretty awesome handlebar moustache to boot. Jam gave a half hour keynote that was the best intro to open source I’ve seen in years, although he was certainly preaching to the converted with this crowd. More interesting was his focus on how open source is a driver for business and government transformation. He recalled a conversation with the UK Cabinet Office in which they noted that before open source was mandated in government procurements, the map of UK government software spend was centred on the area between Reading and London where Microsoft, Oracle, IBM etc have their UK presence. Since the mandate, that map has blossomed out across the entire country, with spending going to SMEs nationwide who can deliver mature and robust open source solutions to government at a fraction of the price. Crucially, that doesn’t only support SMEs but keeps money in the UK instead of going off to some Redmond bank account.
For all the talk of big data being the next big thing in learning technology, few people mention that in workplace learning there just aren’t any examples of big data to speak of. The data collected just isn’t at the same scale. However, big data has led to an explosion in data analysis tools and techniques that learning technologists can use in their work. Throughout 2014 I’ve been dipping into data science MOOCs, learning the basics of R programming, and thinking about how to apply this within learning and development. These are some of my initial thoughts and notes.
Can understanding big data techniques help us to improve learning outcomes and performance?
Big Data as a term started appearing following the success of online services such as Facebook, Google Search and Twitter, which gather data on hundreds of millions of people. Their likes and dislikes, online behaviours, website usage patterns and shopping patterns all have value and can be sold to the highest bidder. Now that users can also register for other online services using their Facebook, Twitter or Google logins, they leave a trail of ‘digital exhaust’ behind them. This data is all collected and analysed on the assumption that it is valuable to someone, somewhere, or at least may be one day. The data gathered by just one service like Facebook amounts to over 500 terabytes per day! This is the scale that big data operates at, and the harvesting of personal data is BIG business. Jaron Lanier is not wrong in suggesting that next time you post a status update, they really should be paying YOU!
Edtech and learning technology entrepreneurs clearly want a slice of this action, hence the buzz. However, even the largest organisations only have relatively small amounts of learning related data. Even an organisation with half a million employees will only have learning related data measured in little old Gigabytes. That’s not big data at all.
However, if there is one big takeaway from the big data world then it is the renewed focus on data analysis and data driven insights. Take a look at any MOOC catalogue to see the popularity of data science courses.
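The kind of analysis I have in mind is modest: the group-and-summarise workflow that the data science MOOCs teach in R translates directly to an LMS export. A small sketch (the record fields and numbers below are invented for illustration):

```python
from collections import defaultdict
from statistics import mean

def summarise(records):
    """Per-department completion rate and mean score from flat enrolment records."""
    by_dept = defaultdict(list)
    for r in records:
        by_dept[r["dept"]].append(r)
    out = {}
    for dept, rows in by_dept.items():
        done = [r for r in rows if r["completed"]]
        out[dept] = {
            "completion": len(done) / len(rows),
            "mean_score": mean(r["score"] for r in done),
        }
    return out

# Invented LMS export: one record per course enrolment
records = [
    {"dept": "Sales",   "completed": True,  "score": 82},
    {"dept": "Sales",   "completed": True,  "score": 74},
    {"dept": "Sales",   "completed": False, "score": None},
    {"dept": "Support", "completed": True,  "score": 91},
    {"dept": "Support", "completed": True,  "score": 88},
]

for dept, stats in sorted(summarise(records).items()):
    print(f"{dept}: completion {stats['completion']:.0%}, mean score {stats['mean_score']:.1f}")
```

Nothing here needs big data tooling; gigabytes of learning records fit comfortably in memory, and the value is in asking better questions of the data we already have.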