There has been a lot in the news this past year about social media bias and echo chambers, which started gaining prominence when algorithms began meddling in your news feed. The major web companies collect a huge amount of data about you, building a detailed profile from demographic data, likes, purchases and other data that has been captured or bought. As you ‘like’ posts and pages, the algorithm delivers similar content back to you. Your friends like certain things, or ‘people like you’ like certain things, and the algorithm delivers more of that content to you too. You search for and purchase certain things, and you get content related to that. Maybe you even give away valuable data via an innocuous-looking Facebook quiz, which is then sold to the highest bidder and fed into yet more algorithms to target you with stuff you might ‘like’.
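The feedback loop described above can be sketched in a few lines. This is purely illustrative (real feed algorithms are vastly more complex), and the posts and topic tags are invented: liking a post boosts its topics in your profile, and the feed is then ranked by overlap with that profile, so similar content keeps rising to the top.

```python
from collections import Counter

# Invented posts, each tagged with topics (illustrative only).
POSTS = {
    "p1": {"politics", "economy"},
    "p2": {"sport", "football"},
    "p3": {"politics", "immigration"},
    "p4": {"cookery", "travel"},
}

def like(profile: Counter, post_id: str) -> None:
    """Record a like: each of the post's topics gains weight in the profile."""
    profile.update(POSTS[post_id])

def ranked_feed(profile: Counter) -> list:
    """Rank posts by how strongly their topics overlap with the profile."""
    score = lambda post_id: sum(profile[t] for t in POSTS[post_id])
    return sorted(POSTS, key=score, reverse=True)

profile = Counter()
like(profile, "p1")          # one political like...
print(ranked_feed(profile))  # ...and political posts rise to the top
```

One like is enough to reorder the whole feed in favour of that topic; every further like on the promoted content reinforces the weighting, which is the echo chamber in miniature.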
The resulting and widely discussed ‘echo chamber’ means people see content that mostly panders to their existing world view, whatever that may be. With increasing numbers of people now getting their news through social media alone, they are less challenged and less exposed to other opinions and events, and their views become ever more polarised and entrenched.
Add some human bias, as at Facebook, and you get political bias in the mix too; take humans away completely and the machine turns into a neo-Nazi. There are many problems with algorithm-driven web content, we are only just beginning to understand them, and the major web providers are still only prototyping solutions. It’s early days indeed.
Jumping over to the learning and development world, there is much talk of adaptive learning, personalised learning, micro learning and so on, with algorithms at the heart of this new world. The marketeers would have you believe this is the golden new dawn of learning, and start-ups like Knewton, Wildfire, Dreambox and Axonify are emerging, all heavy on marketing hype about algorithms, AI and machine learning, but low on evidence of results. It’s even earlier days for algorithms delivering learning content than it is for web content. This stuff really is largely untested.
If we take what has happened with web content and apply it to online learning, we arrive at a possible outcome where the machine learns which learning content you like and your preferred learning styles, and makes suggestions based on that. It also learns what ‘people like you’ like, perhaps others in your demographic segment, your profession or your department, and makes suggestions based on that too. Human bias enters via HR controllers pushing content they believe you should consume.
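The ‘people like you’ step is essentially user-based collaborative filtering. A minimal sketch, with all names, courses and ratings invented: find the colleague whose past ratings most resemble yours, then suggest what they liked and you haven’t taken. Note that it can only ever recommend from within your neighbours’ experience, which is exactly how the narrowing happens.

```python
# Hypothetical learner-course ratings (all invented for illustration).
RATINGS = {
    "alice": {"excel_course": 5, "python_course": 4},
    "bob":   {"excel_course": 5, "python_course": 5, "sql_course": 4},
    "carol": {"leadership_course": 5},
}

def similarity(a: dict, b: dict) -> int:
    """Crude similarity: count of courses both users rated highly (>= 4)."""
    return sum(1 for item in a if item in b and a[item] >= 4 and b[item] >= 4)

def recommend(user: str) -> list:
    """Suggest courses liked by the most similar other user."""
    me = RATINGS[user]
    others = [u for u in RATINGS if u != user]
    nearest = max(others, key=lambda u: similarity(me, RATINGS[u]))
    # Recommend what the nearest neighbour liked but this user hasn't taken.
    return [c for c, r in RATINGS[nearest].items() if r >= 4 and c not in me]

print(recommend("alice"))  # bob is most similar to alice, so: ['sql_course']
```

Alice will never be shown carol’s leadership course, however useful it might be to her, because nobody ‘like her’ has taken it.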
With the next generation of learning technology constantly delivering content based on data collected from your past and your network, the result could well be a narrowing of learning experiences and a reduced breadth of knowledge: an echo chamber of learning content, much as happened with web content. Just as this is not good for running a democracy, it cannot be good for running a healthy organisation, which relies on staff bringing a wealth of knowledge and experience to the myriad problems they need to solve. We need people to look far and wide in their learning.
Ultimately these new learning products are trying to use data about you and others like you to predict what learning will be useful to you in the future. You might not have expected old-school management theorist Peter Drucker to lend much to the AI world, but he made a nice comment that sums up my thoughts on this:
“Trying to predict the future is like trying to drive down a country road at night with no lights while looking out the back window.”
Or, to put it another way, we have no idea where we’re going and it’s going to be a bloody bumpy ride!
With thanks to my LEO colleague @rahaddon whose conversation inspired this post!