Course notes: Designing an inclusive chatbot

Reading Time: 2 minutes

I recently completed the Designing a Feminist Chatbot course on FutureLearn. As a starter course it has a lot going for it, covering areas such as different chatbot uses, how chatbots can become biased, user centred design (UCD) principles, persona creation, conversation design, storyboarding and prototyping. Having studied UCD at degree level and worked on countless software design projects in my career, this could have become a bit boring, but it’s always good to revisit the basics and it’s especially interesting to apply existing knowledge to new problems. Some areas were entirely new to me, such as conversation design and chatbot personality design, and the course drew heavily on the Google Conversation Design Process, which is a great resource in itself and has some useful canvas-style templates for guiding the development of a chatbot.

While the feminist angle was interesting, for me the course was more about developing an inclusive chatbot rather than specifically a feminist one. The taught materials focussed on UCD principles around knowing your users and designing for them, which is decades-old stuff, but then added more modern topics such as understanding how biases can be surfaced and how stereotypes can be reinforced by your design. This all becomes especially important when designing conversations, characters and chatbots. I feel much better armed about inclusive chatbot design now than I did going into this course, so it has served its purpose well and helped me to identify some areas I’d like to dive deeper into.

The final part of the course was a coding section. Given the prevalence of no-code chatbot tools in the market this seemed a bit unnecessary. While doing this course I was also researching no-code and low-code chatbot development tools and have selected FlowXO, so I would have preferred to apply the taught materials in my tool of choice. But I’m happy that, now I have some of the core principles of chatbot design and a development tool to work with, I’m ready to start doing some CPD around using chatbots for learning!

#MakeoverMonday Week 9 dataviz submission

Reading Time: 2 minutes

This week’s dataset was from the European Institute for Gender Equality and shows the proportion of seats held by women in European parliaments and governments. It was another great dataviz learning experience, and on the whole I’m OK with the finished product this week. I know I still have a big journey ahead of me to create some decent work, but it feels like a big improvement and tangible progress on my previous effort. I used the timeline slider for the first time and explored Tableau’s formatting tools in more depth. I also applied some learnings from last time: going straight for portrait mode, using a decent font size, avoiding cognitive overload and keeping the screen elements to a minimum, and leading with the key finding before fleshing out the detail further down.

Learning dataviz with Makeover Monday

Reading Time: 4 minutes

I’ve been meaning to take part in Makeover Monday for some time as a way to improve my data storytelling skills. This weekly learning event has been running for a year or so and I love its simple but effective format: a data visualisation and accompanying dataset is released at the start of each week and you simply read the brief, analyse the dataset and make over the visualisation, submitting your work to the Twitter dataviz community for feedback. I am a big fan of learning by doing, so while a 10-week Coursera course on data visualisation might be interesting, I don’t think it would be nearly as useful as just getting stuck in with some open datasets and trying things out, getting feedback from the dataviz community and iterating your work. Active and social learning at its best!

Look out, here comes Microsoft!

Reading Time: 3 minutes

Towards the end of 2020 our Head of Learning Design asked me to do a short presentation to her design team about where I thought the main disruptions to the Learning & Development technology market would come from in the year ahead. Usually we would look at startups, niche suppliers or parallel industries to identify potential disruptors. But if 2020 taught us anything, it was to think differently and look elsewhere for what could turn markets upside down! 

Why everybody should put users first

Reading Time: 4 minutes

You don’t need to be on the product design team to have an interest in user experience. Putting the user first is part of all of our jobs in software and product development, as important as putting the customer first is to a business or service. User centred design (UCD) is the beating heart of all good product development, so it’s beneficial for everybody in the team to develop a solid understanding of its principles and to put the user first at every opportunity.

Rule-based vs AI adaptive learning

Reading Time: 7 minutes


Adaptive learning uses competence, behavioural and demographic data to tailor a digital learning experience around each learner’s unique needs. There’s a lot of hype around this area which might have you thinking it’s all about Artificial Intelligence (AI), but that’s not the case: there are two types of adaptive learning approaches, AI-based and rule-based. Each will afford you different features, benefits and outcomes.
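To make the rule-based side of the distinction concrete, here is a minimal, entirely hypothetical sketch: the branching logic and names (`Learner`, `next_activity`, the 0.8 mastery threshold) are illustrative assumptions, not taken from any real adaptive learning product. The point is that the rules are fixed and author-defined, rather than learned from data.

```python
# Hypothetical rule-based adaptive step: fixed, author-defined branching.
from dataclasses import dataclass

@dataclass
class Learner:
    quiz_score: float   # 0.0-1.0 on the last assessment
    attempts: int       # attempts at the current topic

def next_activity(learner: Learner) -> str:
    """Pick the next activity from hand-written rules, not a trained model."""
    if learner.quiz_score >= 0.8:
        return "advanced-module"   # mastered: move on
    if learner.attempts >= 2:
        return "tutor-support"     # still struggling: escalate
    return "remedial-module"       # otherwise revise and retry

print(next_activity(Learner(quiz_score=0.9, attempts=1)))  # advanced-module
print(next_activity(Learner(quiz_score=0.5, attempts=3)))  # tutor-support
```

An AI-based approach would replace those hand-written `if` branches with a model whose thresholds and pathways are inferred from learner data.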

Reflections on agile product development

Reading Time: 10 minutes

I recently moved teams and role following a company restructure and merger, which led me to reflect on my last three years. One of the reasons I had taken the role was to gain more experience in agile product development. I’d worked in open source product development for over a decade, and on a number of agile projects, but in order to grow and develop as a software engineer and technology lead, I wanted more direct experience of product development and technology leadership with a small, agile scrum team. 

A vendor view of Learning Technologies 2017

Reading Time: 5 minutes

I spent two days last week at the Learning Technologies 2017 exhibition, working on the LEO stand (below). This annual event is split over two floors, with a paid conference upstairs and free exhibition downstairs. The stand was really busy for both days and the whole team came away absolutely exhausted, but I did manage to wander around the exhibition looking to see what the trends were this year and seeking out interesting new products.

Algorithms and echo chambers in the world of learning

Reading Time: 3 minutes

There has been a lot in the news this past year about social media bias and echo chambers, which started gaining prominence when algorithms started meddling in your news feed. The major web companies collect a huge amount of data about you and in doing so are building a detailed profile comprising demographic data, likes and purchases and other data that has been captured and purchased. As you ‘like’ posts and pages, so the algorithm delivers similar content back to you. Your friends like certain things, or ‘people like you’ like certain things, and the algorithm delivers more of that content to you too. You search for and purchase certain things, and you get delivered content related to that. Maybe you even give away valuable data via an innocuous-looking Facebook quiz, which is then sold to the highest bidder and fed into yet more algorithms to target you with stuff you might ‘like’.
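The mechanism behind “like something, get more of the same” can be boiled down to a deliberately tiny sketch. Everything here (the toy catalogue, the tag sets, the `recommend` function) is invented for illustration; real feed algorithms are vastly more complex, but the feedback loop is the same shape.

```python
# Toy like-based filtering: recommend the unseen item that shares
# the most tags with what you already liked.
from collections import Counter

catalog = {
    "cycling-news":   {"sport", "cycling"},
    "bike-reviews":   {"cycling", "gear"},
    "politics-op-ed": {"politics"},
}

def recommend(liked: set) -> str:
    """Score each unseen item by tag overlap with the liked items."""
    liked_tags = Counter(tag for item in liked for tag in catalog[item])
    unseen = [item for item in catalog if item not in liked]
    return max(unseen, key=lambda item: sum(liked_tags[t] for t in catalog[item]))

print(recommend({"cycling-news"}))  # bike-reviews
```

Because each recommendation you accept adds more of the same tags to your profile, the loop narrows what you see over time, which is the echo chamber effect in miniature.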

The quantified cyclist: analysing Strava data using R

Reading Time: 6 minutes


This post was edited on 09 May 2017 to add some clarity around authenticating with the Strava API.


In 2016 I made a commitment to myself to record every cycle ride I made. As both a leisure cyclist and cycle commuter, I was keen to know how far I rode in a year, what the accumulated distance of my daily commute was, and what distance I covered on my leisure rides. I already recorded my weekend rides in a phone app called Strava, so it was pretty easy to get into the habit of clicking a button on my phone every time I set off on a cycle commute too.
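The original analysis was done in R, but as a rough language-agnostic sketch, pulling a page of rides comes down to one authenticated call to the Strava v3 API. The `ACCESS_TOKEN` placeholder stands in for a token obtained via Strava’s OAuth flow (the part the edit note above refers to); the helper name `activities_request` is my own.

```python
# Sketch of building the authenticated request for a page of the
# athlete's activities from the Strava v3 API. "ACCESS_TOKEN" is a
# placeholder for a token obtained via Strava's OAuth flow.
import urllib.parse
import urllib.request

API_BASE = "https://www.strava.com/api/v3"

def activities_request(token, page=1, per_page=50):
    """Build a GET /athlete/activities request with a Bearer token."""
    query = urllib.parse.urlencode({"page": page, "per_page": per_page})
    url = f"{API_BASE}/athlete/activities?{query}"
    return urllib.request.Request(url, headers={"Authorization": f"Bearer {token}"})

req = activities_request("ACCESS_TOKEN")
print(req.full_url)
# urllib.request.urlopen(req) would then return JSON activities,
# each carrying fields such as distance (in metres) to sum up.
```

Summing the `distance` field across pages gives the yearly totals the post set out to find.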