Understanding People using AI (and bots!)

‘Call and response’ bots are nothing new, but by making use of AI and Cognitive Services, can we build bots which make a positive impact?

Tom Hind
Oct 21, 2020 · 6 min read

Over the past 15 months at Akari, one of my key focusses has been building out bot and application solutions, both internally and with our customers from mid-enterprise organisations. Bots and applications within chat platforms like Slack and Microsoft Teams are nothing new; they typically connect to a larger platform application to carry out tasks, or retrieve data to present to a user in a conversational format.
Bot technology has seen a resurgence in the last few years, with the major cloud platforms offering different ways to build bots and run them without deploying infrastructure. The key technologies I’ve used across deployment and experimentation are the Azure Bot Service, the Bot Framework and Azure Cognitive Services.

Within Akari we utilise the Azure Bot Service and Bot Framework for all bot development; this allows us to make easier use of Cognitive Services, as well as key cloud-native technologies to scale out bot deployments.
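
To make that concrete, here is a minimal sketch of a Bot Framework message handler using the Python SDK (`botbuilder-core`); the `AvaBot` class name and echo reply are illustrative, not AVA’s actual implementation:

```python
# Minimal Bot Framework activity handler (Python SDK, botbuilder-core).
# AvaBot and the reply text are illustrative placeholders.
from botbuilder.core import ActivityHandler, TurnContext

class AvaBot(ActivityHandler):
    async def on_message_activity(self, turn_context: TurnContext):
        # Every incoming user message lands here; this is the point
        # where transcript capture and Cognitive Services calls hang off.
        text = turn_context.activity.text or ""
        await turn_context.send_activity(f"You said: {text}")
```

The Azure Bot Service hosts the endpoint and handles the channel plumbing for Teams, so the handler itself stays small.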

Outside of the platform components of bot building, one of my key focusses has been how we can understand people through conversational interfaces. We started the development of AVA (a virtual assistant built for Microsoft Teams) with a focus on accessibility: assisting users who may need extra help finding information, or prompting them to carry out tasks or connect with peers. AVA grew with a focus on education and professionals, and as we investigated the potential ‘skills’ AVA could have, we learned our first lesson.

When you look at the Teams app store, or the app store on your phone, most apps do one thing very well most of the time; very few are hub applications leading on to other services and solutions. For the most part, organisations are moving away from central ‘single pane of glass’ applications towards modular services which users pick and choose when to engage with. So at the start of this year, we began to trim what AVA could be down to two feature sets: FAQ and wellbeing. This allowed us to focus in on the data we would collect.

From this, our goal quickly turned to identifying needs and requirements. As individuals interact with a chatbot, is it possible to identify, through a text-only medium, where an individual might need extra assistance, help or intervention, depending on the interaction context? That might mean assisted learning tools, or accessibility tools across the web and productivity suites. People tend to interact with bots and digital interfaces differently than they would with a person. I would probably have been an example of this as a student: I probably wouldn’t have asked for help from a lecturer or via an official channel. A chatbot or virtual assistant can strip away that sense of formality when capturing information from a person, and this is something we’ve seen across both education and business as we’ve built applications out.

The key technical goal was to use machine learning to identify, beyond accessibility requirements and assisted needs, where somebody might be in a vulnerable situation or unhappy with the situation they are in, outside of contexts where a person clearly requires intervention due to thoughts expressed directly to the bot.

From interactions with any bot, using text or speech, the typical output will be some sort of transcript, either through a speech-to-text service or by capturing messages directly through bot state and logging. This is typically done anonymously, or using application-specific identifiers, depending on the use case, and it should then be possible to analyse that data. As nice as word clouds might look, they generally provide very little insight into overall thoughts and interactions unless you’re searching for a specific term outside of context. Analysing this data in a meaningful way therefore has to be contextual or use-case driven: an educational organisation will assess how users are utilising a bot in a significantly different way to a commercial or non-profit organisation, but there will always be cross-over in the data points collected and the insights to be inferred from them.
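
As a sketch of what that capture can look like, assuming messages are logged from the bot’s turn handler, the snippet below pseudonymises the user ID before anything is stored; the salt and the in-memory store are placeholders for a real secret and data store:

```python
# A sketch of anonymised transcript capture; SALT and the in-memory
# list stand in for a real secret and a real data store.
import hashlib
import json
import time

SALT = b"rotate-this-secret"  # illustrative only; keep out of source control

def pseudonymise(user_id: str) -> str:
    # A salted hash gives a stable, application-specific identifier
    # without storing the real user ID alongside the transcript.
    return hashlib.sha256(SALT + user_id.encode("utf-8")).hexdigest()[:16]

def log_message(store: list, user_id: str, text: str) -> None:
    store.append({
        "user": pseudonymise(user_id),
        "text": text,
        "ts": time.time(),
    })

transcript: list = []
log_message(transcript, "29:1a2b-teams-user", "How do I reset my password?")
print(json.dumps(transcript, indent=2))
```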

Typically, bots are designed to carry out a task or display information for a user to act on. Where a bot is designed to have a personality, which may be professional, witty or even caring depending on its audience, that personality matters too: it can affect what information a user will offer the bot, as if it were a conversation with a person. To understand how people interact with a bot we had to look at a few points:

  • How are people asking questions to our bot?
  • What types of questions are people asking?
  • Are people sending general chit chat or attempting conversations with the bot?
  • Are people repeating spelling mistakes, or asking questions in a wildly different way to their peers? (A simple heuristic for this is sketched below.)
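
The last point lends itself to a simple heuristic: consecutive messages from the same user that are near-duplicates of each other are often a corrected re-send. A sketch, with an illustrative similarity threshold:

```python
# A hypothetical heuristic for spotting re-sent, corrected queries:
# near-duplicate consecutive messages usually indicate a typo fix.
from difflib import SequenceMatcher

def looks_like_correction(previous: str, current: str,
                          threshold: float = 0.8) -> bool:
    if not previous or not current:
        return False
    ratio = SequenceMatcher(None, previous.lower(), current.lower()).ratio()
    # Similar but not identical: likely a corrected re-send.
    return threshold <= ratio < 1.0

print(looks_like_correction("how do i reset my pasword",
                            "how do i reset my password"))  # True
```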

Where we built wellbeing into AVA, we also wanted the bot to ask users questions about wellbeing and mental health on a schedule natural to them; usually two questions at a time, presented in a friendly, feedback-gathering manner. These questions could be anything from “How are you feeling?” to “Do you feel like you need to connect with your manager?”, again driven by context; a student may be asked about sexual health or interactions with peers. From this user-driven data we can then identify whether someone has started a feedback loop or requires more information, as well as anonymise the text and feed it into a data store.
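
A minimal sketch of how context-driven check-ins like this can be modelled; the question pools and the `pick_checkin_questions` helper are hypothetical, though the two-question default mirrors the approach above:

```python
# A sketch of context-driven wellbeing check-ins; question pools and
# helper names are illustrative, not AVA's actual question set.
import random

QUESTION_POOLS = {
    "workplace": [
        "How are you feeling?",
        "Do you feel like you need to connect with your manager?",
        "Is your workload manageable this week?",
    ],
    "education": [
        "How are you feeling?",
        "Are you keeping in touch with your classmates?",
        "Is there anything you'd like extra support with?",
    ],
}

def pick_checkin_questions(context: str, count: int = 2) -> list:
    # Two questions at a time, drawn from the pool for this audience.
    pool = QUESTION_POOLS.get(context, QUESTION_POOLS["workplace"])
    return random.sample(pool, min(count, len(pool)))

print(pick_checkin_questions("education"))
```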

With enough transcript message data, we were able to filter it down and begin to think about what questions we wanted to ask of our data, as well as match those questions to solutions within Azure. The three key identifiers we wanted to analyse were sentiment and opinion, spelling and re-sent queries, and overall text analysis.

Sentiment analysis let us gather general sentiment for targeted feedback, report it as a metric to an organisation, and tune how our bot interacted with a user. A significantly challenging point we are still working through is how to reliably identify the opinions users give through wellbeing feedback, which may be opinions on a recognisable entity within that text, and how to train the ML service’s model to consistently recognise those entities.

With the focus on accessibility, analysing how people spell and use grammar, re-send corrected queries to a bot, or make repeated mistakes allowed us to identify groups or pools of users who may need extra assistance when using the tool, or another service which the bot can answer questions on. On top of those two points, the overall text analysis allows us to identify vulnerability triggers; for example, in a care or education context, whether an individual has expressed that they do not feel safe or happy with the situation they are in, and whether extra support should be provided.
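
The sentiment and opinion piece maps naturally onto the Text Analytics service in Cognitive Services, whose opinion mining links an assessment to the entity it targets. A sketch using the `azure-ai-textanalytics` Python SDK; the endpoint, key and sample message are placeholders:

```python
# Sentiment and opinion mining over transcript messages with Azure
# Text Analytics (azure-ai-textanalytics); endpoint and key are placeholders.
from azure.ai.textanalytics import TextAnalyticsClient
from azure.core.credentials import AzureKeyCredential

client = TextAnalyticsClient(
    endpoint="https://<your-resource>.cognitiveservices.azure.com/",
    credential=AzureKeyCredential("<your-key>"),
)

def analyse_feedback(messages: list) -> None:
    results = client.analyze_sentiment(messages, show_opinion_mining=True)
    for doc in results:
        if doc.is_error:
            continue
        print(doc.sentiment, doc.confidence_scores)
        for sentence in doc.sentences:
            # Opinion mining pairs an assessment ("unhelpful") with the
            # target entity it applies to ("the portal").
            for opinion in sentence.mined_opinions:
                for assessment in opinion.assessments:
                    print(f"  {opinion.target.text}: "
                          f"{assessment.text} ({assessment.sentiment})")

analyse_feedback(["The portal is unhelpful but the bot replies quickly."])
```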

A key difficulty when building a bot which interacts with individuals as an assistant, rather than as a task bot, is getting users to interact in a way where they feel safe providing information. For example, if the sentiment of a conversation is dropping, should the bot pivot in personality and switch to a caring or professional model, depending on the opinions being expressed by the user?
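
A hypothetical sketch of that pivot: track a short rolling window of per-message sentiment scores and switch persona when the average drops. The persona names, window size and threshold are all illustrative:

```python
# A hypothetical persona selector: pivot to a caring personality model
# when rolling conversation sentiment drops. Thresholds are illustrative.
from collections import deque

class PersonaSelector:
    def __init__(self, window: int = 3, threshold: float = -0.3):
        self.scores = deque(maxlen=window)
        self.threshold = threshold

    def update(self, score: float) -> str:
        # score in [-1, 1], e.g. positive minus negative confidence
        # from the sentiment analysis above.
        self.scores.append(score)
        average = sum(self.scores) / len(self.scores)
        return "caring" if average < self.threshold else "witty"

selector = PersonaSelector()
for s in (0.2, -0.6, -0.8):
    print(selector.update(s))  # witty, witty, caring
```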

One of the main lessons learned from building AVA and other bots is that it is easy to throw technology and solutions at data and attempt to infer information and statistics from whatever is being collected. Outside of building and training your own machine learning model, pre-built solutions make it easy to do exactly this without extracting any immediate value; having questions which are to be asked of the data is key.

Making a positive impact through a chatbot has always been an interesting goal, and as the above shows, it requires thought not only on what the bot is going to ‘do’ when interacting with users, but also on the questions to be asked of the data the bot can collect, outside of a pure data-collection context. I’ve built and used chatbots for FAQ, wellbeing and application assistance, and designed digital interfaces for interacting with populations who typically may not provide data directly to a person, which I will write about in the future.

When thinking about how bots can make a positive impact, it is entirely possible: not only through the data which is collected and the information which can be gathered from it, but because from that information it is possible to provide an individual with assistance or support.

Tom Hind

Working in the Cyber Security space with a focus on cloud service enablement.