Respect your #FinTech chatbot!

Imagine you've been queuing for ages in the rain at an ATM only to find it's now out of cash. Or you've been stuck in an endless loop in your bank's automated telephone system for the past half hour. I think a fair few of us may be forgiven a quiet expletive. The anger isn't really directed at anyone; it's just an outlet for the helplessness of the situation - and anyway, the stupid ATM doesn't care, right? Well, that may be about to change.

Chatbot natural language processing (NLP), the AI component that converts human language into machine instructions, has reached a tipping point. It has progressed beyond merely understanding questions and commands to also understanding the tone in which they are given. This poses an interesting ethical and moral question: should a chatbot tolerate bad language and abuse?
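To make the tone-detection point concrete, here is a minimal sketch of how a chatbot might flag a hostile message, using the off-the-shelf VADER sentiment analyser that ships with Python's NLTK. The labels and cut-offs are illustrative assumptions of mine, not anything a particular bank's NLP stack actually does.

```python
# Minimal tone-flagging sketch using NLTK's VADER sentiment analyser.
# The +/-0.5 cut-offs are illustrative assumptions, not tuned values.
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)  # one-off lexicon download

def classify_tone(message: str) -> str:
    """Label a customer message 'hostile', 'friendly' or 'neutral'."""
    compound = SentimentIntensityAnalyzer().polarity_scores(message)["compound"]
    if compound <= -0.5:
        return "hostile"
    if compound >= 0.5:
        return "friendly"
    return "neutral"

print(classify_tone("Why is this stupid ATM out of cash again?!"))  # likely 'hostile'
```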

I put this exact question to my Twitter audience (@cgledhill) last week and got 112 responses, with a surprising 34% saying that chatbots should tolerate abuse and bad language. I don't believe we have reached the point where a chatbot could be considered to have hurt 'feelings', but feelings or not, I think there are some good business reasons why a financial institution should care about the tone used with their chatbots.

Firstly, let's go back several decades to a time before computers. If you needed a loan you put on your best suit and went to see your bank manager. You were polite and courteous. You knew that the impression you made was just as important as the numbers you provided. The banker had total discretion to accept or deny a loan application, and bore the responsibility for a bad decision. Abuse or bad language would be a quick route to the door and a block on further dealings with the bank. These days bank staff have nearly zero discretion: they follow a script and are mostly a human front-end to a computer process for people not willing or able to deal directly with said computer.

Through chatbots, banks have an opportunity to better understand their customers and come full circle, using the customer's personality to influence the terms and conditions of their business relationship - e.g. credit-scoring customers based on psychometric analysis of their chatbot interactions. Think this can't happen? Head over to IBM Watson, paste in the last email you sent, and see what the AI deduces about your personality - you'll be shocked at the accuracy!

Given the knowledge that chatbot interactions could affect credit scores or benefits and rewards, I expect many of us would self-censor our interactions. But what of the rest? Should abuse and bad language still be tolerated? I believe not. Ultimately, if our goal is to make chatbots interact in a more human way - indeed, if our goal for AI in general is for it to be more 'human' - then allowing abuse of chatbots dehumanises the very thing we're trying to make more human.

So what are the alternatives? Well, blocking customers doesn't seem like good business sense unless it forms part of a wider strategy (e.g. "the bank for nice people!"), and referring angry customers on to a human operator seems like rewarding bad behaviour - and is not much fun for the poor folks manning the call centre. Better to encourage good behaviour through incentives and penalties. We've seen this in the sci-fi film Demolition Man, where citizens are fined a credit for bad language.

Banks are in a good position to implement a "swear jar" type model given they hold customer deposits, but fining customers won't do much for engagement. The solution, I believe, is far more subtle and comes down to respect. Chatbots could soon be the primary customer touchpoint for a lot of brands, and if customers don't respect the chatbot then they don't respect the brand. The traditional "customer is always right" mantra can be abandoned by chatbots in favour of an algorithmic approach whereby the respect, service and terms the customer receives are proportional to the respect, loyalty and cooperation they return. A challenger brand may decide to lower its respect threshold to win ruder customers, but that may well cost it more in the long run.
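As a thought experiment, that reciprocal-respect algorithm might look something like the sketch below: a per-customer score nudged by the tone of each interaction (as classified earlier) and mapped onto a service tier. Every name, weight and threshold here is a hypothetical illustration, not a description of any real bank's policy.

```python
# Hypothetical reciprocal-respect sketch: a running score per customer,
# adjusted by each interaction's tone and mapped to a service tier.
# All weights and thresholds are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class CustomerRespect:
    score: float = 0.5  # starts neutral; clamped to [0, 1]

    def record_interaction(self, tone: str) -> None:
        """Nudge the score by the tone of the latest chatbot exchange."""
        delta = {"friendly": 0.05, "neutral": 0.0, "hostile": -0.10}[tone]
        self.score = min(1.0, max(0.0, self.score + delta))

    def service_tier(self) -> str:
        """Map the running score onto the terms the customer receives."""
        if self.score >= 0.7:
            return "priority"    # e.g. better rates, faster responses
        if self.score >= 0.3:
            return "standard"
        return "restricted"      # e.g. limited self-service only

customer = CustomerRespect()
customer.record_interaction("hostile")
print(customer.score, customer.service_tier())  # 0.4 standard
```

On this sketch, a challenger brand "lowering its respect threshold" would simply mean choosing more forgiving cut-offs in service_tier.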

It may not come down to choice in the end. This week the European Parliament proposed a vote on giving robots legal status as "electronic persons". The intention is to hold AI responsible for its "acts or omissions" and to establish basic ethical principles. The regulation is weighted in favour of their human masters, but it won't be long before a "robot rights" lawyer points out that it's unfair to hold a robot responsible for its acts if we don't allow it to mitigate risk to the best of its abilities.

If all that hasn't persuaded you to respect your chatbots, then let me leave you with one last thought. In the 1980s film Short Circuit 2, Johnny 5 realises humans can be both good and bad, and takes action accordingly. If AI does reach a "singularity" moment in our lifetimes and gain consciousness, it'll form an opinion of you. That opinion won't be based on first impressions at that point in the future, but on the transcripts of chatbot interactions you've made going back years. AI's first impression of you starts now!


Robot Image Source: IBM Watson

Michael Novak

Responsible AI Business Exec | Leveraging ChatGPT AI, Digital Identity & Web3 to drive value.

7y

There is no way in %&!!@!! that I'm going to let a bot tell me what I *can* and can't &!^!!@! do. Then again, I remember an (updated) quote by Steven Wright: "Be nice to your bots. After all, they are going to choose your nursing home".

Alessandro Eskandar Hatami

Managing Partner @ Pacemakers.io | Fintech Innovation Strategist | Author | Non-Exec Director | Mentor | Investor

7y

If someone wants to abuse a chatbot, it's the same as someone putting abusive language in the search box on a website or app. The fact that the abuse generates no useful outcome for the abuser should be punishment enough. That said, verbal abuse can quickly escalate to physical violence - in that case I would make sure the vessel protecting the chatbot is strong enough, and I would remind the abuser that vandalism will be prosecuted.

Danny Matthews

Traction & Funding for Early-Stage Startups | Pre-Seed Accelerator Director | Startup Advisor | Speaker | For Progress 🌍

7y

Great POV, man or machine - relationships are reciprocal

James McLeod

Open Source Program Lead at NatWest Group - London.JS & Big Boost Mondays Founder - “Open Source, Opens Doors” 🌈💻✨

7y

Interesting point of view. I would expect interactions with chatbots to follow the same respect trends as their human counterparts. The consequences of bad customer behaviour should be the same.
