The Emotional Supply Chain?

Our logical supply chains are getting ever better. The combination of well-designed process, the ubiquitous use of algorithms and, increasingly, the benefits of machine (deep) learning ensures it.

However, as I learn more about the nature of machine learning in particular, I believe that unless we are mindful, we will end up with an important schism.

There is an “emotional supply chain”. It starts when we become aware of the person or organisation we are tempted to deal with, and it ends only through wilful neglect.

To think this through, I often use David Rock’s SCARF model as a template when looking at issues of engagement. The model has five components, which are both flexible enough and well researched enough to be adaptable. The following categories are ones I have used in relation to the emotional supply chain:

Status. How do I feel in relation to you – superior, inferior or equal? Do I feel patronised, or respected?

Certainty. How confident do I feel you will deliver what you say you will? What’s your reputation?

Autonomy. How much control do I have in this transaction? How will you respond to questions? Am I more than a passenger in this process?

Relationship. How will we get on? Can I trust you? What are your values? Do you live them?

Fairness. Will you treat me fairly, or will this just be a transaction to you?

These five are based on comprehensive research and align closely with other studies on engagement (David Rock’s work is well worth looking at).

Quite simply, no matter how good our logical supply chain, every glitch in the emotional supply chain – a long queue, “our agents are unusually busy today”, an indifferent call-centre operator, a challenge to your complaint (the list is extensive) – makes it more likely the transaction will be a one-off rather than an ongoing relationship.

For a while, I’ve been considering the impact of the “human” aspect of the design of algorithms on client engagement, and how we might improve that.

I’m now looking at the nature of machine learning – in effect, algorithms designed by algorithms – to reflect on its impact.

Machine learning doesn’t need to understand why what it does works. It relentlessly applies varieties of A/B testing to find what works – not why – and in most cases couldn’t tell us why it worked even if we interrogated it.

As humans, we work the other way round: we need to understand why something works before we can replicate it. This opens up a potential chasm between the humans in the relationship.

“Why did you do that?”

“I don’t know, but it worked”
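The “what works, not why” dynamic can be sketched in a few lines of code. The following is a hypothetical illustration, not any real system: a simple epsilon-greedy A/B tester that reliably converges on the better-performing variant while holding no model whatsoever of why that variant performs better. The conversion rates are invented for the example.

```python
import random

def run_ab_test(rates, trials=10_000, epsilon=0.1, seed=42):
    """Epsilon-greedy A/B tester: finds *what* works, never *why*."""
    rng = random.Random(seed)
    counts = [0] * len(rates)      # times each variant was shown
    successes = [0] * len(rates)   # conversions per variant

    def best_so_far():
        # Variant with the highest observed conversion rate
        return max(range(len(rates)),
                   key=lambda i: successes[i] / counts[i] if counts[i] else 0.0)

    for _ in range(trials):
        if rng.random() < epsilon:
            arm = rng.randrange(len(rates))   # explore a random variant
        else:
            arm = best_so_far()               # exploit the current winner
        counts[arm] += 1
        if rng.random() < rates[arm]:         # simulated customer response
            successes[arm] += 1

    # The tester returns a winner, but contains no explanation of it
    return best_so_far()

winner = run_ab_test([0.05, 0.12])
```

Interrogate the tester afterwards and all it can offer is its tallies – exactly the “I don’t know, but it worked” of the exchange above.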

This technology is hugely powerful and transformative, but if we want relationships to thrive, we need to involve the flawed, curious, mistake-prone, but loving human.

The Glitch

An interesting exchange between a passenger and a ticket inspector on a journey into St. Pancras on Wednesday morning.

The passenger asks for a ticket. The inspector takes payment by card. The debit apparently shows on the passenger’s bank app but, due to poor WiFi, the credit does not show on the inspector’s machine.

Which is where it got interesting. Both passenger and inspector were courteous and polite – no aggro – but the machine became the centre of the conversation. The machine became the glitch:

– “you’ll have to pay again, because my machine doesn’t show the payment”

– “but my app says I have”

– “well, you’ll still need to pay again. My machine doesn’t recognise your payment. If you end up paying twice, we’ll refund it”

What really struck me was that the algorithms were centre stage. At no point did empathy show up. The end result was an inspector who spent probably 20 minutes politely aggravating a customer who was convinced she had already paid.

Lose/Lose for the humans, win for the algorithms.

And both humans just accepted it.

Those businesses that can put humanity centre stage are the future winners.

Do algorithms need psychotherapists?

I’ve become more and more curious about the power of algorithms. They are wonderful things that can take mind numbing hard work out of routine processes, freeing humans to do more meaningful work.

However.

Who writes the algorithm? When you think about it, whoever writes the algorithm encodes their own worldview, biases, heuristics and experience in enduring digital form. That’s a thought.

Algorithms are normally written by engineers – people with powerful, logical brains, skilled in getting from A to B via the most direct route. It is therefore not too fanciful to imagine that the algorithms they write take on something of their creator’s psyche.

Engineers are vital to our society, and we don’t have enough of them. That said, whilst all are different, software engineers, generally speaking, are not renowned for empathy.

So, if I want to create an algorithm for a customer service interface, who should write it? – somebody with really good programming skills, or somebody with real empathy?

I came across this article from Psychology Today as I explored this idea, and it gave me real pause for thought.

There’s a view that up to half of routine jobs will be replaced by algorithms of various flavours by 2030. That’s (probably) an extreme estimate, but even if only partially correct, we will be replacing flawed, inefficient, but essentially human agents with (by definition) soulless algorithms in significant numbers.

That has potential to create emotional havoc.

There’s a huge opportunity here – to combine digital efficiency with empathy and compassion. It will, however, require a holistic approach to design – one which better represents the way we work together as humans.

As I’ve explored this area, I’m beginning to understand that just a few seconds working with an unsympathetic (but logically efficient) algorithm can very quickly screw up a potentially good experience and relationship.

The future of brand reputation relies, I suspect, on getting the right balance between the messy human and the efficient algorithm.