The Emotional Supply Chain?

Our logical supply chains are getting ever better. The combination of well-designed process, the ubiquitous use of algorithms and, increasingly, the benefits of machine (deep) learning ensures it.

However, as I learn more about the nature of machine learning in particular, I believe that unless we are mindful, we will end up with an important schism.

There is an “emotional supply chain”. It starts when we become aware of the person or organisation we are tempted to deal with, and ends only through wilful neglect.

To think this through, I often use David Rock’s SCARF model as a template when looking at issues of engagement. The model has five components, which are both flexible enough and well researched enough to be adaptable. The following categories are ones I have used in relation to the emotional supply chain:

Status. How do I feel in relation to you – superior, inferior or equal? Do I feel patronised, or respected?

Certainty. How confident do I feel you will deliver what you say you will? What’s your reputation?

Autonomy. How much control do I have in this transaction? How will you respond to questions? Am I more than a passenger in this process?

Relationship. How will we get on? Can I trust you? What are your values? Do you live them?

Fairness. Will you treat me fairly, or will this just be a transaction to you?

These five categories are based on comprehensive research and align closely with other studies on engagement (David Rock’s work is well worth looking at).

Quite simply, no matter how good our logical supply chain, every glitch in the emotional supply chain – a long queue, “our agents are unusually busy today”, an indifferent call centre operator, a challenge to our complaint (the list is extensive) – makes it more likely the transaction will be a one-off rather than an ongoing relationship.

For a while, I’ve been considering the impact of the “human” aspect of the design of algorithms on client engagement, and how we might improve that.

I’m now looking at the nature of machine learning – in effect, algorithms designed by algorithms – and reflecting on the impact of that.

Machine learning doesn’t need to understand why what it does works. It relentlessly applies varieties of A/B testing to find out what works – not why – and in most cases couldn’t tell us why it worked if we interrogated it.
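As a rough illustration only (a hypothetical sketch with made-up variants and numbers, not anything a real system would run), the snippet below plays out a simple epsilon-greedy experiment over two message variants: it steadily settles on whichever variant converts better, while holding no representation at all of why it works.

```python
import random

# Hypothetical conversion rates for two message variants.
# The optimiser never sees these numbers; it only observes outcomes.
TRUE_RATES = {"variant_a": 0.04, "variant_b": 0.06}

counts = {v: 0 for v in TRUE_RATES}      # times each variant was shown
successes = {v: 0 for v in TRUE_RATES}   # conversions observed per variant

def observed_rate(v):
    # Untried variants are treated optimistically so each gets tried at least once.
    return successes[v] / counts[v] if counts[v] else 1.0

def choose_variant(epsilon=0.1):
    """Mostly exploit the best-performing variant so far, occasionally explore."""
    if random.random() < epsilon:
        return random.choice(list(TRUE_RATES))
    return max(TRUE_RATES, key=observed_rate)

for _ in range(10_000):
    v = choose_variant()
    converted = random.random() < TRUE_RATES[v]  # simulated customer response
    counts[v] += 1
    successes[v] += converted

# The system ends up favouring the better variant, but all it can report is
# the observed rates; it has no account of why one message works better.
for v in TRUE_RATES:
    print(v, counts[v], round(observed_rate(v), 4))
```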

As humans, we work the other way round. We need to understand why something works so we can replicate it. This opens up a potential chasm between the humans in the relationship.

“Why did you do that?”

“I don’t know, but it worked”

This technology is hugely powerful and transformative, but if we want relationships to thrive, we need to involve the flawed, curious, mistake-prone, but loving human.

The Sociopathic Algorithm

In the last few days I have noticed some interesting issues on the downsides of automated systems.

First, British Airways, having been unable to resolve a pay and conditions dispute with their pilots, chose to accept strike action. One of the consequences has been customers being informed of cancellations in a less than organised way.

Then, Transport for London made damaging mistakes in implementing the congestion charge – essentially refusing payment and then fining people who had tried to pay.

There are doubtless many others – these were just ones that caught my eye.

It set me thinking. As we increasingly transfer routine processes from people to algorithms, business focus is principally on the efficiencies, but we have introduced a new player into any process that impacts humans.

If we’re dealing with a call centre, or a customer services representative, it is a human to human interaction. It may be constrained by controls and scripts, but we’re still dealing with a human. There will be a gap, depending on culture, communication and empathy – the “further away” we feel from whomever we are talking to, the less recognised we feel – but we’re still dealing with a human.

As that process gets transferred to automated systems, we’re dealing with an algorithm, and the nature of our interaction with it is determined by the skill – and, importantly, the empathy – of the team that writes it. When we deal with the algorithm, we’re not dealing with a human, we’re dealing with a proxy. It shows when things don’t go according to script or design.
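As a toy example (entirely hypothetical, with made-up intents and responses, not any real system), the fragment below shows how a scripted handler copes only with the cases its authors anticipated; anything off-script falls through to a catch-all, and the customer feels the gap.

```python
# A hypothetical scripted responder: it handles only the intents
# its authors thought to encode, and everything else falls through.
SCRIPT = {
    "cancel_booking": "Your booking has been cancelled.",
    "change_date": "Please enter the new travel date.",
    "request_refund": "Your refund request has been logged.",
}

def respond(intent: str) -> str:
    # The empathy available here is exactly the empathy the design team
    # wrote in; off-script requests get the catch-all below.
    return SCRIPT.get(intent, "Sorry, I didn't understand that. Goodbye.")

print(respond("cancel_booking"))                              # anticipated: handled smoothly
print(respond("my_flight_was_cancelled_and_i_am_stranded"))   # off-script: dead end
```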

So I think we have to be careful. It’s easy to offload messy emotions by creating algorithms to avoid the occasional messiness of human interaction, but the risk is high. As has often been said, it takes a lifetime to build a reputation, and seconds to destroy it.