Our logical supply chains are getting ever better. The combination of well-designed process, ubiquitous use of algorithms, and increasingly the benefits of machine (deep) learning ensures it.
However, as I learn more about the nature of machine learning in particular, I believe that unless we are mindful, we will end up with an important schism.
There is also an “emotional supply chain”. It starts the moment we become aware of the person or organisation we are tempted to deal with, and ends only through wilful neglect.
To think this through, I often use David Rock’s SCARF model as a template when looking at issues of engagement. The model has five components, flexible enough and well enough researched to adapt. The following categories are the ones I have used in relation to the emotional supply chain:
Status. How do I feel in relation to you – superior, inferior or equal? Do I feel patronised, or respected?
Certainty. How confident do I feel you will deliver what you say you will? What’s your reputation?
Autonomy. How much control do I have in this transaction? How will you respond to questions? Am I more than a passenger in this process?
Relationship. How will we get on? Can I trust you? What are your values? Do you live them?
Fairness. Will you treat me fairly, or will this just be a transaction to you?
These five are grounded in comprehensive research, and align closely with other studies on engagement (David Rock’s work is well worth looking at).
Quite simply, no matter how good our logical supply chain, every glitch in the emotional supply chain hurts us: a long queue, “our agents are unusually busy today”, an indifferent call-centre operator, a challenge to your complaint – the list is extensive. Any of these makes it more likely the transaction will be a one-off, rather than an ongoing relationship.
For a while, I’ve been considering the impact of the “human” aspect of the design of algorithms on client engagement, and how we might improve that.
I’m now looking at the nature of machine learning – in effect, algorithms designed by algorithms – to reflect on its impact.
Machine learning doesn’t need to understand why what it does works. It relentlessly applies varieties of A/B testing to find out what works – not why – and in most cases couldn’t tell us why it worked even if we interrogated it.
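To make that concrete, here is a minimal sketch of the “what works, not why” loop – a simple epsilon-greedy A/B/n test. The variant names and their hidden success rates are invented for illustration; the point is that the learner only ever sees win/lose feedback, and ends up favouring the best variant without building any model of why it wins.

```python
import random

random.seed(42)

# Hidden from the learner: the "true" success rate of each variant.
TRUE_RATES = {"A": 0.05, "B": 0.15, "C": 0.08}

def run_bandit(trials=20_000, epsilon=0.1):
    """Epsilon-greedy A/B/n testing: explore occasionally, otherwise
    exploit whichever variant has the best observed success rate."""
    counts = {v: 0 for v in TRUE_RATES}
    wins = {v: 0 for v in TRUE_RATES}
    for _ in range(trials):
        if random.random() < epsilon:
            # Explore: try a random variant.
            variant = random.choice(list(TRUE_RATES))
        else:
            # Exploit: pick the variant with the best observed rate so far.
            variant = max(
                counts,
                key=lambda v: wins[v] / counts[v] if counts[v] else 0.0,
            )
        counts[variant] += 1
        # The learner observes only the outcome – never the reason for it.
        if random.random() < TRUE_RATES[variant]:
            wins[variant] += 1
    return counts, wins

counts, wins = run_bandit()
best = max(counts, key=counts.get)
# Typically converges on B, the variant with the highest hidden rate –
# with no explanation of why B works better than A or C.
print("Most-played variant:", best)
```

Nothing in the loop ever represents *why* a variant succeeds; the observed rates are the entire “understanding” the system has, which is exactly the gap the next paragraphs describe.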
As humans, we work the other way round: we need to understand why something works before we can trust it and replicate it. This opens up a potential chasm between the humans in the relationship.
“Why did you do that?”
“I don’t know, but it worked.”
This technology is hugely powerful and transformative, but if we want relationships to thrive, we need to involve the flawed, curious, mistake-prone, but loving human.