
A Tale of Two Sciences

Alright, yeah. The business world went a little data-mad. We knelt to worship at the altar of big data. It was all we needed!


“Data – not oil – has become the world’s most valuable resource,” opined The Economist in 2017. And so we dived in head first, keen not to be left behind by the ‘transformational’ promise of what data science could do for us – and the bottom line.


We trusted in data. We had more than we knew what to do with.


“The data doesn’t lie,” we said to one another.


Until it did.


Many, many times.


Because, although some models may be autonomous, data on its own is not. Data doesn’t speak for itself; it requires analysis and interpretation.[1]

Human judgement is an integral part of good data analysis, and when we omit this critical, inferential aspect of data modelling, the effects can be catastrophic.

Enter the hall of machine learning failures: Amazon’s AI recruitment tool for technical roles that recommended only men; a chatbot designed to reduce the workload of French doctors that, when asked by a patient whether he should kill himself, replied “I think you should”; an error in a valuation algorithm that caused an online real estate marketplace to write down $304 million and shed a quarter of its workforce; a ball-tracking streaming service for football fans that followed not the ball but a referee’s bald head. And these are just a few of the failures we know about. There is no shortage of machine learning f**k-ups.

If you’re wondering why we suddenly jumped from talking about data science to machine learning, let’s quickly clarify: one begets the other. Machine Learning (ML) is a form of Artificial Intelligence (AI) that improves its predictions of outcomes as it is exposed to new data. These models can give organisations a competitive advantage – largely because machine learning algorithms can analyse huge amounts of data and make very accurate predictions. However, we argue that, within our industry, data science shouldn’t eclipse established approaches already found in social science.

Much has been written about the use of AI, machine learning and data science in recruitment – effectively a faceless computer helpfully narrowing the recruitment pool, or sometimes even recommending the right person for the job. A host of companies have thrived on this offering – Pymetrics probably the most famous. But AI, despite its appellation, is not truly intelligent. It mimics aspects of human reasoning through the application of mathematical rules and logic, often seeking to maximise certain parameters. It is hyper-rational, and in that domain it will always outperform a human. Magnus Carlsen, the highest-rated player in the history of chess, will never beat the most powerful chess engines.

But humans are not hyper-rational, and hyper-rational models will never be able to fully account for the rich, diverse and complex features of human cognition. This has been well understood for some time: behavioural economics emerged as a field of study precisely because traditional econometric models could not account for human “irrationality”. But it is this irrationality that makes us human. What drives a person to throw themselves in front of a moving vehicle to save the life of a child they have never met? Or to donate money to people across the world in the face of human tragedy and suffering? Or to accept a personal cost in the face of the climate emergency in order to preserve the world for future generations?

Irrationality will oftentimes lead to suboptimal decision making – but only if you believe that hyper-rational decision making is the optimal kind. As human beings, we are often willing to trade off maximised outcomes when they conflict with innate human values.

As a company, we get asked all the time whether we use AI and machine learning in our consulting. And yes, we do. A little. But the machine learning techniques we use in our predictive modelling exist only in support of our core work of statistical inference. We seek to predict and to explain, because in the absence of inference you will always tend towards the hyper-rational outcome.


From its conception, Chemistry has relied on statistical approaches to analyse data in order to reduce uncertainty, make reliable inferences and reasonably predict the seemingly unpredictable – the unpredictable here being the future work performance of an individual.

In his recent co-authored book Noise, Daniel Kahneman, one of the godfathers of behavioural economics, writes: “It is safe to assume that one candidate looks like a stronger candidate on paper, but it is not at all safe to assert that this person will be a more successful executive than another, because gazing into the future is deeply uncertain.” And it is precisely this uncertainty that we must embrace.

Statistics is the “mathematical process of collecting, organising, analysing and communicating data in order to minimise uncertainty and mitigate risk”. It falls firmly within the field of mathematics. Data science, by contrast, is multidisciplinary: it does not sit neatly within one field but straddles computer science, maths and statistics – and, crucially, it includes algorithms, sets of rules by which a computer will analyse data. Data science is applied to large data sets, whereas statistics tends to use smaller samples – which, by the way, is not a disadvantage (we’ll come back to that later).


Put crudely, data science (and by extension ML and AI[2]) “enables machines to improve at tasks with experience.” Those last two words, “with experience”, are crucial to our argument: data science is backward looking, constrained to using sufficient amounts of relevant historical data to predict the future. It cannot cope with a change of context that is not accounted for in the data it is fed.
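To see what that means in practice, here is a toy sketch – our own construction, with invented numbers – of a model that scores highly on the regime it was trained in, then collapses the moment the context shifts in a way its training data never captured:

```python
# A toy illustration (invented numbers) of a model that performs well
# on its training regime and fails when the context changes.

import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Training regime: the outcome rises steadily with the input.
x_train = rng.uniform(0, 5, size=(200, 1))
y_train = 2.0 * x_train[:, 0] + rng.normal(0, 0.3, size=200)

model = LinearRegression().fit(x_train, y_train)

# Context change: in the new regime an unmodelled factor dominates
# and the old relationship no longer holds.
x_new = rng.uniform(5, 10, size=(200, 1))
y_new = 10.0 + rng.normal(0, 0.3, size=200)

print("R^2 on the training regime:", round(model.score(x_train, y_train), 2))  # ~0.99
print("R^2 after the context shift:", round(model.score(x_new, y_new), 2))     # far below zero
```

The model has learned the past perfectly and is still hopeless about the changed present – exactly the failure mode the Biles example below makes vivid.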

In contrast, a purely inferential statistical approach “seeks to confirm how reasonable it is to conclude a hypothesis”. It is a confirmatory approach, while data science is more exploratory – a “black box” that often creates more questions than answers. And in hiring, who needs that?

Let’s use the example of the extraordinary US gymnast Simone Biles. Using a machine learning approach based on her previous performance data – i.e., what we know about her, the data we have fed the machine – she would be the overwhelming favourite for Olympic gold at Tokyo, being one of the highest-performing gymnasts on record. Yet, going into the games, what the algorithm cannot take into account is Biles’ expressed concern for her mental health. The context has changed; doubt is cast on her performance. How does an algorithm account for this human condition? It can’t. Which is why, according to Kahneman, “The predictive accuracy of AI models when it comes to human behaviour remains disappointingly low.”

Instead, using a statistical approach – specifically Bayesian inference – we can account for Biles’ changed state of mind in our prediction, because we believe it will ultimately impact her performance. We can reset our understanding of what we believe to be likely. We can call this more ‘forward looking’ because, in the specific case of Biles, no mass of historical data can by itself predict a single event in her life: whether or not she will win a gold medal. Using conditional models, we can calculate a statistically likely outcome much more accurately. This is what is known as Bayesian updating, and it enables us to assess risk by communicating a range of potential outcomes.
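To make Bayesian updating concrete, here is a minimal sketch using a Beta-Binomial model. Every number in it is a hypothetical illustration rather than real performance data, and the scenario is deliberately simplified:

```python
# A minimal sketch of Bayesian updating with a Beta-Binomial model.
# All figures are hypothetical illustrations, not real athlete data.

from scipy import stats

# Prior belief: historical form suggests roughly a 90% chance of a
# gold-medal-standard performance. Beta(18, 2) has mean 0.9 and
# carries the weight of about 20 prior observations.
prior = stats.beta(18, 2)

# New evidence once the context changes: suppose recent routines under
# stress yield 2 successes in 6 attempts (again, made-up numbers).
successes, failures = 2, 4

# With a Beta prior and binomial evidence, the update is just addition:
# posterior = Beta(alpha + successes, beta + failures).
posterior = stats.beta(18 + successes, 2 + failures)

lo, hi = posterior.interval(0.90)  # 90% credible interval
print(f"prior mean:     {prior.mean():.2f}")      # ~0.90
print(f"posterior mean: {posterior.mean():.2f}")  # ~0.77
print(f"90% interval:   [{lo:.2f}, {hi:.2f}]")
```

The particular numbers matter less than the shape of the answer: a distribution of plausible outcomes that shifts as the context does, rather than a single frozen score.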

“Bayesian inference has long been a method of choice in academic science…it natively incorporates the idea of confidence, it performs well with sparse data, and the model and results are highly interpretable and easy to understand. It is simple to use what you know about the world along with a relatively small or messy data set to predict what the world might look like in the future.”[3]

Like we said earlier, big data sets aren’t necessarily the most appropriate or useful. 

And herein lies the crux.

We believe that data science should not be employed exclusively or in lieu of social science. 

And we’re not the only ones. Many others argue the same, including Jason Radford and Kenneth Joseph in their paper The Uses of Social Theory in Machine Learning. Well informed and optimal decision making requires both.


“The combination of machine learning methods and big social data offers us an exciting array of scientific possibilities.

“However, work in this area too often privileges machine learning models that perform well over models that are founded in a deeper understanding of the society under study.”

They conclude that “in incorporating social theory into their work, machine learning researchers need not relinquish model performance as the ultimate goal…that, instead, theory can help guide the path to even better models and predictive performance.”


At Chemistry, we always look to use the most appropriate model for every problem, because our work over the past 18 years has given us a deep understanding of human behaviour – one that needs to be accounted for, and capitalised upon, for the benefit of our clients. We witness every day how individuals display a shared understanding shaped by their education, their present and past work cultures, and other factors.

Within the neuroscience literature on social cognition, it has been demonstrated that humans use social cues to develop their understanding and perceptions of themselves and of those around them – social cues that cannot easily be codified into an algorithm. By sharing these perceptions, we arrive at a space where we can mutually agree on an understanding (sometimes empirically evidenced) of a problem or a situation, with a specified measure of confidence.

This approach enables us to draw better conclusions about performance potential – conclusions derived from the traits that help individuals maximise their own potential, which in turn feeds organisational growth.

In the course of our analyses we have seen teams behave differently depending on who the team leader is – a difference that can be attributed to management and leadership style, but also to the shared or common understanding of the requirements of the team and the organisation. This is the kind of knowledge we need to incorporate into our predictive models, and the kind that a strict data science approach cannot accommodate.

A statistical approach enables us to encode our expert opinion and specific knowledge into the predictive model. We study the data and derive an understanding of context through the socio-demographic measures we deploy and the client data we collect. We then model the relationships across those measures, conditioned to maximise individual performance, and finally infer confidence around the predicted growth of an individual or team (or any other outcome of interest).
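As a loose illustration of what encoding opinion into a model can look like – a minimal sketch with invented names and values, not our production model – consider a Bayesian regression whose prior for a trait’s effect is centred where expert judgement says it should be, rather than at zero:

```python
# A minimal sketch (not Chemistry's production model) of encoding
# expert opinion as an informative prior in a Bayesian regression.
# All variable names and values are invented for illustration.

import numpy as np
import pymc as pm
import arviz as az

rng = np.random.default_rng(42)

# Hypothetical data: one behavioural score per person, plus an
# observed performance outcome, for 50 people.
trait_score = rng.normal(0.0, 1.0, size=50)
performance = 0.6 * trait_score + rng.normal(0.0, 0.5, size=50)

with pm.Model():
    # Expert opinion enters here: we believe the trait helps
    # performance, so the slope prior is centred on a positive value.
    slope = pm.Normal("slope", mu=0.5, sigma=0.3)
    intercept = pm.Normal("intercept", mu=0.0, sigma=1.0)
    noise = pm.HalfNormal("noise", sigma=1.0)

    pm.Normal("obs", mu=intercept + slope * trait_score,
              sigma=noise, observed=performance)

    # Sampling yields a full posterior distribution, not one answer.
    idata = pm.sample(1000, tune=1000, chains=2, random_seed=42)

# Report a range of plausible effect sizes, with uncertainty attached.
print(az.summary(idata, var_names=["slope"]))
```

The output is a credible range for the effect rather than a single point score.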

This approach differs from strict machine learning because it doesn’t yield just one answer. We deliver a complete picture of all potential scenarios surrounding an individual's growth and potential. We seek to give our clients the information that allows them to make the optimal decision based on their own unique needs and values.

Compared with the ‘black boxes’ of machine learning algorithms, which would you rather draw upon to make confident people hires? 

REFERENCES

[1] Professor Martin Schweinsberg.
[2] We refer to Artificial Intelligence as computers mimicking human intelligence using logic, ‘if-then’ rules, decision trees and machine learning to simulate human thinking capability and behaviour. Perhaps the Stanford University definition is most useful: “the idea of getting computers to act without being explicitly programmed.”
[3] Data scientist Peadar Coyle.
