Wouldn't It Be Nice If

Ways of thinking about the world when technology and business collide

Do we need a “third way” in Systems?

Heresy or well-meaning intervention?

A couple of days ago I read the article Machine Behavior Needs to Be an Academic Discipline by Iyad Rahwan & Manuel Cebrian. The piece asks whether we need new ways of thinking about what we are potentially creating with our developments in Artificial Intelligence, or “AI”.

Before I discuss the article, let me consider the introduction to the late Donella Meadows’ book Thinking in Systems, where she describes how some of the thinking patterns generated by applying systems thinking may strike those of a more traditional outlook as “heretical”. As such, I hope this post is seen as neither “heretical” nor “luddite”, but as an attempt to articulate a problem which has hitherto been left unclear.

“Systems of AI”

For a few years now we’ve seen high-profile denouncements of AI by the likes of Elon Musk and the late Stephen Hawking. Many of these have been reported at a very high level, with little said about what these influential people actually thought the problems would be. Some recognise that AI could replace humans, but how, and when?

Looking at the current state of AI, it is mainly being used to augment people. By way of example, I’m presently looking at the possibility of using AI to scan many thousands of unstructured documents and propose metadata values, reducing the effort required from people who would otherwise read each and every document and fill in tables by hand. I’ll also point to autonomous vehicles as an exception here, but they are probably further from being mainstream than my document interpretation project.
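To make that concrete, here is a minimal sketch of what “proposing metadata values” might look like, assuming a small labelled sample of documents and an off-the-shelf classifier. The documents, labels, and “department” field are all invented for illustration; this is not my actual project code.

```python
# A minimal sketch of AI-assisted metadata proposal, assuming a labelled
# sample of documents already exists. All names here are illustrative.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# A handful of example documents with a known "department" metadata value.
train_docs = [
    "invoice for server hosting, payment due in 30 days",
    "quarterly financial statement and audit summary",
    "employee onboarding checklist and contract template",
    "holiday entitlement policy for permanent staff",
]
train_labels = ["finance", "finance", "hr", "hr"]

# TF-IDF features plus a simple linear classifier stand in for the "AI".
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(train_docs, train_labels)

# For each new document, *propose* a value with a confidence; a person
# still reviews the proposal rather than trusting it blindly.
for doc in ["signed employment contract for new starter"]:
    proposed = model.predict([doc])[0]
    confidence = model.predict_proba([doc]).max()
    print(f"proposed department: {proposed} (confidence {confidence:.2f})")
```

The point of the sketch is the division of labour: the model proposes, the human disposes, which is exactly the “augmentation” mode of AI use I describe above.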

In contrast, Rahwan & Cebrian are looking much further forward, to when we have handed over trust to AI and left AI components autonomously running business processes. Not only that, but to a point where multiple AI entities are interacting with each other autonomously in larger systems; let’s call them “Systems of AI” for lack of a better name.

Given these “Systems of AI” running complex processes, the authors point out that we face emergent properties that we do not have the ability to anticipate, hence their call for a new discipline to help us untangle this possibility.
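Emergence of this kind is easy to demonstrate even with trivially simple agents. The toy sketch below, with entirely hypothetical rules and numbers, pits two naive pricing “AIs” against each other; neither rule describes a price war, yet together they produce a perpetual sawtooth one.

```python
# Two toy pricing agents, each following a simple local rule. The
# sawtooth "price war" that results is a property of the interaction,
# not of either rule alone. All rules and numbers are invented.

def undercutter(rival_price):
    # Always price just below the rival, down to a cost floor of 50.
    return max(rival_price - 5, 50)

def resetter(rival_price):
    # Also undercuts, but jumps back to full price once margins vanish.
    return 100 if rival_price <= 55 else rival_price - 5

a, b = 100, 100
for step in range(12):
    a = undercutter(b)   # agent A reacts to B's last price
    b = resetter(a)      # agent B reacts to A's new price
    print(f"step {step:2d}: a={a:3d}  b={b:3d}")
```

If two four-line rules can surprise us, it seems reasonable to expect that interacting learning systems will do so far more.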

Doesn’t this happen today?

It can be argued that today we already have automated systems communicating with each other without human intervention. Even my car has multiple computers interacting to keep it running, and modern fly-by-wire aircraft provide another good example. However, these are deterministic systems: given enough compute power we should be able to predict all possible states that such systems could reach, and thus evaluate their potential properties. This is largely the realm of Systems Engineering, and it lies at one end of the spectrum of our understanding of such systems.
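Here is a small sketch of why deterministic systems are, in principle, predictable: with a finite state space and a deterministic transition function, we can simply enumerate every reachable state. The toy cruise-control unit below is invented purely for the example.

```python
# Exhaustive state enumeration for a deterministic system: because the
# transition function is deterministic and the state space finite, a
# breadth-first search finds every state the system can ever reach.
from collections import deque

def transition(state, event):
    """Deterministic next-state function for a toy cruise-control unit."""
    mode, speed = state
    if event == "accelerate" and mode == "on":
        return (mode, min(speed + 10, 70))
    if event == "brake":
        return ("off", 0)
    if event == "resume":
        return ("on", speed)
    return state

def reachable_states(initial, events):
    """Breadth-first enumeration of every reachable state."""
    seen = {initial}
    queue = deque([initial])
    while queue:
        state = queue.popleft()
        for event in events:
            nxt = transition(state, event)
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

print(reachable_states(("off", 0), ["accelerate", "brake", "resume"]))
```

Nine states, fully known in advance; that exhaustive knowability is precisely what we lose in the systems discussed below.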

By the way, I’m ignoring the computational complexity required to precisely model much bigger systems, such as the traffic management of a large city or an automated regional air traffic control, and am instead working in a theoretical world. Realistically we could model such systems in less detail than reality and still work out likely states with reasonable accuracy.

Non-predictable systems

At the other end of the spectrum, where we are largely dealing with real humans, we can look to Systems Thinking approaches such as inquiry and action research, which apply well to non-deterministic systems when we need to understand them in order to try to change things.

By definition we cannot predict all of the possible states and outcomes for such systems. Having said that, disciplines like behavioural economics and social psychology are showing us ways in which we could start to predict how human system components may behave when they interact, and to build models with reasonable degrees of accuracy.

So? What’s the Problem?

Where I echo the concerns of Rahwan & Cebrian is that any complex system comprising multiple, independent AI components may not be deterministic. I’m thinking of some areas of FinTech, where we already use learning algorithms for trading that now operate in such a way that nobody really understands why the AI components’ decisions are as they are.

Taking the ideas about observing systems described in Gerald Weinberg’s book An Introduction to General Systems Thinking, we could view such AI components as black boxes and infer component behaviour from the inputs and outputs we observe. However, as the behaviour of the AI systems changes through learning, as in our FinTech example, our observations would become out of date. Systems Engineering as it stands would not work, because of the changeability of AI systems.
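A small sketch of that decay, with the “AI component” replaced by a stand-in whose hidden behaviour drifts over time as it “learns”; all the numbers are made up to show the effect.

```python
# Weinberg-style black-box observation, and why it decays when the
# observed component keeps learning. The component below is a stand-in:
# a pricing rule whose hidden behaviour drifts with time.

def ai_component(x, day):
    # Hidden behaviour that changes as the system "learns" (drifts).
    return 2.0 * x + 0.05 * day

# Phase 1: observe input/output pairs and fit a simple surrogate model.
observations = [(x, ai_component(x, day=0)) for x in range(1, 6)]
slope = (observations[-1][1] - observations[0][1]) / (5 - 1)

# Phase 2: later on, check the surrogate against fresh behaviour.
for day in (0, 30, 90):
    x = 3
    predicted = slope * x
    actual = ai_component(x, day)
    print(f"day {day:3d}: predicted {predicted:.2f}, actual {actual:.2f}, "
          f"error {abs(actual - predicted):.2f}")
```

The surrogate is perfect on the day it is fitted and steadily wrong thereafter: observation alone gives us a snapshot, not a guarantee.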

However, whether we look at the AI components singly or in combination, they do not currently embody cognition or have agency, so we cannot use any of the human-behaviour predictors mentioned above, nor can we do much at all to appreciate the AI models themselves.

The issue is that we are potentially building systems, or combinations of humans and “Systems of AI”, that defy our current tools’ ability to understand what those systems could be doing, and thus our ability to predict how they will work and what their emergent properties could be.

What could we do?

My suspicion is that this is at the heart of the concerns raised by Elon Musk, Stephen Hawking et al. Human nature cuts in here, and we are seeing calls to regulate AI; but frankly, if we don’t know where it is going, how do we know what to regulate? And this is where we could start to look like heretics and luddites.

Conversely, we could say that since the problem lies between Systems Engineering, where we have a strong tradition, and Systems Thinking, where we are growing our capabilities, we should have it covered anyway; but hopefully I’ve shown above why this may well not be the case.

As such, I think Rahwan & Cebrian are right to call for some sort of academic discipline to be established. Where I differ is that I don’t think it needs to be a new discipline, as they conclude; rather, we should do what Systems Thinking has been doing for nearly a century now and pull together cross-discipline study, which means expanding the remit of what is already there, inter alia by introducing the technology aspects.

Building predictors of “Systems of AI” behaviour will be extremely difficult, but what I suggest we may be able to do is build models of observed behaviour, which we then use to provide regulatory monitoring. In that way we regulate as we go, and build meta-regulatory frameworks. All this may sound a little overcomplicated, but if we increase the variety of what we build, then Ashby’s Law of Requisite Variety has to hold, and so we need correspondingly more complex management controls.
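As a rough illustration of “regulating as we go”, here is a minimal sketch of a behaviour monitor that learns an envelope of normal outputs from observation and escalates anything outside it. The window size, tolerance, and trade figures are all assumptions made up for the example, not a worked-out regulatory framework.

```python
# "Regulate as we go": instead of predicting a System of AI up front,
# keep a rolling model of its observed behaviour and flag departures.
from collections import deque
from statistics import mean, stdev

class BehaviourMonitor:
    def __init__(self, window=50, tolerance=3.0):
        self.history = deque(maxlen=window)  # recent observed outputs
        self.tolerance = tolerance           # std-devs we tolerate

    def observe(self, value):
        """Return True if the value looks like business as usual."""
        if len(self.history) >= 10:
            mu, sigma = mean(self.history), stdev(self.history)
            if sigma > 0 and abs(value - mu) > self.tolerance * sigma:
                self.history.append(value)
                return False  # escalate to a human regulator
        self.history.append(value)
        return True

monitor = BehaviourMonitor()
for trade_size in [100, 102, 98, 101, 99, 103, 97, 100, 102, 99, 5000]:
    if not monitor.observe(trade_size):
        print(f"flagged anomalous behaviour: {trade_size}")
```

The monitor knows nothing about why the system behaves as it does; it only tracks what the system has been observed to do, which is exactly the posture I am suggesting for regulation of systems we cannot fully predict.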

We can do it, but we need to start now. Any takers?
