It’s been a little quiet here on the blog for a while, as I have been busy with OU course TU811, Thinking strategically: systems tools for managing change, which has taken up pretty much every free moment & put paid to any thoughts of flying my little airplane.
Now that all our TU811 work is submitted, the course online forum is pretty quiet except for a small quorum of us discussing the course content retrospectively. Some felt the content was rather superficial: the course presents five mainstream systems tools in parallel with a stream about thinking styles. Personally I felt the breadth was good, as it presented diversity in methods, and I like being able to choose from a palette when I’m problem solving, and then modify & apply the method: “praxis”, as introduced in the previous TU812 module.
One tool presented at the end of the module was Werner Ulrich’s Critical Systems Heuristics (CSH), which at face value looks a bit like a checklist presented as a matrix. What it actually does is make you think comprehensively about a situation from four different perspectives, which I paraphrase below:
- why is it that we’re doing whatever it is we want to do?
- where is the centre of control for this, in other words if we try & do it is there a barrier stopping us?
- who actually understands what it is we’re trying to do & can they help us?
- who could be affected badly when we do what we’re trying to do?
A lot of this is used in a social context, but I got thinking about a couple of electronics problems from my past and how the decision style represented by CSH actually played a part, even without us knowing about it…
We’re used these days to buying a computer, taking it home & simply connecting it to a high-speed network, usually wi-fi as delivered by our broadband contract. Back in the 1980s this “Ethernet” network thing was just emerging from Xerox’s Palo Alto Research Center, and we used “serial” wires to link terminals to computers, or even computers to computers (I was amazed on a trip to computer manufacturer CCI in California to see that they had all of their computers linked together using serial cables). These links were slow, and implemented using pretty simple electrical cable. When I went to work for Systime in 1986, they had a huge serial switch, so from my desk I could connect to various computers, albeit one at a time from my terminal. Very high-tech.
One problem with simple electrical cable carrying slow-speed data signals is that it is very easily affected by external electrical noise. We still get some of that today: when you are talking to someone on a telephone and their Blackberry is sitting close by on the desk, you can hear the Blackberry doing its data transfers, and it’s very annoying. In a “noisy” environment it was definitely the case that what was sent from the computer wasn’t what was received at the other end.
Back in 1982, at an ICL development centre, some colleagues on a different project to the one I was working on had a bad noise problem. Being a group of software developers, they decided to fix the noise problem using software: they simply read the data value being sent down the wire three times and, if two or three of the readings matched, used that value. Ironically it was an apprentice from the manufacturing plant up the road who saw what they were doing, put a meter on the cable & showed that it wasn’t earthed correctly. Using a correctly earthed cable stopped all of the faults.
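I never saw their code, but a 2-of-3 majority vote of that sort is a simple thing to sketch. Here it is in Python for readability; the function and parameter names are my invention, with `read_byte` standing in for whatever routine actually fetched a value off the serial line:

```python
def read_with_vote(read_byte):
    """Read the same value three times and take a 2-of-3 majority vote.

    `read_byte` is a hypothetical callable that returns one value
    from the (noisy) serial line each time it is called.
    """
    a, b, c = read_byte(), read_byte(), read_byte()
    if a == b or a == c:
        return a          # at least two reads agree on `a`
    if b == c:
        return b          # `a` was the corrupted read
    return None           # no two reads agreed: signal a read failure
```

Of course, as the apprentice showed, this papers over the symptom rather than fixing the cause, and it triples the traffic on the wire into the bargain.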
It was towards the end of the ’80s when I saw a similar problem, this time at an installation at a builders’ yard near Oxford. A serial cable had been strung over to a shed from where they processed orders for sand, grit & the like. This time the problem was almost unearthly: the signal coming down the wire kept repeating, in digital form, “mknoona….mknoona…”. And yes, in this case the cable was correctly earthed, so the factor causing the problem was something external & significant. We ended up putting in a strongly shielded cable.
So what do these two problems have to do with Critical Systems Heuristics, if anything? In both cases what we wanted to do was stop the spurious signals coming down the wire, and it was easy to measure success: either the extracurricular rubbish stopped or it didn’t.
It was in the second step, looking at the controls, where the first case failed & the second worked well. At the development labs the team simply looked at themselves as the source of control, and of knowledge, and so coded another fault into the system. They didn’t look at all of the sources of control (what could be done to stop the noise ingress) or knowledge (who could help with noise ingress), which is why a 17-year-old apprentice showed them up. Furthermore, no one thought about who would be impacted by the software change: in years to come someone would find their daft “fix” and have to puzzle it out, or worse still remove it as a common-sense action and then find that things crashed around them.
In the builders’ yard case, it was quickly decided that the external noise just couldn’t be controlled at source, and therefore the decision was taken to employ knowledge and institute a fix through shielding. I still have my suspicion that the noise was coming from power lines nearby, but we could never prove it. Nothing was altered other than the cable & no-one was affected.
Back in May I wrote about making System Design Systemic and concluded that I had taken a soft technique from systems and applied it to something much harder. I raise this because that is what I am doing here again. These are old examples, but they show that we can take systems thinking ideas & methodologies and apply them to our harder technology world. And if it helps to make what we do more effective, and more consistent, then is that a bad thing?