
Was the Terminator so wrong after all?

I get called all sorts of things by friends, colleagues, fellow OU students and so on, but a common description applied to me is “random”. Personally I prefer to think of it not so much as “being random” as making connections between the things I come across. It’s just that sometimes I don’t make all of the connections explicit, which is why some of my thoughts may be hard to follow.

Anyway, I saw something recently that, for me, clicked into place and frankly worried me a lot. So, below is my attempt to bring to bear the connections and hopefully elaborate on what concerned me so much. Here goes…

Thought # 1

I must start by admitting to having made a bit of a mistake. For several years I have disregarded friends & relatives who have, to my mind, been overzealous in their protestations that social networking is insecure and betrays their privacy. My argument has been that if you (a) want to be an early adopter, and (b) are happy anyway to share whatever you share on these sites, then you have only yourself to blame if your privacy is breached.

Over time, Facebook, Twitter et al have moved from being the preserve of our young children to being pervasive. As I drive to work in the morning I pass advertisement hoardings carrying Twitter & Facebook monikers, and the BBC news reporters proudly have their Twitter names displayed. Familiarity has made these things acceptable to us all, and now we entrust our thoughts, photographs and comments to them without a second thought.

Thought # 2

From time to time I read about developments from Intel (other CPU manufacturers are available!) on the research efforts to squeeze multiple CPU cores onto a wafer and deploy them for higher and higher levels of compute power. The efforts, called Intel Tera-Scale, are steadily making progress. Parallel computing is nothing new; I wrote a research paper as an undergraduate back in 1981 on methods for parallel computing, but then it was all in the domain of large corporations and academic research. Now you can watch videos of Intel researchers doing mundane things with 80-core processors: porting Linux, running graphical networking tools. How long, I wonder, before I upgrade my PC to a 64-core machine?

Thought # 3

A piece in SmartPlanet called Meet your new boss: a machine. In essence, a Gartner analyst called Nigel Rayner was making some rather off-the-wall claims. Now I’ve met Nigel a couple of times, and he’s a good guy, so his claims are there more to provoke thought; but he is saying that we are now developing decision-rule capabilities in software that allow us to take multiple inputs, crash them together and evaluate complex rules about them, so that the software can take very complex decisions quickly. [Notice I say “take” not “make” – the rules are still followed automatically, so we’re not at SkyNet yet.]

The problem is that the software needs a mass of compute power, which many years ago (when I was an undergraduate writing my paper) would have been multimillion dollar stuff. Nigel’s view is that we’re making hardware cheaper and commoditising the parallel processing software which is starting to put this sort of complex decision-making not only into the realm of the possible, but also into the realm of the affordable.
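To make the idea concrete, here is a minimal sketch of the kind of decision-rule software described above. Everything in it (the rules, the thresholds, the facts) is invented for illustration; it simply shows how software can “take” a decision by mechanically applying rules to its inputs, with no thinking involved.

```python
# A toy decision-rules engine: the software "takes" decisions by
# mechanically applying rules to whatever facts it is given.

def evaluate(rules, facts):
    """Return the action of every rule whose condition the facts satisfy."""
    return [action for condition, action in rules if condition(facts)]

# Hypothetical business rules; thresholds invented for illustration.
rules = [
    (lambda f: f["stock"] < f["reorder_level"], "raise purchase order"),
    (lambda f: f["demand_trend"] > 1.2, "increase reorder level"),
    (lambda f: f["supplier_delay_days"] > 14, "switch to backup supplier"),
]

facts = {"stock": 40, "reorder_level": 100, "demand_trend": 1.5,
         "supplier_delay_days": 3}

print(evaluate(rules, facts))
```

The interesting part is not the logic, which is trivial, but the scale: pile up enough rules and enough inputs and you need exactly the kind of cheap parallel compute power that Tera-Scale promises.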

First “Ahhh” Moment

But our friends at Intel research (and frankly other places too) are developing multiprocessor silicon and parallel processing software to drive into the commodity market. It’s now not whimsy to ask “When can I have a 64/128/256-core PC?”. This stuff is coming and we’re seeing the large software companies working on applications around complex business rules.

Thought # 4

There is a new trend which I’m finding disturbing. Today I was looking at an innocuous web site, the San Jose Mercury News tech pages (yes, laugh-a-minute stuff, but very useful in my line of work), when I noticed that it had pulled through my Facebook details, inviting me to “like” the page. My concern was that I was at work, on a PC I only use on Facebook very occasionally, yet SJMN had found what it needed on my PC and then read my Facebook profile. Thank goodness it hadn’t found my LinkedIn profile.

The “Arghhh” Moment

But what if it had? What if the SJMN, or any other online journal, decided to look at my LinkedIn connections, and the article I was looking at and then automatically sent them a link to it on the grounds that if I was interested, then they might be?

Or what if it had checked my connections for a certain demographic, and then intelligently forwarded the article to the contacts it felt were worthy of the content, creating a set of “cyber haves” and “cyber have-nots”?

How about if it isn’t a media site but a product company? And the product company is automatically trawling my social network to see who I know who would be useful for them to approach, based on decisions from a complex rules engine running on the commoditised multi-core processors that are now available.

Or better still, how about it measures how influential I am from how well connected I am, and then uses some complex business rules, again enacted via large-scale multi-core parallel processing, to decide how worthy I am of seeing the site content? Now I’m the one being categorised as “cyber-worthy” or “cyber-unworthy”.
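For what it’s worth, the gatekeeping I’m imagining needs nothing clever at all. The sketch below is pure conjecture on my part (the scoring weights and tiers are made up), but it shows how trivially a site could grade visitors on their connections:

```python
# Hypothetical "cyber-worthiness" gate: grade a visitor purely on how
# well connected they appear to be, then decide what they may see.
# Weights and thresholds are invented for illustration.

def influence_score(connections, endorsements):
    # Endorsements from others weighted more heavily than raw connections.
    return connections * 1.0 + endorsements * 5.0

def content_tier(score):
    if score >= 500:
        return "full access"   # cyber-worthy
    if score >= 100:
        return "standard"
    return "restricted"        # cyber-unworthy

print(content_tier(influence_score(connections=300, endorsements=50)))
```

Ten lines of code; the hard part is the data about me, and I’ve already handed that over.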

To be honest, I’m still not too worried about this until I get to the last point. Some of that capability will be with us soon and some, as Nigel Rayner says, will take longer. In one sense, I’m not too fazed by this march of progress, as long as the machines aren’t “thinking” then there are benefits to be had.

What scares the wits out of me is the social change that this brings about. I’m already modifying my behaviour around software. I used to turn on the TV or radio and they came on immediately, but now in the digital world they need time to boot up and initialise. I now know not to press buttons on the remote control until the TV has had time to initialise its various functions; my behaviour is influenced by the software in the TV.

But if I get into the wider implications of software making decisions based around my long-term online lifestyle, then I’m going to start modifying how I work with social networking etc so that I change my ratings. And furthermore the few who stay away from this stuff will rapidly be penalised.

We already have skills-based accreditations around technical products. I can be a very clever Cisco implementer with vast experience, but if I haven’t passed the exams then my value on the jobs market is reduced. What we are potentially talking about here is that my value could also be reduced if I don’t know, or worse still don’t have recommendations from, the right people.

So, the social network privacy issues become more important as we go forward. I can’t afford to be public, but soon I could end up not being able to afford not to be. I’m not convinced it’s a good thing….


Why did you do that?

One part of my day to day life is that I research into areas and try to make sense of them, often against the clock. I’m either good at it, or the fool, because I’m frequently asked to do it.

To be honest, “making sense of them” actually means deciding whether what I’m looking at is suitable and beneficial, be it a technology or solution that my organisation could derive value from, or, in the case of the OU module I’m currently taking, how I can usefully apply what I am learning.

The thing about trying to make sense of something of interest is that it’s not just about understanding it and how useful it could be; it’s about being able to articulate that to others. And it’s only when you try to communicate your understanding that you realise how little you actually do understand.

It’s interesting when presenting to others to see the various prejudices that are carried along with people’s thinking, generally made visible in the challenges that come back. Geoffrey Vickers came up with the idea of the Appreciative System, which really applies to understanding why people think the way they do; that in turn means looking at their past and at the events and values that have shaped that view (their “Appreciative Setting”). So run back through the history and see the key events and learning that someone, or a group, has been through, and you can start to understand why they are thinking as they are, and thus why you’re getting the specific challenges.

I’d suggest the same methodology is well applied to some artefacts as well. When I analyse a solution or technology, I often find things that at first glance leave me asking “Why on earth is it like that?”. A good example here is a project I’m working on around a major ERP deployment. One option we have is very mature, with a large market presence running into hundreds of thousands of sites worldwide. Look under the covers at the way it works in some places, especially the integration between modules, and you find a regular muddle that makes little or no sense at first glance; indeed it almost suggests a lack of architectural governance. And speaking of the architecture, it also looks very strange in these days of objects and service-oriented architecture.

Of course, when I tried to present this to colleagues I got the same responses, and those unanswered challenges leave a feeling that doesn’t help when trying to present a benefits case. You end up with a senior exec saying “How can it be good for my business when everybody around you has these questions (that I don’t actually understand) that you can’t answer?”

Now apply something like an appreciative system and run back through the history: the integrations didn’t all come about at once; they appeared over time, and in some cases represent the state of the art at that moment. Secondly, they were probably developed in phases, so some represent must-haves that had to be delivered in a first wave as quickly as possible, while later phases probably had the benefit of more time. And when you have a global base of that size, the cost and risk of rewriting older code onto a newer technology just doesn’t make sense. These are conjectures on my part initially, but I can now ask questions to confirm or refute my thinking.

I had a similar experience looking recently at the C++ language, prompted by some technical developments being done at Microsoft that could impact our development team. As a BCPL man myself, C++ looks way over the top: types on variables? Objects? The referential model? Yuk! But then I found a paper written by Bjarne Stroustrup about the background and the design decisions made when evolving C++, and a lot became much clearer, as did why you would want to use C++ in some cases and not in others.

In both cases, knowing why something has been delivered the way it has helps me to put it into context. No longer am I looking at face value and making decisions on that alone, but by tracking the past decisions I get an understanding of what underlies how what I’m looking at came to be, and that way I can make more sense of it.

The problem then comes in making compromises: as a strategist I want to reduce complexity and go for purity wherever possible. Yet in understanding as much of the “Why?” as I can, I then understand where and why I need to compromise on that. Fundamentally, I move the messy problem from the technical domain towards the business domain, which can only be a good thing.

Vendors – a word from a poacher turned gamekeeper

Nearly five years ago I gave up working for vendors, having been on the supply side of the IT chain since leaving university in 1982, and moved to the “dark side” by joining an end-user organisation. To be honest, it wasn’t a difficult transition but it is also a constant learning experience.

In the past week the differences between the supply-side vendor view of the world and the demand-side end-user world have been brought sharply into focus. As part of a significant project I’m involved in, we have ruled out one option, and I have had several conversations with a major software organisation and some of its business partners to explain the decision.

I’m not the first person to comment on this; James Gardner(*) wrote a couple of articles on his blog about his view of the vendor approach, and especially what he called “proof of value”. A pity more vendors haven’t read them.

For me, the biggest issue that the vendors I deal with don’t seem able to get to grips with is how end-user projects are costed. Before we go anywhere on a project, the rule of thumb we apply is that the vendor costs will be one-third of the total project cost, and that excludes cost of ownership. Yep, that’s right: you sell me a £250k licence and I’m looking at a project of about £750k, and as we work through the detail to reach a business case, the rule of thumb tends to be on the low side, depending on the amount of transformation around the project.
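A quick worked version of that rule of thumb, using the illustrative £250k figure (the multiplier is just my rule of thumb, not a quote), also shows why a generous licence discount barely moves the total:

```python
# Rule of thumb: vendor costs are roughly one-third of the total
# project cost (cost of ownership excluded). Figures are illustrative.

def total_project_cost(licence_cost, multiplier=3):
    return licence_cost * multiplier

licence = 250_000                      # the £250k licence example
total = total_project_cost(licence)    # roughly a £750k project
discount = 0.20 * licence              # a generous 20% licence gesture
print(f"total £{total:,}, a 20% discount saves {discount / total:.1%} of it")
```

A £50k gesture on the licence is under 7% of the whole project; the implementation effort swallows the rest.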

So what is it I’m trying to articulate here? Well, I’d say there are a few points I tried to get over in my conversations this week:

  1. Reducing the product or licence cost is always good, but it rarely makes or breaks the total project: As we move towards product selection, we frequently get into a price negotiation. That is natural, but understand that while a 10% or 20% gesture is welcome, it is only a small part of the total cost. If it’s going to take 500 man-days to implement your product, then you can do what you like with the licence; it’s not going to reduce the implementation cost. And that leads to the next point…
  2. Implementation does cost: My organisation, like many others, has to put implementation and other services into the financial reports. I often hear talk of “funny money”; in one sense it is, but as far as our accounting team is concerned it’s not funny at all, it’s quite real.
  3. It’s not the base cost, it’s the options: If I’ve learned anything, it is that the little things around the project add up, just like putting the options on your new car. Powerful servers are not that expensive, relatively speaking, but start thinking about how many you need to provide a highly available service and the costs rack up… and we do need to provide a highly available service these days; in fact we’re now having to specify 24×7 on services that previously wouldn’t have needed that level of SLA.
  4. Proof of value is crucial: Given the size of some projects, we have to justify them to our governance organisations. If I am committing to a large spend on licences and people, then I need to have mitigated the risk. The larger the project, the larger the risk, and I cannot offset any potential risk against slideware. I’m not saying that I need a vendor to jump through hoops of fire here, just to understand the size of the project and thus the level of risk their product represents as a proportion of that project.

To be honest I have some personal beefs as well, but I’ll save them for another time.

The conversations I had this week were all based around the four items above, taken in toto. Jointly (and yes, we do like to work in partnership here) we couldn’t prove the value of one proposition, which is why we had to discount it. In every discussion I had, the vendors kept going back to the cost elements rather than looking at the whole.

Guys, I’m here to discuss this further if you want to…..

(*) James also wrote about Crowd Sourcing Strategic decisions. I followed this up with him directly & he was extremely generous with his time & experience, a great example of how we should treat each other in business. I am very grateful to him for this.