Thursday 20 January 2011

When 'buy before build' goes bad

It's pretty much a universal principle in a large enterprise: if you want something new, first you try to reuse, then you try to buy something, and only as a last resort do you build it yourself.

Sound enough, right? It ought to be cheaper to buy something from a specialist vendor than work it out and build it from scratch yourself. They get the economies of scale, you get a reduced price, everyone's a winner.

The only trouble is, sometimes buy before build sucks.

The problem is that when you buy, you're usually making a large up-front financial commitment. Not only that, but we often buy something before we've had a chance to really work out what it is we need. So we end up buying the uber-product - something that delivers our every whim and desire.

Very often, when it comes down to it, we buy 100%, use 20%, and wind up bespoking the living daylights out of the rest. The Vincent van Gogh of our vision ends up looking more like an H. R. Giger Alien. It costs as much to customise as it would have cost to build, and far from being a virtual utopia, it becomes the treacle holding us back.

So, how do you make sure this doesn't happen to you? Simple: Only buy what you're absolutely sure you need.

How do you know what you really need? Simple: Build it and see how your users use it, rework, repeat.

Sometimes life has its little ironies...

Tuesday 16 November 2010

Werner Vogels on Amazon.com, AWS and SOA

I am way out of date here, but I've just stumbled across an interview between Information Week and Amazon CTO Werner Vogels. The interview is from way back in 2008, but many of the things discussed are just as valid (and in some senses revolutionary) now as they were then.

Touching on a number of interesting bits of information about Amazon's architecture, the interview talks about how Amazon came to become the cloud computing 'thought leader' they are today.

Amazon have been doing SOA highly successfully for nearly a decade, as demonstrated by their dominant position in internet retail (2009 revenues topped $24 billion, with a market cap of over $70 billion at the time of writing).

What particularly caught my eye about this article is how it aligns with my own views about what makes good SOA tick:

"It's not just an architectural model, it's also organizational. Each service has a team associated with it that takes the reliability of that service and is responsible for the innovation of that service. So if you're the team that's responsible for that Listmania widget, then it's your task to innovate and make that one better."
To me, making SOA work is more about people and organisation structures than it is about technology. Build the right teams and the technology will come. Focus on the technology and your organisation will just hold you back.

From there, Werner goes on to talk about how they evolved the AWS cloud computing platform out of their own need for highly resilient distributed infrastructure, and of course how they exposed this too as services. To put this in perspective, this 'spin off' is projected to earn Amazon over half a billion dollars in 2010. Not bad for a by-product.

Read the article. Even 2 years late, it's well worth 15 minutes of your time.

Friday 11 June 2010

IBM UK Impact 2010

On Tuesday I dropped in on IBM's UK Impact 2010 conference in London. UK Impact is effectively a pocket-sized version of the 5-day, 6000 attendee Las Vegas Impact event held last month.

The conference was well organised, and held in the (rather swanky) Grange Hotel St Paul's.

This was a one-day event, with the morning being single track, and the afternoon multi-track.

As you'd expect given IBM's recent 'Smart' branding (imitation is the highest form of flattery), the keynote was a startlingly on-message presentation entitled "How your Organization Can Work Smarter".

There were some interesting gems in there: did you know, for example, that certain electricity companies in the States give customers a circa $300 annual rebate in return for handing over the keys to their air conditioning? In times of peak demand (but not when it's health-threateningly hot), rather than power up another turbine, they'll start a rolling programme of AC shut-downs to reduce demand. That is smart.

A fun set of statistics for you: among the businesses run by the top 500 CIOs, compared with other businesses, there is:

  • double the usage of process modelling and automation technology.
  • 3.75 times greater usage of collaborative workspaces.
  • 9 times greater usage of SOA.

... of course, how you define 'top' CIOs or 'greater usage of SOA' is potentially subject to interpretation!

A key message, which I really buy into, is that 'excellence is a moving target'. It's easy to be complacent when you're at the top of your game. What (arguably) separates the likes of Google and Apple from Microsoft is their ability to know what the customer wants before the customer knows it themselves. Doing this, obviously, requires an ability to innovate at speed and change on a dime.

To me, the only way to achieve this is to keep everything as simple as possible at all times. If your IT is so complicated that your business can't wrap their heads around it, is it any wonder you struggle to keep up with their demands? Sometimes the best investment you can make is one that leaves you with less than you started with, particularly if it makes your IT look more like the business it serves.

Thursday 20 August 2009

Google Wave: Panacea or daydream?

I finally got around to watching the Google Wave developer preview video last night. I'm a great fan of any tool that helps people work better together. If you've not heard of Wave, or not had time to investigate, it feels to me like a hybrid of e-mail, instant messaging, Wikis and SubEthaEdit. Users can create new waves (documents/conversations/communications), make them available to others, and work on them together.

Wave manages to (surprisingly elegantly) bridge the gap between e-mail, instant messaging and wikis. When you edit a wave, the other person can see your changes as you make them, one character at a time. If they aren't online, they'll see your wave waiting for them the next time they come back. This is pretty difficult to describe, but beautiful to watch, and it scales. Watch the video to see what I mean, but suffice to say something which starts off feeling like an e-mail can transparently become a discussion, and the reverse is just as true.

There's no doubt in my mind that the technology involved is amazing, but from my perspective the most interesting thing about the video is that it makes the scale of Google's ambition clear. Google are pretty openly hinting that this thing could become a rival to, or even a replacement for, e-mail, IM, Wikis and a whole bunch of other collaboration approaches, with a single unified solution. Read that sentence again. A replacement for e-mail; a protocol and metaphor for communication that's been around in more or less its present form since 1982. That's 27 years - seven years before Tim Berners-Lee wrote his first proposal outlining the workings of the World Wide Web. Google are either seriously confident, or seriously arrogant. Or both.

But. They might just succeed. Unlike many other Web 2.0 services such as Twitter, Google are (at least outwardly) trying hard to ensure that Wave doesn't become a walled garden. Even services such as Google Sites, which offer integration with the outside world using standard protocols (in the case of Sites, through HTML linking and RSS), don't provide the level of integration seen in the standardised protocols that support e-mail, IRC and other 'old school' services.

So, what makes Wave different? Google have built, and more importantly released to the public, a protocol that allows any old Tom, Dick or Harry to create and implement a Wave server. Moreover, because the protocol is not trivial, Google have open sourced reference implementations of it, and in the video they suggest that they intend to open source the majority of the code-base of Google Wave itself, so that competitors can download, tweak and run their own competing Wave services. These services will all federate, making the experience broadly seamless regardless of which provider you choose to use.

Like e-mail, USENET and IRC, information is only sent to the servers supporting users actively involved in the wave, opening the possibility of the (perhaps justifiably) paranoid running their own organisational Wave servers to ensure that content only leaves the corporate network when it is actively shared with a third party. This approach potentially eliminates a major barrier to adoption in the commercial world. Lastly, Wave provides support for Robots (intelligent agents) that can accomplish a multitude of tasks. Google demonstrated Robots that did things like integrating with Google's Blogger service, and it seems clear this technology could be extended to support integration with existing communication mechanisms - in particular the big threat: e-mail.
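
To make the federation model a little more concrete, here's a purely illustrative sketch in Python (this is not the real Wave protocol or its API - all names here are invented): given a wave's participants, an update only needs to be delivered to the servers that actually host one of them.

    # Illustrative only: work out which home servers should receive an update
    # to a wave, so content never reaches providers with no participant involved.
    from collections import defaultdict

    def servers_for_wave(participants):
        """Group participant addresses by their home server (the domain part)."""
        by_server = defaultdict(list)
        for address in participants:
            user, _, domain = address.partition("@")
            by_server[domain].append(user)
        return dict(by_server)

    participants = ["alice@example.com", "bob@example.com", "carol@acme-corp.example"]
    print(servers_for_wave(participants))
    # {'example.com': ['alice', 'bob'], 'acme-corp.example': ['carol']}

Under this model, a company's own Wave server only ever sees a wave once one of its users is added as a participant - which is exactly the property that makes the corporate deployment story plausible.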

How this all pans out remains to be seen. Google are not an academic organisation, and they must deliver value for their shareholders, but it's fair to say that they have a history of taking relatively large risks by taking on large scale projects with no obvious revenue model that would scare your average VC witless. Despite this, they're still here, and still profitable. I think it's reasonable to say that there's an excellent chance that Wave the product will be a success. I'm much more sceptical about Wave the global infrastructure, due in part to the complexity of the technology and consequent barriers to entry for competitors, but mainly due to something much more human: Inertia.

Regardless of the success of the Wave platform, the debate Wave is likely to stimulate can only be a good thing. The Wave preview opens its doors on September 30 2009 to the next 100,000 users. I have my fingers crossed.

Friday 20 March 2009

Lies, damned lies and statistics

A client I'm working with at the moment has been doing some work around calibrating an estimating model. The model is based on allocating each deliverable passing through the project a number of 'points', based on its perceived complexity, and then creating a weighted estimate from these points. We decided to calibrate the model using a more detailed estimate of a random sample of these deliverables.

A quick review of these figures yesterday revealed a strong correlation (around 0.85) between the number of points allocated to an item and its resulting estimate. "Hurrah!", we said. Then we said: "This model is useful, and can give us a reasonable estimate of any given subset of the overall project - and therefore, in particular, of each of our planned iterations." Things were good. Then we twigged.

When we ran the calibration workshop, we asked people to estimate each deliverable. When we described these deliverables, we gave a brief description of the scope of the deliverable, and mentioned the number of points allocated to the deliverable. Nothing wrong with that, right? Well, we decided to do a little experiment, and re-ran the same test, with the same people, but a different set of deliverables. This time, we didn't tell them the number of points allocated to each deliverable.

The correlation was now 0.19. By most definitions, this means there is no correlation whatsoever. Our model is broken.
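
For anyone who wants to run the same sanity check on their own estimating data, here's a minimal sketch of the calculation we were doing - the figures below are made up for illustration, not our client's numbers.

    # Pearson correlation between complexity points and detailed estimates.
    from math import sqrt

    def pearson(xs, ys):
        """Pearson correlation coefficient of two equal-length sequences."""
        n = len(xs)
        mean_x, mean_y = sum(xs) / n, sum(ys) / n
        cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
        sd_x = sqrt(sum((x - mean_x) ** 2 for x in xs))
        sd_y = sqrt(sum((y - mean_y) ** 2 for y in ys))
        return cov / (sd_x * sd_y)

    # Hypothetical sample: points shown to the estimators vs. detailed estimates (days)
    points    = [1, 2, 2, 3, 5, 8, 8, 13]
    estimates = [2, 5, 4, 7, 11, 15, 18, 26]

    print(round(pearson(points, estimates), 2))  # close to 1.0 when estimates track the points

Running exactly this calculation on the blind sample is what gave us our 0.19.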

So, what's going on there? I think (and I'm no statistician) that we're seeing human nature at work. If you tell people something is twice as hard as something else, they're inclined to estimate it'll take roughly twice as long. If you tell them something is three times as hard, the estimate will be three times as long. When we estimate, we don't know we're doing this, as our gut (rather than our head) is doing the heavy lifting - it's hard to apply a lot of intellectual muscle to something that's ill-defined. Gut bases its decision on whatever information is easily available; in this case, someone just told us this thing is 'hard, twice as hard as the last thing', so the number we come up with will start off roughly twice as high. If we get some information that makes us believe it's simpler than that, we might try to adjust Gut's estimate downwards a little, but we'll likely never estimate it totally objectively after being told initially that it's 'hard'.

Thankfully, for us, this hasn't caused a problem. We're mainly interested in the overall averages rather than the specific estimates, so we can still predict the overall length of the project reasonably accurately, even if we're a little out on the fine details. The lesson here is clear though: be careful how much trust you put in these kinds of estimating exercise. They may not be as scientific or accurate as you first believe.

Wednesday 22 October 2008

Can SOA governance technology be distracting?

In a recent post, David Linthicum asks "Can SOA governance technology be distracting?". His answer is yes, and he offers the following sound advice:

First, only purchase SOA governance technology, if it's indeed needed, after you have a complete semantic-, service-, and process-level understanding of the problem domain. Never before.

Amen to that. In my opinion, for all but the most mature and involved environments, the procurement of an SOA governance platform should be well down the list of priorities. I'd add to David's list of things that need to be 'worked out' before you get that cheque book out:

  • What is your vision for governance itself? Do you want to adopt an 'iron fist' or a 'hand in glove' approach? Is your registry going to be a mechanism for governing or a side effect of it?
  • Who's going to populate it? Have you got your analysis, design and development processes sufficiently honed that your repository isn't going to turn into a dumping ground of candidate services?
  • Have you actually got any services live yet? Governance is a whole lifecycle thing. Until you've worked out how you're going to deploy and manage services in the production environment and demonstrated that this works, how do you know what capabilities your governance platform needs to offer?
  • Most importantly: What are the use cases for your governance platform? Can you demonstrate that these use cases can't be addressed using your existing tooling (even if that's Microsoft Excel)? Be honest with yourself about when you're likely to implement these use cases. If the answer is further than one year away, then for the time being you might be wise to forget them. There is little point in spending good money on runtime governance or automated deployment technology when in a year's time you'll be able to get more for less.

A lot of projects using SOA governance tools at the moment treat them as glorified databases. If that's where you're at, consider using something less specialised that allows you to evolve your ideas, understanding and schema before you commit to something that will make this innovation harder and more time consuming. When you've spent six to twelve months getting your ducks in a row, so to speak, you'll be in a much better place to make decisions.
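
If your needs really are 'glorified database', something as humble as a shared spreadsheet or CSV file will do the job while your schema settles down. Here's a minimal sketch of what I mean - the column names are invented for illustration, not a recommended schema:

    # A candidate-service register kept in a plain CSV file - easy to change,
    # easy to export to/from Excel, and no tooling commitment to unwind later.
    import csv

    def services_in_state(path, state):
        """Return the rows whose 'state' column matches the given lifecycle state."""
        with open(path, newline="") as f:
            return [row for row in csv.DictReader(f) if row["state"] == state]

    # e.g. which candidate services have actually made it into production?
    # for svc in services_in_state("service_register.csv", "live"):
    #     print(svc["name"], svc["owner"], svc["version"])

When the columns stop changing every week, that's a reasonable sign you understand your own use cases well enough to go shopping.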

I'd really welcome stories from people about how they've implemented governance platforms in the past, whether they're informal (e.g. Wikis, bugtrackers, spreadsheets) or formal (e.g. IBM WebSphere Registry and Repository, CentraSite from Software AG): What did you implement? What worked? What didn't? What would you do differently next time?

Sunday 10 August 2008

Choosing an Open Source 'ESB' technology

I'm currently working to find the best Enterprise Service Bus for a project. Nothing unusual there - it's something I've done a few times before. Except this time the requirements are a little more unusual than 'which kinds of transformation does the tool support?'.

Functional requirements:

  1. The ESB must fit with Smart421's SOA patterns.
  2. The ESB need not be a one-box solution. We're happy to mix and match tools around the outside. BPEL in particular is not necessarily a mandatory part of the core product.

On the face of it, not too tricky. In practice, maybe a little harder: in our view, the ESB isn't a piece of software in the first place - it's a collection of (continuously changing) standards and policies that govern the interactions taking place across the essentially empty void between two services. So we're looking for something that's compatible with that view, rather than something that wants to sit at the centre of the SOA universe (more on this later, perhaps; there's a rough sketch of the idea just after the list below). Let's look at the non-functionals:

  1. The ESB must be Open Source, be based on an Open Source product, or there must be an Open Source version available.
  2. The ESB must have an established presence in the market - the latest and greatest features aren't enough. We're looking for something that has some industry buy-in.
  3. It must be possible to buy in support for the product from a third party, should we need it.
  4. The ESB must support lightweight development. We must be convinced that easy things are easy to achieve, and hard things are proportionally (and not disproportionately) harder.
  5. The ESB must offer a non-functional envelope that allows it to support a large-scale enterprise application, preferably without restricting us to vertical scaling.
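
To illustrate what I mean by 'a collection of standards and policies' rather than a product, here's a purely hypothetical sketch - every field name and value is invented - of the kind of interaction policy that might sit in the 'empty void' between two services:

    # Illustrative only: the 'bus' expressed as an agreed policy (data), not middleware.
    interaction_policy = {
        "transport": ["https", "jms"],            # permitted transports
        "message_format": "soap-1.1",             # agreed wire format
        "schema_validation": True,                # validate messages at the boundary
        "security": {"authentication": "ws-security", "transport_encryption": "tls"},
        "versioning": "backwards-compatible minor versions only",
        "sla": {"response_ms": 2000, "availability": "99.9%"},
    }

    def conforms(request, policy=interaction_policy):
        """Very rough gate: does a proposed interaction honour the agreed policy?"""
        return (request["transport"] in policy["transport"]
                and request["message_format"] == policy["message_format"])

Whatever product we end up with needs to help us enforce and evolve agreements like this, rather than demand that every interaction flows through it.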

I'm intending to follow a fairly standard product procurement process to help select the technology, so the next step is to set some more formal selection criteria and identify some candidate products to form the ESB core.

At the moment, the obvious candidates for me are:

I'll keep you posted as I progress... Comments/suggestions/vitriol welcome!