Thursday, 20 August 2009

Google Wave: Panacea or daydream?

I finally got around to watching the Google Wave developer preview video last night. I'm a great fan of any tool that helps people work better together. If you've not heard of Wave, or not had time to investigate, it feels to me like a hybrid of e-mail, instant messaging, wikis and SubEthaEdit. Users can create new waves (documents/conversations/communications), make them available to others, and work on them together. Wave manages to bridge the gap between e-mail, instant messaging and wikis surprisingly elegantly. When you edit a wave, the other participants can see your changes as you make them, one character at a time. If they aren't online, they'll see your updated wave waiting for them the next time they log in. This is pretty difficult to describe, but beautiful to watch, and it scales. Watch the video to see what I mean, but suffice it to say that something which starts off feeling like an e-mail can transparently become a discussion, and the reverse is just as true.

There's no doubt in my mind that the technology involved is amazing, but from my perspective, the most interesting thing about the video is that it makes the scale of Google's ambition clear. Google are pretty openly hinting that this thing could become a rival to, or even a replacement for, e-mail, IM, wikis and a whole bunch of other collaboration approaches, all in a single unified solution. Read that sentence again. A replacement for e-mail: a protocol and metaphor for communication that's been around in more or less its present form since 1982. That's 27 years, and seven years before Tim Berners-Lee wrote his first proposal outlining the workings of the World Wide Web. Google are either seriously confident, or seriously arrogant. Or both.

But. They might just succeed. Unlike many other Web 2.0 services such as Twitter, Google are (at least outwardly) trying hard to ensure that Wave doesn't become a walled garden. Even services such as Google Sites, which offer integration with the outside world using standard protocols (in the case of Sites, through HTML linking and RSS), don't provide the level of integration seen in the standardised protocols that underpin e-mail, IRC and other 'old school' services.

So, what makes Wave different? Google have built, and more importantly released to the public, a protocol that allows any old Tom, Dick or Harry to implement and run a Wave server. Moreover, because the protocol is not trivial, Google have open sourced reference implementations, and in the video they suggest they intend to open source the majority of the code-base of Google Wave itself, so that competitors can download, tweak and run their own competing Wave services. These services will all federate, making the experience broadly seamless regardless of which provider you choose. As with e-mail, USENET and IRC, information is only sent to the servers supporting users actively involved in a wave, opening up the possibility of the (perhaps justifiably) paranoid running their own organisational Wave servers to ensure that content only leaves the corporate network when it is actively shared with a third party. This approach potentially removes a major barrier to adoption in the commercial world. Lastly, Wave provides support for Robots (intelligent agents) that can accomplish a multitude of tasks. Google demonstrated Robots that did things like integrating with Google's Blogger service, and it seems clear this technology could be extended to support integration with existing communication mechanisms, and in particular with the big threat: e-mail.
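To make the federation idea a little more concrete, here's a minimal, purely conceptual sketch of participant-based delivery. None of the names below come from the actual Wave protocol or its reference implementation; they're assumptions invented for illustration.

```python
# Conceptual sketch only: a toy model of Wave-style federation routing.
# These names are illustrative assumptions, not the real Wave protocol.

from collections import defaultdict


def servers_for(participants):
    """Group participant addresses by the domain (server) that hosts them."""
    by_server = defaultdict(list)
    for address in participants:
        user, _, domain = address.partition("@")
        by_server[domain].append(user)
    return by_server


def deliver(update, participants):
    """Send an update only to the servers hosting an active participant."""
    for server, users in servers_for(participants).items():
        # In a real federation each server would receive the operation over
        # the wire and apply it for its own users; here we just print it.
        print(f"-> {server}: apply {update!r} for {users}")


# A wave shared between a corporate server and a public provider:
deliver("insert 'hello' at position 0",
        ["alice@corp.example", "bob@corp.example", "carol@wave.example"])
```

The point is simply that a server whose users aren't on the participant list never sees the wave at all, which is what makes the corporate self-hosting scenario plausible.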

How this all pans out remains to be seen. Google are not an academic organisation, and they must deliver value for their shareholders, but it's fair to say they have a history of taking on large-scale projects with no obvious revenue model, the sort of risks that would scare your average VC witless. Despite this, they're still here, and still profitable. I think it's reasonable to say there's an excellent chance that Wave the product will be a success. I'm much more sceptical about Wave the global infrastructure, due in part to the complexity of the technology and the consequent barriers to entry for competitors, but mainly due to something much more human: inertia.

Regardless of how the Wave platform fares, the debate it is likely to stimulate can only be a good thing. The Wave preview opens its doors on 30 September 2009 to the next 100,000 users. I have my fingers crossed.

Friday, 20 March 2009

Lies, damned lies and statistics

A client I'm working with at the moment has been doing some work around calibrating an estimating model. The model allocates each deliverable passing through the project a number of 'points' according to its perceived complexity, and then produces a weighted estimate from those points. We decided to calibrate the model by producing more detailed estimates for a random sample of the deliverables.
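In case the model isn't clear, here's a toy version of the idea. The deliverable names, point values and hours-per-point figure are all invented for illustration; they are not the client's real data.

```python
# Toy points-based estimate: invented numbers, purely illustrative.

deliverables = {
    "Customer search screen": 3,   # points (perceived complexity)
    "Audit report": 5,
    "Batch data import": 8,
}

hours_per_point = 6.5  # hypothetical figure derived from the calibration sample

for name, points in deliverables.items():
    print(f"{name}: {points} points -> {points * hours_per_point:.1f} hours")

total_hours = sum(deliverables.values()) * hours_per_point
print(f"Sample total: {total_hours:.1f} hours")
```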

A quick review of the sample figures yesterday revealed a strong correlation (around 0.85) between the number of points allocated to an item and its resulting detailed estimate. "Hurrah!", we said. Then we said: "This model is useful, and can give us a reasonable estimate of any given subset of the overall project, and therefore, in particular, of each of our planned iterations." Things were good. Then we twigged.

When we ran the calibration workshop, we asked people to estimate each deliverable. For each one we gave a brief description of its scope, and mentioned the number of points it had been allocated. Nothing wrong with that, right? Well, we decided to do a little experiment and re-ran the same exercise with the same people, but with a different set of deliverables. This time, we didn't mention the number of points allocated to each one.

The correlation was now 0.19. By most definitions, that's no meaningful correlation at all. Our model is broken.
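For anyone who wants to reproduce the check, this is roughly all we did: compute the Pearson correlation between allocated points and detailed estimates for each round. The numbers below are made-up stand-ins (the real samples aren't published here), but they show the shape of the two rounds.

```python
# Pearson correlation between points and estimates, with invented sample data.

from math import sqrt


def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length samples."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sd_x = sqrt(sum((x - mean_x) ** 2 for x in xs))
    sd_y = sqrt(sum((y - mean_y) ** 2 for y in ys))
    return cov / (sd_x * sd_y)


# Round one: estimators were told the points, so estimates track them closely.
points_told    = [1, 2, 3, 5, 8, 13]
estimates_told = [7, 12, 21, 33, 50, 85]

# Round two: points withheld, so estimates no longer track the points.
points_hidden    = [1, 2, 3, 5, 8, 13]
estimates_hidden = [30, 8, 45, 12, 20, 26]

print(round(pearson(points_told, estimates_told), 2))      # high, close to 1
print(round(pearson(points_hidden, estimates_hidden), 2))  # near zero
```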

So, what's going on there? I think (and I'm no statistician) that we're seeing human nature at work. If you tell people something is twice as hard as something else, they're inclined to estimate it'll take roughly twice as long. If you tell them it's three times as hard, the estimate will be three times as long. When we estimate, we don't know we're doing this, as our gut (rather than our head) is doing the heavy lifting here - it's hard to apply a lot of intellectual muscle to something that's ill-defined. Gut bases its decision on whatever information is easily available; in this case, someone just told us this thing is 'hard, twice as hard as the last thing', so the number we come up with will start off roughly twice as high. If we get some information that makes us believe it's simpler than that, we might try to adjust Gut's estimate downwards a little, but we'll likely never estimate it totally objectively after being told up front that it's 'hard'.

Thankfully for us, this hasn't caused a problem. We're mainly interested in the overall averages rather than the specific estimates, so we can still predict the overall length of the project reasonably accurately, even if we're a little out on the fine detail. The lesson here is clear though: be careful how much trust you put in these kinds of estimating exercise. They may not be as scientific or accurate as you first believe.