
Cloud failures highlight the need for open standards and better planning

The Inquirer - The recent spate of cloud service outages has highlighted the need for open clouds and shown that simply relying on sheer numbers of servers does not necessarily provide resilience.

Microsoft's Azure cloud service suffered a three-hour outage earlier this week, while the G-Cloud, the UK government's stuttering cloud initiative, had its own hiccup, and micro-blogging web site Twitter also went dark. Yet cloud providers still promote their offerings as a reliable and cost-effective way to outsource services, when in truth migrating services to the cloud requires a complete redesign of a firm's infrastructure if it is to be reliable.

Firms looking to move to the cloud are usually bombarded with buzzwords like elastic on-demand capacity and economies of scale, all of which make the pinstriped decision makers see pound signs instead of warning signs. Relying on a single cloud service provider is a fool's paradise, and porting to the cloud an existing infrastructure built on the assumption that servers are highly available and well provisioned is simply foolish.

The fact is that Amazon, Google, Microsoft, Rackspace and just about every other cloud service provider out there are trying to maximise the use of their resources. This simple business practice should ring alarm bells: users should not treat cloud instances as like-for-like equivalents of physical server deployments.
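
As a rough illustration of that difference, the short Python sketch below (with a hypothetical endpoint URL) wraps a request in a timeout and exponential backoff, on the assumption that any individual cloud instance can stall or vanish at any moment rather than behaving like a dedicated, well-provisioned physical server.

    import time
    import urllib.request

    # Hypothetical endpoint, purely for illustration.
    SERVICE_URL = "https://example-cloud-service.invalid/api/status"

    def fetch_with_backoff(url, attempts=4, timeout=5, base_delay=1.0):
        """Fetch a URL, assuming the instance behind it may fail or stall.

        Transient errors are retried with exponential backoff instead of
        treating the cloud instance as an always-available physical server.
        """
        for attempt in range(attempts):
            try:
                with urllib.request.urlopen(url, timeout=timeout) as response:
                    return response.read()
            except OSError as exc:  # covers URLError and socket timeouts
                if attempt == attempts - 1:
                    raise  # out of retries, surface the failure to the caller
                delay = base_delay * (2 ** attempt)
                print(f"Attempt {attempt + 1} failed ({exc}); retrying in {delay:.0f}s")
                time.sleep(delay)

    if __name__ == "__main__":
        try:
            print(fetch_with_backoff(SERVICE_URL))
        except Exception as exc:
            print(f"Service unavailable after retries: {exc}")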

So the simple answer would be to have a backup strategy, multiple deployments and partitioning of services. The theory is of course absolutely right, but there are two significant problems: ensuring portability and managing seamless failover.
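
A minimal sketch of the failover half of that idea, again in Python with hypothetical provider URLs, is to keep the same service deployed with more than one provider and route traffic to the first deployment that passes a health check, dropping to the next when one goes dark. In practice seamless failover also means replicating data and state between providers, which is exactly where the portability problem bites.

    import urllib.request

    # Hypothetical deployments of the same service with different providers.
    DEPLOYMENTS = [
        "https://service.provider-a.invalid/health",
        "https://service.provider-b.invalid/health",
        "https://service.provider-c.invalid/health",
    ]

    def healthy(url, timeout=3):
        """Return True if the deployment answers its health check in time."""
        try:
            with urllib.request.urlopen(url, timeout=timeout) as response:
                return response.status == 200
        except OSError:  # covers URLError and socket timeouts
            return False

    def pick_deployment(deployments):
        """Return the first healthy deployment, or None if all are down."""
        for url in deployments:
            if healthy(url):
                return url
        return None

    if __name__ == "__main__":
        active = pick_deployment(DEPLOYMENTS)
        if active:
            print(f"Routing traffic to {active}")
        else:
            print("No healthy deployment available")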