If this blog were running on Amazon’s EC2 service right now, you wouldn’t be reading this. Amazon is hopefully at the tail end of a now eight-hour outage in its N. Virginia availability zone that began at about 4 AM EST, or at least that’s when my error alerts started arriving.

After testing AWS for about four months last fall, I began migrating a few dozen web sites and applications off an aging infrastructure, both to reduce costs and to take advantage of the performance and flexibility of Amazon’s Web Services platform. The downside of that strategy now appears to be Amazon itself, and what I suspect is a failure on their part in both engineering and capacity planning. In the past six weeks there have been two separate EBS-related issues resulting in multi-hour outages, and today’s has crossed the line into an unacceptable response.

Having had similar problems with two other popular virtualized-infrastructure hosting providers, a.k.a. cloud hosting, I have to ask: is the real complexity and risk behind the cloud computing model bubbling up such that even a company as large and well staffed as Amazon can’t successfully pull it off (yet)? That’s two strikes for Amazon this quarter, and everyone knows what happens on a third. I wonder what the abandonment rate will be as a result of today’s fiasco; I know I’m looking at other options for my customers.