- User Control – Today, most IaaS providers closely manage companies’ cloud deployments and data, locking them into agreements that limit data access and place restrictions on their software and networking. With flexibility demands on the rise, in 2012 providers will have to accommodate companies’ desire for complete data portability, with full access to and control over all of their data.
- Resources – Unlike today’s typical bundled resources, in 2012 customers will increasingly demand the purchasing efficiencies of unbundled resources, buying CPU, RAM and storage in the exact quantities required. With such a system in place, companies can customize their resource purchasing without concerns about over-provisioning.
- Deployments – Currently, when migrating to the cloud, many companies are forced to change their operating system or software to accommodate the provider’s restrictions. In 2012, many customers will look for providers that lift restrictions on operating systems and application deployments, as this gives enterprises not only the flexibility but also the confidence to move away from their proprietary infrastructure and into the cloud.
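As a rough sketch of the unbundled purchasing model described above, the cost of an exact resource mix can be computed directly from per-unit prices. The unit prices below are invented purely for illustration, not actual CloudSigma rates:

```python
# Hypothetical unbundled unit prices (illustrative only, not real rates).
UNIT_PRICES = {
    "cpu_ghz": 0.01,      # price per GHz-hour
    "ram_gb": 0.02,       # price per GB-hour
    "storage_gb": 0.0001, # price per GB-hour
}

def hourly_cost(cpu_ghz, ram_gb, storage_gb):
    """Cost of buying exactly the resources needed, with no bundling."""
    return (cpu_ghz * UNIT_PRICES["cpu_ghz"]
            + ram_gb * UNIT_PRICES["ram_gb"]
            + storage_gb * UNIT_PRICES["storage_gb"])

# A CPU-heavy workload buys lots of CPU but little RAM and storage,
# instead of over-provisioning through a fixed bundle.
print(round(hourly_cost(cpu_ghz=8.0, ram_gb=2.0, storage_gb=50.0), 4))
```

With bundled instances, the same workload would pay for RAM and storage it never uses; here every unit purchased maps to a unit consumed.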
The cloud was initially conceived to be a more flexible, scalable and accessible IT environment. In order to stay aligned with that model in 2012 and beyond, restrictive cloud providers will need to strip away their limitations on resources, deployments and user control. Achieving a completely customizable and flexible IaaS platform will be a necessity in order to remain a viable public cloud option and meet customer demand.
American poker legend Doyle Brunson once said, “A man with money is no match against a man on a mission.” Logically, then, one with money and a mission should be unstoppable! With today’s funding announcement, at CloudSigma, we are effectively fueling our mission to provide the most flexible, user-centric public cloud in today’s infrastructure-as-a-service (IaaS) market.
The investment comes on the heels of our successful U.S. launch and from none other than our own Chairman Anthony Foy, and Director Phil Collerton, both of whom have strong backgrounds in data centers, enterprise software, cloud and managed services, and are well-positioned to help strategically shape and foster CloudSigma’s growth – growth that is already in the works! Indeed, plans are already underway for this influx of cash, some of which include:
This week, we’re at the 2011 Cloud Computing Expo in Santa Clara, California, experiencing a touch of nostalgia mixed with anticipation. For it seems like only yesterday that the idea of cloud computing was just becoming a reality, yet, here we stand, already experiencing the benefits of much more tangible cloud platforms and on the brink of some substantial advancements.
One such advancement, we’re pleased to announce today from the show floor, is the incorporation of a new solid-state drive (SSD) storage solution into our public cloud infrastructure that helps eliminate one of the largest challenges and deterrents companies still face with the cloud – storage bottlenecks.
Greek philosopher Heraclitus is credited with saying, “The only constant is change.” But, what if you don’t want to change? What then?
Many public cloud providers may follow Heraclitus’ philosophy as they impose restrictions on software and operating system deployments in their cloud infrastructure, forcing companies to change, but this is not CloudSigma’s strategy. Why should enterprises have to change their preferred infrastructure when they move to a public cloud? To make the transition as seamless and effective as possible, shouldn’t the cloud infrastructure mimic the physical, hardware-based data center to the best of its ability?
Jan Hedström of Techworld (www.techworld.se) recently interviewed CloudSigma CEO Patrick Baillie about our company's technology strategy and choices. A full transcript of the interview is included below.
You use KVM virtualisation, while some other providers have chosen Xen. What factors were important when you made the decision to use KVM?
KVM forms part of the mainstream Linux kernel and for that reason has the full weight of the Linux developer community behind it. For virtualisation, stability and security are two critical elements, elements which open source software has time and again proven to be the best at delivering over the long term. Although a relatively late entrant into the hypervisor space, KVM has been gaining in momentum over time to the point where it is now overtaking other hypervisors like Xen in terms of performance and core functionality. It is no coincidence that companies like IBM, HP and Intel are fellow members of the Open Virtualisation Alliance whose aim is to promote uptake of KVM. Since making the choice of KVM early on we believe we've seen our choice validated. We fully expect KVM to continue to iterate faster than other alternatives and create a widening technological lead.
This week a location for the EC2 product of Amazon Web Services suffered a major extended outage. Predictably there has been a lot of hand-wringing and proclamations that the cloud is unreliable. In fact, this event should focus everyone's minds on which problems a move to the cloud solves and which it doesn't. The cloud does solve many problems, but not all.
There are some clear lessons to be learned from this latest outage, not just relating to the cloud but relating to how to build resilient infrastructure set-ups that can keep delivering when things go wrong (because they eventually will). In this post I'll examine what this outage tells us about the cloud and data centre based computing in general and how customers might best respond and adapt.
A couple of weeks ago CloudSigma launched its Cloud Affiliate Program. Now you can share in CloudSigma's success and create a significant source of revenue for yourself by promoting CloudSigma through the affiliate channel. You can earn up to a 20% lifetime commission on every sale referred through your affiliate link. Alternatively, you can earn up to 27.5% per sale and up to €2 per lead as one-time payments if you choose to promote the Standard Provision Offer.
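Which option pays more depends on how long a referred customer stays and how much they spend. Using the headline figures above (20% lifetime versus 27.5% one-time plus 2€ per lead), a quick sketch finds the break-even point; the €50/month spend is an assumed example value, not a real customer profile:

```python
def lifetime_payout(monthly_spend_eur, months, rate=0.20):
    """Cumulative lifetime commission on a referred customer's spend."""
    return monthly_spend_eur * months * rate

def one_time_payout(first_sale_eur, rate=0.275, lead_bonus_eur=2.0):
    """One-off payment: a share of the first sale plus a per-lead bonus."""
    return first_sale_eur * rate + lead_bonus_eur

# Assumed example: a referred customer spending €50 per month.
spend = 50.0
one_time = one_time_payout(spend)  # 50 * 0.275 + 2 = 15.75
for m in range(1, 13):
    if lifetime_payout(spend, m) > one_time:
        print(f"Lifetime commission overtakes the one-time payout after {m} months")
        break
```

In this example the lifetime option wins for any customer who stays beyond their second month, which is why lifetime commissions suit affiliates referring long-lived accounts.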
This is part 1 of a How To Promote the CloudSigma Affiliate Program blog series. In this first part we mainly cover the basics and some easy methods to get started. If you want to read more general information on Affiliate Marketing I recommend this Wikipedia article.
In part 1 we saw how the changing strategy of many public IaaS clouds has been to increasingly offer PaaS style services on top of their core product offering. It is easy to see why public IaaS cloud vendors might wish to do this in order to increase their revenues but is it in customers' real interests?
One impact of PaaS is to lock the customer base into that particular cloud, which is something customers might want to think about carefully. There are some even more worrying problems associated with the move to PaaS, as the IaaS provider increasingly widens its role and comes into conflict with users of its own cloud. As I outline below, an IaaS cloud vendor offering PaaS has a significant long-term impact on choice for customers and on the role of a cloud vendor as an independent provider of computing resources.
Over the last few months it has become clear that many of the largest Infrastructure-as-a-Service (IaaS) public clouds are increasingly morphing into something closer to Platform-as-a-Service (PaaS) operators. In other words, these clouds are moving from providing pure computing resources to providing services running on top of those computing resources. That may seem like a pretty innocuous change but, as I'll outline in this and the next part of this post, it has a profound effect on the role of the IaaS cloud vendor and their interests vis-à-vis their customers. Moving from infrastructure to offering software-based services is a big deal that customers should be aware of when choosing an IaaS vendor.
Understandably, many customers using cloud services, particularly at the infrastructure level, are also interested in other tools that may add convenience. These are usually service driven: things such as email services, content distribution networks, DNS services and so on. So the debate here really is who is best positioned to provide those services, and how should they be offered and delivered to end customers? It’s my opinion that an IaaS public cloud vendor is not the best entity to be trying to deliver and manage all those different services, particularly as they often require quite different skill sets. Likewise, one-size-fits-all services are never going to be optimal. So are many public IaaS vendors going in the wrong direction, chasing ever-expanding service offerings?
Over the previous three security posts I've aimed to show how to secure your Infrastructure-as-a-Service (IaaS) cloud computing and how we as a vendor approach the various aspects to deliver our part of that solution. In this final part I outline how we feel IaaS should be approached, and how that has profound implications for the concerns currently dominating the cloud computing debate.
Personally and as a company, we very much believe that the current concerns with public clouds are more vendor-created than fundamental problems with the concept itself. The issues of control and security stem from the fact that the large incumbent vendors are mixing the infrastructure and software/networking layers. With these platforms, a customer is forced to accept:
One of the greatest challenges in running an Infrastructure-as-a-Service cloud is how to deliver performance at a cost effective rate. The key metric behind this is the utilisation rate of hardware used in the cloud; too high and performance suffers, too low and prices inevitably rise. How can a cloud provider deliver performance and value for money in this case?
An IaaS vendor such as ourselves is a utility company. In the short term we have a fixed capacity so it is a classic yield management issue. Over-commitment is an old model deployed in a traditional shared hosting setting to keep utilisation high; that doesn't translate well to the cloud.
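The yield-management trade-off can be made concrete with a toy model: the effective cost of each resource unit sold is the fixed hardware cost spread over the fraction of capacity actually utilised. The cost and capacity figures below are assumed placeholders, not real operating numbers:

```python
def effective_unit_cost(hardware_cost_per_hour, capacity_units, utilisation):
    """Cost to the provider of each resource unit actually sold.

    Low utilisation spreads the same fixed cost over fewer sold units,
    forcing prices up; pushing utilisation too high instead risks
    contention and degraded performance for customers.
    """
    if not 0 < utilisation <= 1:
        raise ValueError("utilisation must be in (0, 1]")
    units_sold = capacity_units * utilisation
    return hardware_cost_per_hour / units_sold

# Assumed placeholder: a host costing €1/hour with 100 sellable units.
for u in (0.25, 0.50, 0.75):
    print(f"utilisation {u:.0%}: €{effective_unit_cost(1.0, 100, u):.4f} per unit")
```

Doubling utilisation halves the cost per sold unit, which is exactly why over-commitment was attractive in shared hosting; the cloud difference is that customers can actually consume what they buy, so the same trick degrades performance instead.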
So far we've looked at securing the network and securing access to cloud infrastructure. In part 3 we look at how to ensure your data storage is robust, secure and kept private in IaaS clouds. Keeping data private and secure is a key concern of many looking to move to a public cloud. It's important to separate real dangers, and how to avoid them, from natural psychological reactions to moving data away from in-house provision.
We see data storage in the cloud breaking down into three distinct areas; keeping data private/secure, vendor transparency and data portability.
Delivering robust yet high-performing storage in the cloud has been one of the greatest hardware and software challenges in the explosion of cloud computing. Poor storage performance from many leading Infrastructure-as-a-Service (IaaS) clouds is one of the most cited complaints by users. In this post I will outline the currently dominant approach to storage, our own approach and what the future holds for cloud storage. The great news is that a revolution in how data is stored and accessed is right around the corner!
As I outlined in my recent post on how to benchmark cloud servers, along with networking performance, storage performance is one of the key differentiating factors between different IaaS clouds. Storage performance varies widely across different clouds and even within the same cloud over time. While managing CPU, RAM and networking securely and reliably has been largely solved, delivery of secure reliable storage clearly hasn't.
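As a minimal way of observing the kind of variation described above, a sequential-write micro-benchmark can be run on any cloud server. This is only an illustrative probe, not a substitute for proper benchmarking tools; real comparisons need repeated runs at different times:

```python
import os
import tempfile
import time

def sequential_write_mb_per_s(total_mb=64, block_kb=1024):
    """Time a simple sequential write and return throughput in MB/s.

    A rough probe only: repeat it over hours and days to see how
    shared storage performance varies within a single cloud.
    """
    block = os.urandom(block_kb * 1024)
    blocks = (total_mb * 1024) // block_kb
    with tempfile.NamedTemporaryFile(delete=False) as f:
        start = time.perf_counter()
        for _ in range(blocks):
            f.write(block)
        f.flush()
        os.fsync(f.fileno())  # include the flush to physical storage in the timing
        elapsed = time.perf_counter() - start
    os.unlink(f.name)
    return total_mb / elapsed

print(f"sequential write: {sequential_write_mb_per_s():.1f} MB/s")
```

The `os.fsync` call matters: without it you are mostly timing the page cache rather than the underlying storage, which is a common way benchmarks flatter a cloud's real disk performance.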
Moving from traditional servers to cloud servers and infrastructure provides a golden opportunity to re-think computing architecture in order to take advantage of the flexibility and responsiveness that the cloud has to offer.
Though not often discussed, server monitoring has only grown in importance with the move to the cloud. In this blog post I outline how server monitoring can form an integral part of your cloud infrastructure and, when properly implemented, how it can open up new avenues to significant cost savings whilst protecting performance.
In the second part of my series of blog posts on security I will cover the issue of securing access to your cloud server. Unlike with dedicated hardware, cloud servers offer you the ability to remotely conduct infrastructure management and other actions. These very powerful tools make running your cloud infrastructure significantly more convenient than dedicated hardware. It is, however, a double-edged sword. Powerful management tools that have extensive access to your cloud infrastructure are also potent targets for those that would seek to gain unauthorised access to your cloud servers.
Infrastructure-as-a-Service (IaaS) offerings enable more points of access to your cloud infrastructure than dedicated hardware. There are four potential ways to access your cloud infrastructure: two are shared with dedicated hardware, and two are specific to cloud infrastructure. Those methods are:
Security is arguably one of the most discussed issues surrounding the use of cloud computing in general, and it isn't a subject I can see losing its importance or interest any time soon. As a company we often get asked about our opinions on security, so I thought it would be worth outlining some thoughts and experiences. It's a huge subject, so I've broken it into a few parts; this first instalment will look solely at networking.
When considering network security we are actually looking at a few different areas. First and foremost we have the need to secure any device connected to the open internet from unauthorised access; that's a common challenge and is faced equally by ourselves and our customers. Secondly we have the job of securing your data in transit as it moves across our network and finally we have the important task of ensuring reliable high quality networking to the cloud. These are distinct challenges with differing solutions so we'll look at each in turn.
Many new customers want to test performance when they start using CloudSigma; they are often looking to compare results between clouds and their own infrastructure, and that makes sense. A straight price comparison by resource doesn't tell anything like the whole story; what really matters is the end result: how much does it cost to achieve a specific computing task? For any given requirement, the amount of resources needed may vary widely between clouds, so comparing prices alone doesn't work. The flip-side is that comparing performance in isolation isn't any better. Meaningful comparisons need to pull together both price and performance to calculate some measure of cost per computing unit. In this post I'm going to share some thoughts from benchmarking our cloud and others, and provide some tips for getting useful results and understanding what they really mean.
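Pulling price and performance together, as described above, reduces to a simple ratio: price per hour divided by a benchmark score gives a cost per unit of work. The clouds, prices and scores below are invented purely for illustration:

```python
# Invented example figures: an hourly price and a benchmark score
# (e.g. operations/sec from your own application-level test).
clouds = {
    "cloud_a": {"price_per_hour": 0.10, "benchmark_score": 250.0},
    "cloud_b": {"price_per_hour": 0.08, "benchmark_score": 150.0},
}

def cost_per_unit(price_per_hour, benchmark_score):
    """Cost per unit of useful work done: lower is better value."""
    return price_per_hour / benchmark_score

# Rank clouds by value for money rather than by sticker price.
for name, c in sorted(clouds.items(), key=lambda kv: cost_per_unit(**kv[1])):
    print(f"{name}: {cost_per_unit(**c):.6f} per benchmark unit")
```

Note that in this made-up example the more expensive cloud wins on value: it costs 25% more per hour but does two-thirds more work, which is exactly why a straight price comparison misleads.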
To explain upfront, I'm quite sceptical about benchmarking in general because it rarely offers a true insight into real world usage. In short there is no real replacement for running the actual applications you intend to use on the platform and at least simulating the load and usage patterns you would expect to deal with. If you can achieve this at a reasonable cost in terms of time then there is no replacement for such an exercise.
Ever since I joined CloudSigma there has been lively discussion of creating standards for cloud computing and within the Infrastructure-as-a-Service (IaaS) sector. There does seem to be precious little discussion regarding what a standard is actually meant to achieve, whether a standard is really desirable (read more below) or whether it is the best way to achieve those ends. I'm going to look at each of these three in turn and present a counter-argument.
So first off, why are people looking to create standards in cloud computing and more specifically IaaS? Individual opinions vary but it seems clear that the main motivating factor is to make it easier to switch between cloud vendors and to simplify in general the IaaS space. On the face of it these both seem to be very laudable aims, reducing friction between clouds will increase competition, bring down prices and accelerate cloud adoption. How are these standards looking to be imposed and who is setting the agenda?
I call CloudSigma a pure Infrastructure-as-a-Service (IaaS) company but what exactly do I mean by that? Well, I feel our platform is much closer to the true understanding of what IaaS should actually offer its customers in relation to control, flexibility and the direct relationship between usage and cost.
I'm aiming to set out in this post how we feel we are, in many ways, fundamentally different in the way we approach IaaS and how this can benefit the end-users of our platform.