At trade shows, one of the most common questions people get asked is where they live. Well, for CloudSigma this month, the best answer may be “on an airplane!” April was jam-packed with events as our executive team traveled all over the U.S., speaking at some of the most influential conferences and networking with the biggest thought leaders in the industry.
Big science in Europe has been busy, and here at CloudSigma we've been heavily involved in helping to build a federated cloud platform that can address the needs of big science. It's something I'm personally very excited about, as it holds the potential to unleash large-scale computing for some of the world's thorniest problems, including genetic research, climate change, disaster relief, high energy physics and more. Solving the world's problems increasingly relies on computing power to test assumptions, prove solutions and discover the behaviour of complex systems. Let's get computing, in a highly flexible way, into the hands of great scientists!
Last month, our CEO Robert Jenkins joined other industry thought leaders, including Paul Miller, Jo Maitland and Dana Gardner, in the GigaOM webinar "How to Make Big Data Work in the Cloud." As Big Data and the cloud go hand-in-hand and are often discussed together, the four panelists had a lot to cover, including hybrid clouds, cloud security and the importance of blending customers' computing requirements.
With 2.5 quintillion bytes of data created every day, it’s not surprising companies are worried about the advanced computer infrastructure required to handle their data-intensive workloads. Such infrastructure is not only expensive, but also difficult to maintain. Luckily, the answer to this concern may lie in today’s most innovative technology environment: cloud computing.
By taking advantage of powerful cloud computing platforms, companies can access the computing resources they need at a competitive cost and without the need to constantly procure and scale cumbersome in-house IT infrastructures. To handle the unpredictable nature of research science and the high volumes of data produced on a daily basis, it’s essential to have a cloud platform that places no restrictions on server sizes, software or networking, and offers a fully scalable and customizable infrastructure.
What do Facebook, Barack Obama and software-defined networks (SDNs) have in common? Normally, the answer would be 'not much,' but in 2012, the three had one big commonality: they were everywhere. Facebook acquired Instagram in April, followed by its decision to go public in May; Obama was re-elected as president of the United States; and VMware acquired SDN leader Nicira, cementing the technology's importance in the industry. While it would be difficult to top the media buzz that surrounded each of these news stories, CloudSigma also had one of its strongest years yet. In addition to numerous customer wins and innovations, our CTO, Robert Jenkins, traveled all over the world speaking at some of the most influential conferences in the technology industry.
Recently, our COO, Bernino Lind, sat down with 'This Week in Startups' host Jason Calacanis. CloudSigma and the broadcast show know each other well, as we are proud to count the show as a customer, providing the scalability needed to handle the hundreds of thousands of people who download episodes every month. Throughout the 13-minute episode, the pair talked about everything from their love of smørrebrød to the likelihood of scenes from The Terminator manifesting in Switch's SuperNAP data center!
Jason and Bernino also spent much of the time discussing the pain points that CloudSigma's offering addresses that competitors' services don't. To explain CloudSigma's commitment to offering unbundled resources and flexibility, Bernino offered up an example: companies that need a lot of RAM but a significantly lower amount of CPU have no choice but to spin up "large, super-duper oversized instances" in Amazon's cloud. This is inherently cost-prohibitive, and we avoid it by allowing customers to purchase resources as they need them, on a pay-as-you-go basis. As Bernino put it: "We want to give you the infrastructure that makes sense for you."
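To put rough numbers on that trade-off, here is a back-of-the-envelope sketch in Python. The bundle sizes and hourly prices below are illustrative placeholders, not CloudSigma's or Amazon's actual rates; the point is simply how a RAM-heavy, CPU-light workload fares under each pricing model.

```python
# Back-of-the-envelope comparison of bundled vs. unbundled pricing.
# All sizes and prices are illustrative placeholders, not real
# CloudSigma or Amazon rates.

# Workload: RAM-heavy, CPU-light.
need_ram_gb = 32
need_cpu_cores = 2

# Bundled model: fixed instance sizes force you onto the smallest
# bundle that covers your largest requirement.
bundles = [
    {"name": "large",   "ram_gb": 8,  "cores": 4,  "usd_hr": 0.40},
    {"name": "xlarge",  "ram_gb": 16, "cores": 8,  "usd_hr": 0.80},
    {"name": "2xlarge", "ram_gb": 32, "cores": 16, "usd_hr": 1.60},
]
chosen = next(b for b in bundles
              if b["ram_gb"] >= need_ram_gb and b["cores"] >= need_cpu_cores)

# Unbundled model: pay for each resource independently.
usd_per_gb_ram_hr = 0.02   # placeholder unit prices
usd_per_core_hr = 0.05
unbundled_cost = need_ram_gb * usd_per_gb_ram_hr + need_cpu_cores * usd_per_core_hr

print(f"bundled:   {chosen['name']} at ${chosen['usd_hr']:.2f}/hr "
      f"({chosen['cores'] - need_cpu_cores} cores sit idle)")
print(f"unbundled: ${unbundled_cost:.2f}/hr for exactly what the workload needs")
```

Under these assumed prices, the fixed bundle costs more than twice as much per hour, with most of the difference paying for cores the workload never touches.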
Running data centers is seen as a task additional to the core work of research organizations like CERN, EMBL and ESA. With the amount of data stored in publicly owned data centers reaching 23 petabytes a year, the inception of a science cloud was fueled by the scientists themselves.
According to comScore, U.S. online video consumption grew at an astonishing rate of 660 percent from February 2011 to February 2012. With video emerging as one of the most impactful sectors of the media industry, vendors are scrambling to get a piece of the market, and video providers must offer the fastest, most cost-effective solutions in order to stay competitive. Because of this, many providers are moving their workloads to the cloud to leverage the scalability, flexibility and cost savings it provides.
That’s why iStreamPlanet, the leading provider of live and on-demand streaming-video solutions, is moving its portfolio of media workload products and services to the cloud, using our innovative and customizable IaaS platform. Other proprietary technologies are not only too expensive, cumbersome and complex for the quality video experience that customers expect, but they simply cannot scale to meet the growing demand for more content and longer viewing hours that accompany the booming video industry.
In a recent LinkedIn posting on the IaaS Infrastructure as a Service forum, a conversation developed around cloud providers using their ability to overcommit resources to drive down prices. Essentially, this means that while a provider may have the resources (CPU, RAM, storage, etc.) to support a specific volume of computing, it knows that the vast majority of cloud servers won't utilize their resources at full capacity. This leaves spare capacity with which to sell additional cloud servers, without having to increase the actual amount of physical resources.
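A quick sketch of the arithmetic behind overcommitting, with illustrative figures rather than measured numbers from any specific provider:

```python
# Simple overcommit arithmetic: how a provider can sell more RAM
# than it physically owns. All figures here are illustrative.

physical_ram_gb = 512      # RAM actually installed in a host
avg_utilisation = 0.40     # typical fraction of allocated RAM in active use
overcommit_ratio = 2.0     # provider sells 2 GB for every physical 1 GB

sold_ram_gb = physical_ram_gb * overcommit_ratio
expected_demand_gb = sold_ram_gb * avg_utilisation

print(f"sold:            {sold_ram_gb:.0f} GB")
print(f"expected demand: {expected_demand_gb:.0f} GB of {physical_ram_gb} GB physical")
# On average the host is fine (410 GB < 512 GB), but if tenants spike
# simultaneously, demand can exceed physical capacity and every server
# on the host ends up contending for the same resources.
```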
While this business model (carried over from traditional web hosting) is a valid concern on many fronts, it is only one part of the problem. Many cloud providers, especially those rooted in the hosting industry, also bundle their resources together into fixed server instances, creating an inherent issue of over-purchasing. The nature of computing is such that the customer needs enough of each resource: CPU, RAM, storage and so on. So the customer must pick a bundle that covers the minimum necessary amount of each resource. With bundled resources and fixed server sizes, chances are that means getting far too much of one or more resources in the process. We call that baked-in over-purchasing.
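A small sketch makes the baked-in over-purchase concrete. The bundle dimensions and workloads below are hypothetical; each workload is bound by a different resource, yet all three are forced to buy the same fixed instance.

```python
# Quantifying "baked-in over-purchasing": when a fixed bundle scales
# every resource together, the customer buys the bundle that covers
# their binding constraint and pays for the rest regardless.
# Bundle and workload figures are hypothetical.

bundle = {"ram_gb": 16, "cores": 8, "ssd_gb": 320}  # fixed instance size

workloads = [
    {"ram_gb": 14, "cores": 2, "ssd_gb": 50},    # RAM-bound
    {"ram_gb": 4,  "cores": 7, "ssd_gb": 20},    # CPU-bound
    {"ram_gb": 2,  "cores": 1, "ssd_gb": 300},   # storage-bound
]

for w in workloads:
    waste = {k: bundle[k] - w[k] for k in bundle}
    print(f"needs {w} -> pays for but never uses {waste}")
```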
Last month, our CTO, Robert Jenkins, spoke at Cordis' Research in Future Cloud Computing event in Brussels, Belgium. During the event, Robert presented to a diverse audience on three key areas of research that could have a major impact on the future of the cloud: operating systems, ecosystems and scalable storage. Be sure to let us know what you think in the comments section, and click here to see Robert's full presentation from the event.
Hardware virtualization is very well established as a technology, with a number of hypervisor options, as is cloud management by and large. Customers, however, install standard operating systems that aren't really aware of their virtual environment, which leads to many restrictions and, in many cases, sub-optimal performance. For example, operating systems often can't recognize 'hot' changes to resources, meaning that while the hypervisor can allocate additional RAM or CPU to a virtual machine, the operating system can't see or use it. The result is that vertical scaling in the cloud (i.e. making individual virtual machines bigger or smaller) requires power cycling the virtual machine, which is disruptive and often not practical. Instead, people largely engage in horizontal scaling by clustering their computing over many virtual machines and monitoring load. This is quite wasteful, both through duplicated operating system resources across those machines and through the communication/management overhead of clusters. This is one key area that needs addressing in order to make computing in the cloud more efficient and relevant.
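For what it's worth, even where a guest kernel does support hotplug, resources the hypervisor adds often still arrive offline and have to be onlined by hand. The sketch below uses the standard Linux sysfs interfaces to do that; it assumes root privileges, a hotplug-capable kernel and a hypervisor that has already attached the new resources, and exact behaviour varies across distributions.

```python
# Minimal sketch: online hot-added CPUs and memory from inside a
# Linux guest via the kernel's standard sysfs interfaces. Requires
# root and a kernel built with CPU/memory hotplug support.
import glob

def online_hotplugged_cpus():
    # cpu0 typically exposes no 'online' file; the others hold 0 or 1.
    for path in glob.glob("/sys/devices/system/cpu/cpu[0-9]*/online"):
        with open(path) as f:
            if f.read().strip() == "0":
                with open(path, "w") as f:
                    f.write("1")
                print(f"onlined {path}")

def online_hotplugged_memory():
    # Hot-added memory blocks appear under /sys/devices/system/memory
    # in the 'offline' state until explicitly onlined.
    for path in glob.glob("/sys/devices/system/memory/memory*/state"):
        with open(path) as f:
            if f.read().strip() == "offline":
                with open(path, "w") as f:
                    f.write("online")
                print(f"onlined {path}")

if __name__ == "__main__":
    online_hotplugged_cpus()
    online_hotplugged_memory()
```

That this manual step is needed at all illustrates the point: the operating system and the cloud layer still don't cooperate seamlessly on resource changes.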
Technology startups have a lot of options in terms of IT environments. Do they invest in a proprietary, physical environment, which may well become outdated within a year? Do they implement a hybrid approach, with some virtual servers hosted on top of an otherwise physical infrastructure? Or do they move to the cloud and start off hosting their environment with a third-party public cloud IaaS provider?
Choosing an environment can be tricky, especially as startups often have very specific requirements and experience rapid growth. They need an infrastructure environment that can easily grow with them, but one that's also flexible enough to meet their constantly changing demands and doesn't impose commitments or large upfront costs. Startups can't afford to close any doors when it comes to their long-term strategic future, and starting out on a proprietary cloud with significant vendor lock-in might not be a good long-term choice. Thanks to our uniquely flexible and customizable public cloud design, more and more startups are choosing CloudSigma.
GigaOM published an interesting article on Azure embracing SSD storage. It is great to see movement and product development happening in the IaaS space, particularly when it comes to addressing one of the key pain points for many customers in the cloud, namely cloud storage. We have long advocated the need for innovation in cloud storage, which is why we were the first major public cloud to launch SSDs, back in Q4 2011.
SSD storage is critical to moving core systems into the cloud; however, it needs to be delivered as part of a tiered storage approach. That means having a flexible implementation of IaaS itself, one that enables customers to easily tailor resources and storage types on a server-by-server basis, just as they would with dedicated hardware.
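As an illustration only, a per-server spec under that kind of flexible model might look something like the following. The structure and field names here are invented for the example; they are not CloudSigma's actual API.

```python
# Hypothetical sketch of per-server storage tiering in a provisioning
# request. All field names are invented for illustration and do not
# correspond to CloudSigma's real API.
import json

server_spec = {
    "name": "db-primary",
    "cpu_mhz": 8000,
    "ram_gb": 16,
    "drives": [
        {"tier": "ssd",      "size_gb": 100,  "mount": "data"},    # hot: indexes, logs
        {"tier": "magnetic", "size_gb": 1000, "mount": "backup"},  # cold: dumps, archives
    ],
}

print(json.dumps(server_spec, indent=2))
# Each server mixes tiers independently, mirroring how you would pair
# SSDs with spinning disks in a dedicated hardware build.
```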
Many people, when moving to the cloud, look at it primarily as a means of saving money. Although I concede this impression is growing less prevalent, it still persists. Furthermore, most people taking this cost approach look at the savings purely in terms of physical infrastructure rather than labour, which represents the majority of real infrastructure costs through management, procurement, monitoring and so on.
In fact, the key value most customers find from embracing cloud, once the dust settles, is in how they deliver services and run their businesses. Turning fixed assets into flexible cloud resources has a profound effect that ripples through the whole company. Although moving infrastructure in a 'business as usual' way to the cloud still derives many benefits, it’s the changes that occur afterwards that prove to be the most valuable. These changes happen as people in that organisation start to engage with the cloud and understand how to improve their working practices as a result of the newfound flexibility inherent in their cloud deployment.
This week, we're at Interop Las Vegas, and the show floor is buzzing with new announcements from exhibiting companies – CloudSigma included! Today, we announced the launch of SigSTORE, the industry's first SLA-backed distributed SSD storage solution, for our public cloud IaaS.
The performance, cost and reliability of cloud storage solutions have traditionally been major barriers to companies' cloud adoption. But with SigSTORE, we are effectively removing these concerns by guaranteeing the performance and reliability of the storage behind companies' critical systems and applications. With those concerns gone, companies can readily move more core systems onto cloud infrastructure, with performance protected under SLA, and realize everything the cloud has promised, including greater flexibility, scalability and cost savings.
The scene here at the 2012 NAB Show in Las Vegas is stirring up a lot of excitement. All around the exhibition hall, one can catch glimpses of how rapidly new technologies are reshaping the media and entertainment industries. Hundreds of vendors are showcasing and unveiling innovative new and disruptive offerings that are helping to usher in the next generation of media production and distribution. The show looks to be a great view of the intersection between media and cutting-edge technology.
Here at booth N3222H, we are making our own major announcement: the official launch of the CloudSigma Media Services Ecosystem. That may sound like a mouthful, but the idea is actually quite simple and elegant. Our Media Services Ecosystem is a public cloud environment in which service providers and production companies working in the film, music and other media industries can easily collaborate and share data, with access to CloudSigma’s powerful compute and storage capabilities. We’re basically giving media companies one roof under which to work together more efficiently, regardless of their geographic location, eliminating the long waits and high costs that plague most productions.
The overused cliché "one size fits all" hardly applies to anything these days, including cloud platforms. Some companies utilize the cloud to sustain the high storage requirements needed to process huge amounts of research data efficiently and accurately. Others need the flexibility and high-performance computing the cloud offers to work with some of the largest high-res, long-form content files in the media industry.
In today's changing IT landscape, having the option to hop from one provider to another to fulfill a company's varying IT needs is essential. It seems odd, then, that some cloud providers still stand by the mantra of "one size fits all" and impose vendor lock-in constraints, making it challenging for companies to take advantage of the full breadth of innovation in the cloud space.
These monumental scientific undertakings have very different goals, but one important thing in common: the huge amounts of data that must be processed efficiently in order to yield accurate results. Unfortunately, the advanced computing infrastructure required to handle this Big Data is expensive, and requirements are growing rapidly. International research organizations such as CERN, the European Molecular Biology Laboratory (EMBL) and the European Space Agency (ESA) need constant expansion of their infrastructure to keep delivering the processing capacity they rely on. Without access to the right resources, researchers within these organizations are potentially held back.
A lot goes into making a blockbuster hit that tops the box office charts. Movies like Titanic, Star Wars, Harry Potter and Lord of the Rings don't just come together overnight. Steep requirements are imposed on everything from production, staff and equipment to location, music and editing; but one of the heftiest requirements often goes overlooked: IT. Technology is fundamental to the media industry's ability to successfully create the next Avatar or Toy Story, especially as the IT requirements for such an endeavor are inordinately robust.
Media professionals regularly work with some of the largest high-res, long-form content files, which can reach hundreds of GBs if not TBs on a single project. As a result, the upload and download times and transcoding process can become very lengthy, delaying both productivity and production. Even the most successful media companies grapple with finding the ideal environment that’s capable of handling such high performance computing requirements.
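Some simple arithmetic shows why those waits hurt. Assuming a hypothetical 500 GB master file and two illustrative link speeds (real throughput will be lower once protocol overhead and contention are counted):

```python
# Rough transfer-time arithmetic for large media files. File size and
# bandwidth figures are illustrative; real throughput depends on the
# link, protocol overhead and contention.

file_gb = 500                               # one high-res, long-form master
links_mbps = {"100 Mbps office line": 100,
              "1 Gbps data-centre uplink": 1000}

for name, mbps in links_mbps.items():
    seconds = file_gb * 8000 / mbps         # GB -> megabits, then divide by rate
    print(f"{name}: ~{seconds / 3600:.1f} hours")
```

At 100 Mbps the single upload alone takes over eleven hours, before any transcoding even begins.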
Recently, I was featured on GigaOM, discussing the pros and cons of in-house data center ownership for public cloud infrastructure-as-a-service providers. With the widespread media attention dedicated to recent outages, including Amazon’s latest downtime, it’s evident that maintaining consistent data center uptime can be a real challenge. So, the question remains, are IaaS providers equipped to deliver resilient data center facilities in addition to their public cloud infrastructure offerings?
At a high level, the article describes how, while there are certain advantages for cloud providers operating data centers in-house, including greater control, capacity, power and security, the challenges, such as geographic expansion, connectivity, location, cost and lower-tier facilities, often outweigh the benefits.