Last month, our CTO, Robert Jenkins, spoke at CORDIS’ Research in Future Cloud Computing event in Brussels, Belgium. During the event, Robert presented to a diverse audience, focusing on three key areas of research with the potential to have a major impact on the future of the cloud: operating systems, ecosystems, and scalable storage. Be sure to let us know what you think in the comments section.
Cloud Computing Operating Systems
Hardware virtualization is a very well established technology with a number of hypervisor options, and by and large the same is true of cloud computing management. Customers, however, install standard operating systems that aren’t really aware of their virtual environment, which leads to many restrictions and, in many cases, sub-optimal performance.
For example, operating systems can’t recognize ‘hot’ changes to resources. While the hypervisor can allocate additional RAM or CPU to a virtual machine, the operating system can’t recognize or see it. The result is that vertical scaling in the cloud, that is, making individual virtual machines bigger or smaller, requires power cycling the virtual machine. This is disruptive and, in many cases, not practical. Instead, people largely scale horizontally by clustering their computing over many virtual machines and monitoring load. This is quite wasteful: operating system resources are duplicated across those machines, and clusters add communication and management overhead. This is one key area that needs addressing in order to continue to make computing in the cloud more efficient.
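To make the hot-plug problem concrete: on Linux, hot-added RAM typically shows up as memory blocks whose state is "offline" until the operating system (or an administrator) explicitly brings them online. The sketch below is purely illustrative, with the sysfs-style block names and states simulated as plain data rather than read from a real system; it is not CloudSigma's code.

```python
# Illustrative sketch: which hot-plugged memory blocks would a
# hot-plug-aware OS need to bring online? On Linux these appear as
# /sys/devices/system/memory/memoryN/state entries reading "offline".
# The sample data below is a simulated snapshot, not a real sysfs read.

def blocks_to_online(block_states):
    """Return the names of memory blocks still waiting to be onlined."""
    return sorted(name for name, state in block_states.items()
                  if state == "offline")

if __name__ == "__main__":
    # Simulated state after the hypervisor hot-adds RAM to the VM:
    states = {
        "memory0": "online",
        "memory1": "online",
        "memory2": "offline",  # newly hot-added, invisible to apps
        "memory3": "offline",
    }
    print(blocks_to_online(states))  # prints ['memory2', 'memory3']
```

A guest that ran logic like this (for example from a udev rule) could absorb hot-added RAM without a reboot, which is exactly the kind of virtualization awareness the paragraph above argues standard operating systems lack.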
Cloud Ecosystems
Companies and institutions in today’s connected economies have wide and varied ecosystems of suppliers and customers surrounding them. Each industry, in turn, has unique workflows and requirements. Moving individual companies and individual components of those workflows to the cloud isn’t effective, and often destroys many of the economic benefits of using the cloud through time delays, accessibility issues and increased data transfer costs.
Ecosystems have a vital role to play in delivering widespread cloud adoption within industries. Building ecosystems in the cloud means working within an industry or product area to move supply chains and workflows into the cloud in a more holistic way. This keeps work and infrastructure within the cloud. At the same time, it supercharges coordination, reduces lead times, and speeds up iteration cycles, both within companies and between collaborating entities.
A great example of an ecosystem being built is the Helix Nebula consortium. CloudSigma has been working as part of Helix Nebula to build ecosystems around the big data and processing requirements of leading scientific institutions. Using the cloud as a hub, the huge data transfers happen internally over 10GigE links within the cloud. Results from new data are quickly pushed out to a wider audience, and outside entities can draw on that data for their own needs at little or no cost, all within the cloud.
Similar to the workflow and big data requirements of the scientific community, the digital media sector would greatly benefit from collaborative cloud-based hubs. This is precisely what we’ve created with our Media Services Ecosystem. CloudSigma’s Media Services Ecosystem allows service providers and production companies working in the film, music and other media industries to easily collaborate and share data. We’re basically giving media companies one roof under which to work together more efficiently, regardless of their geographic location.
Eliminating the long transfer times and high data transfer costs that plague most productions is critical to offering a viable alternative to the slow, high-cost in-house systems in use today. Ecosystems will, therefore, be a pivotal driver of cloud adoption in many industries in the future, and customers and cloud vendors need to work together to create them.
Scalable Cloud Storage
Currently, at CloudSigma, we are working to optimize storage systems to go beyond the performance levels of dedicated hardware. We believe public cloud computing as a delivery mechanism can offer customers higher performance at a lower cost.
Storage for public clouds has to date been largely dominated by dealing with the problems created by the increasingly random-looking I/O requests that multi-tenancy produces. In principle, the cloud can offer individual customers higher and less variable performance by spreading storage loads across a greater bed of hardware. When a customer runs its own SAN environment, for example, the load on the system correlates entirely with that particular company’s usage. Moving to the cloud spreads that load over a wider install base, drawing on additional resource capacity during peak times and making the impact of any one customer on any one particular piece of infrastructure minimal.
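The smoothing effect of pooling many tenants can be shown with a small statistical sketch. Assuming, purely for illustration, that each tenant's storage load is independent and uniformly distributed, the relative variability (coefficient of variation) of the aggregate load falls as more tenants share the pool; the load model and numbers are assumptions, not measurements:

```python
import random

def relative_variability(num_tenants, samples=2000, seed=42):
    """Coefficient of variation (std dev / mean) of total load when
    num_tenants independent tenants share one storage pool. Each
    tenant's load is modelled as uniform on [0, 100), an assumption
    chosen only to illustrate the pooling effect."""
    rng = random.Random(seed)
    totals = [sum(rng.uniform(0, 100) for _ in range(num_tenants))
              for _ in range(samples)]
    mean = sum(totals) / samples
    variance = sum((t - mean) ** 2 for t in totals) / samples
    return (variance ** 0.5) / mean

if __name__ == "__main__":
    # A lone SAN-style workload swings far more, relative to its mean,
    # than the aggregate of 100 tenants on shared infrastructure.
    print(f"1 tenant:    {relative_variability(1):.3f}")
    print(f"100 tenants: {relative_variability(100):.3f}")
```

Under these assumptions the pooled workload's relative swings shrink roughly with the square root of the tenant count, which is why shared infrastructure can absorb one customer's peak without dedicated headroom.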
To take advantage of this load spreading, storage systems used in the cloud need to evolve from today’s SAN and local storage systems to distributed systems with modular, scalable, high-availability designs. Some of the components are now coming together; however, a lot more work remains to be done.
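One building block such distributed designs commonly rely on is consistent hashing, which places objects on nodes so that adding or removing a node reshuffles only a fraction of the data. The toy ring below is a generic sketch of that technique, not a description of CloudSigma's storage; the node names and virtual-node count are illustrative assumptions:

```python
import bisect
import hashlib

class ConsistentHashRing:
    """Toy consistent-hash ring mapping object keys to storage nodes.
    Each node gets many 'virtual nodes' on the ring so load spreads
    evenly; a sketch of common distributed-storage placement logic."""

    def __init__(self, nodes, vnodes=64):
        self._ring = []  # sorted list of (hash, node) pairs
        for node in nodes:
            for i in range(vnodes):
                self._ring.append((self._hash(f"{node}#{i}"), node))
        self._ring.sort()

    @staticmethod
    def _hash(value):
        return int(hashlib.md5(value.encode()).hexdigest(), 16)

    def node_for(self, key):
        """Walk clockwise from the key's hash to the next node marker."""
        idx = bisect.bisect(self._ring, (self._hash(key),))
        return self._ring[idx % len(self._ring)][1]

if __name__ == "__main__":
    ring = ConsistentHashRing(["node-a", "node-b", "node-c"])
    for key in ("vm-disk-1", "vm-disk-2", "backup.tar"):
        print(key, "->", ring.node_for(key))
```

Because placement is a pure function of the key and the ring, any client can locate data without a central directory, which is one reason designs like this scale more gracefully than a single SAN head.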
Object storage is an area seeing astronomical growth as customers take advantage of the total scalability and convenience of an outsourced storage arrangement. The amount of data in the world continues to grow at an accelerating rate, and with it, the use and importance of object-based storage will continue to increase. To date, outside of Amazon’s S3, full-featured object storage environments have been very limited. A lot of work remains to build the open source components and standards that can widen the install base for customers of object storage who don’t want to get locked in to any one proprietary platform.
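For readers less familiar with the model, the core of an S3-style object store is just a flat namespace of keys mapping to immutable blobs, with prefix listing standing in for directories. The in-memory toy below captures only that model; the keys and metadata are made-up examples, and real systems layer durability, versioning and an HTTP API on top:

```python
class ToyObjectStore:
    """Minimal in-memory sketch of the object-storage model: string
    keys map to byte blobs plus metadata in one flat namespace.
    Illustrative only; not a real client for S3 or any other service."""

    def __init__(self):
        self._objects = {}

    def put(self, key, data, metadata=None):
        self._objects[key] = (bytes(data), dict(metadata or {}))

    def get(self, key):
        data, _metadata = self._objects[key]
        return data

    def list(self, prefix=""):
        # Prefix queries are how object stores emulate folders.
        return sorted(k for k in self._objects if k.startswith(prefix))

if __name__ == "__main__":
    store = ToyObjectStore()
    store.put("media/raw/shot-001.mov", b"...", {"codec": "prores"})
    store.put("media/raw/shot-002.mov", b"...")
    store.put("logs/2013-05-02.txt", b"ok")
    print(store.list(prefix="media/"))
    # prints ['media/raw/shot-001.mov', 'media/raw/shot-002.mov']
```

Because the interface is this small, open standards and open source implementations can in principle make object storage portable across providers, which is the lock-in concern raised above.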
Overall, a significant number of areas for innovation remain in the IaaS space. They will deliver year-on-year improvements in price/performance for the foreseeable future. As these innovations mature, it’s clear that the public cloud delivery mechanism will continue to gain ground, while legacy systems become more and more uncompetitive.
To see Robert’s full presentation from the event, follow this link http://cordis.europa.eu/fp7/ict/ssai/docs/future-cc-2may-jenkins-presentation.pdf, and let us know what you think in the comments section below!