Last month, our CEO Robert Jenkin joined other industry thought leaders, including Paul Miller, Jo Maitland and Dana Gardner, in the GigaOM webinar: How to make Big Data Work in the Cloud. As Big Data and cloud go hand-in-hand and are often discussed simultaneously, the four panelists had a lot to cover, including hybrid clouds, cloud security and the importance of blending customers’ computing requirements.
Webinar attendees, whose positions ranged from C-level executive to IT manager, were given the poll question: ‘Where do you currently process Big Data?’ As expected, ‘private clouds’ scored highest at 32 percent, but, much to the panelists’ surprise, ‘hybrid clouds’ landed further down the list, at just 11 percent.
Typically, the scalability, speed and cost benefits push enterprises in the direction of the cloud, but the cloud’s ongoing security concerns, as well as the impact the cloud’s location can have on performance and legal compliance, lead the same companies to keep critical data on dedicated hardware. In fact, attendees named security as the biggest consideration when choosing where to process Big Data, according to the webinar’s second poll question. But, as with many of the obstacles that keep companies from migrating to the cloud, lack of security is often a vendor problem rather than a problem inherent in cloud computing. Innovative public cloud providers with open software layers and no deployment restrictions have already addressed the issue, allowing customers to manage and configure security measures as they see fit.
Throughout the webinar, security wasn’t the only factor top of mind for companies weighing the cloud for their Big Data needs. Capacity management has also been a longstanding issue for cloud providers, and the panelists agreed it must be overcome before many companies with fluctuating Big Data workloads can move to the cloud. Forward-thinking cloud providers have not only addressed this issue but leveraged it to the mutual benefit of themselves and their customers. By enlisting customers from a variety of verticals, providers can balance out compute demand so that customers whose requirements aren’t immediate don’t interfere with those whose are. Illustrating this point, Robert explained that, after all, “meteorologists don’t need to be doing weather predictions on Black Friday.”
As the webinar came to a close, the panelists cited a few other obstacles to Big Data computing in the cloud, such as some providers’ unfavorable “menu approach” and companies’ inability to truly compare performance across providers. As the cloud becomes more and more mainstream, there is still plenty of room for improvement and opportunity in Big Data and beyond. Regardless, the cloud offers many companies relief from their growing Big Data woes, and as the market advances, many are likely to find that the speed, scalability and performance benefits the cloud offers are second to none.
If you missed last month’s webinar, you can view a recording of the Big Data cloud webinar.