Infrastructure security is arguably one of the most discussed issues surrounding the use of cloud computing, and it isn't a subject I can see losing its importance or interest any time soon. As a company we often receive questions about our opinions on security, so I decided to outline some thoughts and experiences. It's a huge subject, so I've broken it into a few parts; this first instalment looks solely at networking.
When considering network security we are actually looking at a few different areas. First and foremost, there is the need to secure any device connected to the open internet against unauthorised access; that's a common challenge, shared equally by us and our customers. Secondly, we have the job of securing your data in transit as it moves across our network. And finally, we have the important task of ensuring reliable, high-quality networking to the cloud. These are distinct challenges with differing solutions, so we'll look at each in turn.
Keeping Cloud Server Infrastructure Secure
In the area of security, some challenges are unique to the cloud; others are common to the internet in general. The challenge of keeping infrastructure secure and controlling access applies to any device connected to the internet, and the solutions are also no secret. Here are just a few of the policies our company follows:
- we are an all-Linux company (a similar policy was also recently adopted by Google). From office to data centre, all our equipment runs Linux. We adopted this policy to ensure the high level of security our infrastructure requires. Customers, of course, still have a free choice of operating system on their cloud servers.
- all ‘out of data centre’ equipment (desktop computers, laptops etc.) must have fully encrypted hard drives (256-bit AES).
- all access passwords must be at least 8 random alphanumeric characters; out-of-data-centre equipment passwords must be at least 16. We make everyone learn long, random passwords, whatever their role in the company.
- all operating systems must be kept patched for security updates.
- limit access on a per-device basis to only those who need it, and maintain good records of user access permissions.
- unauthorised login attempts should be logged and monitored.
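The password rules above are easier to follow when password generation is automated rather than left to memory. Here's a minimal shell sketch, not a tool mentioned in the article; the function name and default length are my own:

```shell
# Generate a random alphanumeric password of a given length.
# Policy from the list above: at least 8 characters for data-centre
# devices, at least 16 for out-of-data-centre equipment.
gen_password() {
  LC_ALL=C tr -dc 'A-Za-z0-9' < /dev/urandom | head -c "${1:-16}"
  echo    # trailing newline for readability
}

gen_password 16
```

A generated password like this is then stored in a password manager rather than reused across devices, in line with the per-device policy above.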
These are just a few of the measures we employ to maintain a secure environment; it takes effort, but it works. It is a matter of company culture as much as anything else. The simple acts of having a secure password, changing that password regularly and maintaining different passwords for different devices go a very long way towards securing infrastructure that is openly accessible from the internet. Many security systems try to work around the fact that people usually choose insecure passwords; we take the opposite approach: start by making sure everyone has very secure passwords and work from there.
Turning to our cloud servers: we have an open networking policy in our cloud. That means your server does not sit behind any firewall or other protection. It allows you, the user, to set up your server and networking however you need, without outside interference. Our customers love the level of control they can achieve in our cloud, but it's important to secure your infrastructure too. That's why our pre-installed systems don't have SSH turned on initially. It's important that users boot up, access the machine via VNC and reset the default passwords before making it accessible from the outside world.
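As a sketch of that first-boot sequence, the concrete commands here are my own assumptions (package and service names vary by distribution), not steps prescribed by the article:

```shell
# First-boot hardening sketch -- run from the VNC console,
# before the server is reachable from the outside world.
passwd                                 # replace the pre-installed default password
apt-get update && apt-get -y upgrade   # apply pending security updates (Debian-style)
systemctl enable --now ssh             # only now switch SSH on, ideally key-auth only
```

These commands mutate the system and need root, so treat them as a checklist rather than a script to paste blindly.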
For any customer with a multi-server set-up, we highly recommend using our private networking feature. Adding a server to a VLAN gives it a second network card; if desired, the public networking interface can then be disabled. In this way, users can create private networks within our cloud that are not accessible via a public IP address. Many of our customers create private clusters that sit behind specialist gateway servers. Again, our cloud gives you full control to create and secure your own infrastructure in a way akin to dedicated hardware.
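As an illustration of moving a server onto the private VLAN only, the following sketch assumes the second card appears as eth1 and the public interface as eth0; interface names and addresses are illustrative, not values from the article:

```shell
# Private-only networking sketch (requires root; names/addresses assumed).
ip addr add 10.0.10.2/24 dev eth1   # eth1: the card attached to the private VLAN
ip link set eth1 up
ip link set eth0 down               # optionally disable the public interface entirely
```

A gateway server in the cluster would keep its public interface up and route or NAT traffic for the rest.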
The truth is, when it comes to securing it against unauthorised outside access, your cloud server is no more or less secure than its dedicated server equivalent. The same steps and policies that have proved most effective at maintaining the integrity of servers in the dedicated hosting world should be adopted here.
Securing your ‘Data in Transit’ within the Cloud
One of the key challenges of the cloud, and Infrastructure-as-a-Service in particular, is maintaining meaningful separation between different users on common infrastructure. This directly affects the security of cloud users when it comes to keeping ‘data in transit’ secure: it's important that one user is not able to view the network traffic of another. This is the networking challenge in a multi-tenant environment.
As a principle, different users' network traffic should be separated at the lowest level possible, to provide the highest-integrity separation. In our cloud we separate traffic at the hypervisor level. We use the Linux Kernel-based Virtual Machine (KVM) hypervisor; one of the key reasons it was chosen was its speed of development and widespread support base within the Linux community.
With traffic separated at the hypervisor level, no individual user can view the traffic of another. The integrity of that separation is the integrity of the hypervisor itself. KVM is a popular and extremely robust hypervisor; the chance of it being compromised is significantly smaller than the chance of a security hole being found in the software running on a customer's own cloud server.
We don't stop there, however: we also distinguish between private and public network traffic. All traffic between public IP addresses is routed over one set of hardware and networking devices, with private networking traffic routed over an entirely separate physical network. Private data traffic therefore takes a completely different route from public traffic; the two streams never mix. This physical separation means that data within customer VLANs enjoys full physical separation in transit, as well as hypervisor-level separation from other private networking traffic.
With open networking, it's possible for users of the cloud to create secure, encrypted VPN connections between their cluster in the cloud and their corporate infrastructure. This allows end-to-end encryption of data and is used by some of our larger clients.
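As one illustration of such a VPN, here is a sketch using WireGuard; it is one technology among many, the article doesn't prescribe any particular one, and all keys, hostnames and addresses below are placeholders:

```shell
# Site-to-site VPN sketch with WireGuard (requires root; all values assumed).
wg genkey | tee private.key | wg pubkey > public.key
ip link add wg0 type wireguard
ip addr add 10.20.0.1/24 dev wg0
wg set wg0 private-key ./private.key listen-port 51820
wg set wg0 peer <corporate-gateway-public-key> \
      endpoint vpn.example.com:51820 allowed-ips 192.168.0.0/24
ip link set wg0 up
```

Run on a gateway server in the cloud cluster, this gives an encrypted tunnel to the corporate side while the rest of the cluster stays on the private VLAN.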
In our cloud, each cloud server has no shared software resources and is visible to the network as a physical device. What is striking is that, from a networking perspective, the cloud doesn't actually present significant challenges beyond those faced by traditional dedicated server set-ups. One might argue that the ease with which private networks and VPNs can be implemented in our cloud makes their deployment and use a lot more cost effective. From my experience, I'd certainly argue that many of our customers end up with a more robust set-up from a security perspective than they had under their previous arrangements, because of this ease of deployment and the reduced cost of such measures.
The Challenge of Maintaining Network Quality of Service in a Public Cloud
One challenge that is quite unique to cloud computing at the IaaS level is maintaining networking quality of service whilst handling a wide variety of traffic types that are often unpredictable and vary over time.
In a traditional enterprise environment, the infrastructure and the network it uses can be well defined in terms of usage. Traffic is predictable, if not by quantity (which is usually also relatively predictable) then by type. This allows a relatively static network configuration that permits the sort of traffic expected and blocks everything else. Likewise, in such a single-tenant environment, the opportunity for malicious attacks by other tenants (or against them) is of course zero.
When we move to a multi-tenant public cloud environment, everything changes. We now have disparate, unpredictable network traffic that can change significantly in nature from hour to hour, and we must maintain a high-quality networking environment across very different uses.
The first thing to realise is that such a general-purpose network will never perform as predictably, or in such an optimised way, as a specialised dedicated network designed for a single purpose. That said, the question then becomes how much difference in performance actually occurs, and how that weighs up against the many benefits of building on cloud infrastructure. The answer will vary from user to user, depending on their previous experience, the cloud vendor's infrastructure and the nature of their computing needs. As I've outlined before, there is no real replacement for testing and benchmarking your cloud server against your specific usage.
As an IaaS vendor, it's our job to ensure the best quality of service at all times. This breaks down into two categories: precautionary measures and reactionary measures. On the precautionary side, the balance is always between preventing abusive network usage and interfering with legitimate customer traffic. There is no single correct line to draw here. It's easy to eliminate out-and-out abusive traffic without much fear, but there is a very wide band of traffic that could be either abusive or genuine.
On the reactionary side, as a company we don't believe prescriptive measures alone can deliver reliable networking performance over time. Any sensible framework of precautionary measures needs to be complemented by robust reactionary measures. In fact, that is where I personally feel the real work of a public cloud vendor lies in the networking space. Constantly monitoring and analysing network traffic is as much an art as a science, and working closely with upstream providers is the only effective means of stopping distributed denial of service (DDoS) and other attacks. My view is very much that precautionary measures alone fall far short of what is needed nowadays to ensure reliable network performance; it is the ability to react to and neutralise threats that is the only effective means.
Over time, we also seek to lower latency to locations where we see increasing traffic. We do this by monitoring our network latency from all over the globe and peering directly with other ISPs; we are lucky to have over 35 major carriers available to peer with within our building. Through this iterative process we continue to improve our networking performance over time.
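Latency monitoring of this kind often starts with something as simple as parsing ping statistics. A minimal sketch, where the summary line is a canned example of iputils-style ping output rather than a live measurement:

```shell
# Extract the average round-trip time from ping's summary line.
# (Canned sample in the iputils format: rtt min/avg/max/mdev = ... ms)
summary='rtt min/avg/max/mdev = 11.234/12.500/14.012/0.901 ms'
avg=$(printf '%s\n' "$summary" | awk -F'/' '{print $5}')
echo "avg latency: ${avg} ms"   # prints: avg latency: 12.500 ms
```

In practice a probe fleet would feed numbers like this into a time-series store, flagging regions where peering could bring the average down.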
Some of the challenges faced in the network security field are by no means unique to cloud vendors. The multi-tenant nature of a public cloud network is clearly the greatest challenge specific to IaaS cloud vendors. In practice, the security of the cloud's network comes down to a combination of these factors:
- robust company policies;
- well thought out network software and architecture;
- the ability to evolve and react quickly to threats as they present themselves.