
Nine Strategies For Securing Your Linux Server

You Are Vulnerable

A system’s exposure is termed the “attack surface,” and all security measures attempt to reduce or remove what an attacker can learn about your system or how she can attack it. Security is a trade-off between what the industry recommends as best practices and what your business or objectives need to survive.

The moment that you expose a system to the Internet, you put it in danger. Like thieves checking doors and windows, thousands of infected machines scan for new victims 24 hours a day, and without a solid foundation of security, your servers will not survive.

What we want to achieve is a compromise that keeps us safe.

  1. What we can remove, we remove.
  2. What we can’t remove, we protect.
  3. What we don’t control, we isolate.

Security is never convenient. Human nature takes us down the path of least resistance, so it’s important to understand that trading security for convenience increases risk. Make informed decisions about that risk by imagining what might happen if the results aren’t what you had hoped for. Don’t trade short-term rewards (ease of access, not having to type a password) for long-term pain (compromised system, ransomware, loss of customer trust).

There are two groups of strategies that will help secure your system: system-level and application-level changes. The former covers work that you can do on the system itself, both in practice and in policy. The latter covers work that you can do within your application. Since your application is the largest part of your Internet footprint, it needs the most attention to security.

Today’s article will cover the first set of strategies – system changes that will reduce the attack surface.

Basic Strategies

Install updates and keep them current

When you first install a system, you get the software as it existed at a certain point in time. For Linux machines, it’s as current as the ISO that you downloaded. If you installed over the network from a minimal boot image, then it’s as current as the last time those packages were updated.

In all cases, you want to immediately install all system updates to bring the system up to the current security level for the operating system. Do this from behind some sort of firewall or NAT router — if you have a machine that will come to life already exposed to the Internet, use a network policy to restrict all inbound access other than what you require to connect to it safely. Restrict initial access to your own IP address, and do this before you install the OS.
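
As a sketch on a Debian-based machine, with ufw assumed as the firewall and 203.0.113.10 standing in for your own IP address, that first-boot sequence might look like this:

    # Lock the door first: allow SSH only from your own address
    ufw default deny incoming
    ufw allow from 203.0.113.10 to any port 22 proto tcp
    ufw enable

    # Then bring the system up to the current patch level
    apt update && apt upgrade -y

On other distributions the package manager and firewall tooling differ, but the order of operations is the same: restrict access first, then patch.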

After the install and update, develop a policy for regular updates and follow it. Production machines should not be updated without a person overseeing the process; unattended updates can stop services or change configuration files without warning. This, in turn, puts your business at risk.

Instead, decide on a time when downtime is acceptable (a maintenance window) and run the updates during that time. System updates which affect the kernel or the core OS may require a reboot – be prepared to direct users to a “down for maintenance” page while rebooting.

If you have multiple instances behind a load balancer, plan a rolling update where 50% of the machines are removed from the load balancer, updated, and rebooted, then swapped with the other machines. If you schedule your maintenance window for a period of low traffic, your users won’t notice the interruption.

Remove unwanted software

System defaults are just that: defaults. They exist for you to change them, to tune the system for what you need it to do. Check for services that you don’t need and stop them. If possible, remove the software that runs them. Not only do these services take up valuable resources such as CPU and RAM, but they also expose you to danger in ways that you don’t control. The fewer services that are running, the fewer things that attackers can probe.

Check the services that you need, and make sure that they aren’t running with greater scope than you require. For example, if your webserver doesn’t need to talk to a Java Servlet Container or run CGI scripts or server-side includes, don’t load the modules that allow it to do so.
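
If your webserver happens to be Apache on a Debian-based system, for example, the a2dismod helper disables modules; the module names below are only examples of things you may not need:

    # Turn off CGI and server-side includes if nothing uses them
    a2dismod cgi
    a2dismod include
    systemctl restart apache2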

Don’t keep components in place because you might need them later. Stick with what you know you need now, and you can add more in the future.

Use netstat to find ports that are listening for connections, both over TCP and UDP, and on both IPv4 and IPv6. If you don’t need that service, shut it down. If you can’t shut it down, make sure that your network policy restricts access to that port.
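
For example, either of these will show everything listening on the machine (ss is the modern replacement for netstat and accepts the same flags):

    # -t TCP, -u UDP, -l listening sockets, -p owning process,
    # -n numeric addresses instead of resolved names
    netstat -tulpn
    ss -tulpn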

Use tmpfs and remove the executable bit

When programs need a location to write out temporary information, they write to /tmp or /var/tmp. These directories have special permissions allowing anyone to write to them, and attackers use them for privilege escalation.

If an attacker can compromise an application over the Internet, she might gain system access as the user running that application. For example, a compromise of the webserver, which must be exposed to the Internet in order to do its job, might give her access to the www-data user. This user can’t do anything significant on the system, but it does have access to /tmp.

The attacker can force the webserver to write shell code out to /tmp and then execute that code to retrieve attack payloads from the Internet and execute them in turn. These payloads run outside the scope of the webserver (although still only as the webserver user), so they can reach anything on the system that user can touch. The attacker will use these programs to attempt further compromise of the system, working to escalate her privileges to a user with greater access. Each level becomes a step toward the next.

A simple way to prevent this is to change /etc/fstab to mount /tmp and /var/tmp as a special type of in-memory filesystem called tmpfs. When re-mounting these directories as tmpfs, you can tell the operating system to prevent execution of files under these locations. This means that even if an attacker compromised the webserver and wrote shell code out to /tmp, she would not be able to execute that code to further compromise the system.
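
A sketch of the relevant /etc/fstab entries; the 512M size is an assumption, so tune it to your workload:

    # Mount /tmp and /var/tmp in RAM, with execution, setuid,
    # and device files all disallowed
    tmpfs  /tmp      tmpfs  defaults,noexec,nosuid,nodev,size=512M  0  0
    tmpfs  /var/tmp  tmpfs  defaults,noexec,nosuid,nodev,size=512M  0  0

Remember that tmpfs lives in memory, so anything written there disappears at reboot – which is usually exactly what you want from a temporary directory.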

Use SSH keys for access

If you’re running Linux (or any Unix variant), your system is running SSH for remote access. SSH can use passwords for authentication, but this is not as secure as other methods. The easiest and fastest way to secure SSH is to disable password authentication and enable SSH keys.

Each user needing access to the system generates an SSH keypair (2048 bits or more for RSA keys) and protects it with a passphrase. The passphrase encrypts the private key, which stays on the user’s computer; only the public key is transferred to the server and added to ~/.ssh/authorized_keys in the remote user’s home directory.
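
A minimal sketch of that process; ed25519 is a modern alternative to RSA, and the host name is a placeholder:

    # Generate a keypair on your own computer; choose a strong
    # passphrase when prompted
    ssh-keygen -t ed25519
    # (or, for RSA: ssh-keygen -t rsa -b 4096)

    # Copy the public key into ~/.ssh/authorized_keys on the server
    ssh-copy-id user@server.example.com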

When the user logs into the server, they are asked for the passphrase on their own machine. This decrypts the private key, which is then used to authenticate the user to the remote system.
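
Once every user’s key is in place, password logins can be turned off in /etc/ssh/sshd_config:

    # Keys only, no passwords
    PasswordAuthentication no
    ChallengeResponseAuthentication no
    PubkeyAuthentication yes

Reload sshd afterward, and keep an existing session open while you test a fresh login so a mistake doesn’t lock you out.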

A common mistake with SSH keys is to not provide a passphrase for the private key. This trades security for convenience and exposes the system to any compromise of the remote user’s computer. It’s easier to secure a system that you control than it is to secure the computers of every person who might have login rights to your system.

Always use a passphrase with your SSH keys. To avoid having to type it on every login, run the SSH agent on your system. It will ask for your passphrase once per session and then handle key authentication for you. This strikes a fine balance between security and convenience.
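
Assuming an agent is already running (most desktop Linux sessions start one for you), loading a key looks like this:

    # Add your key to the agent; the passphrase is requested once
    ssh-add ~/.ssh/id_ed25519

    # Confirm which keys the agent is holding
    ssh-add -l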

Use sudo, but use it wisely

Never log in as root except in case of emergency. Use sudo to run commands as root from a non-privileged user account.

One common pattern is to use sudo su - (or sudo -s) to run a shell as the root user. This is the same as logging in as root and is not recommended unless you need to run a series of commands or access shell functions that are only available to root. A solid security practice is to let the extra moment it takes to type sudo before a command give your brain a chance to reflect on what you’re about to run, making sure a typo isn’t about to destroy your system.

Another common pattern is to use sudo with no password. This is super convenient but increases your exposure. If you have a user with full root-level sudo permission that requires no password, you’ve essentially created two root users and now have twice as many entry points to secure.
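
In sudoers terms, the difference is a single word. A sketch, edited via visudo, with alice as a placeholder username:

    # Reasonable: alice may run anything as root, but must
    # authenticate with her own password first
    alice  ALL=(ALL:ALL) ALL

    # Risky: a second root user in all but name
    # alice  ALL=(ALL:ALL) NOPASSWD: ALL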

Read on to understand how this can hurt you.

Secure computers that have login permission

Access to your server isn’t limited to the things that you control. As soon as you grant login rights to a remote user, you expose yourself to the security practices (or the lack thereof) of that individual.

Many people don’t use login passwords or screen saver passwords on their computers. If they also don’t use a password on their SSH key, then anyone who can access their computer can access your server. It’s trivial to look through a user’s history to see where they’ve been logging in, and attackers always use a compromised system as a staging point for compromising other systems.

Imagine if you also don’t use a password with sudo. You’ve taken the concept of “root user” from a local, restricted-access user account and redefined it as “any person who has access to a computer I don’t control.”

There are situations where even the screen lock on a user’s computer won’t prevent someone from compromising the computer. Attackers have access to sophisticated malware with a low barrier to entry. A few seconds of access to a computer’s USB port is all someone needs to compromise the system and enable remote access, keylogging, and a host of other malicious activities.

Advanced Strategies

How do you protect your server from the remote threats I’ve described so far? Power it off and put it in a bank vault?

In order to secure yourself against the risks you don’t control, you have to install security measures that have the broadest scope of coverage.

One of those is called 2FA.

Use two-factor authentication with SSH

Two-factor authentication (2FA), or more generally multi-factor authentication (MFA), requires additional security steps beyond a password. The general concept of 2FA is “something you know” combined with “something you have” or “something you are.”

The password is always the “something you know.”

“Something you have” can be your phone or a physical token that generates passwords with a 60-second window of validity. It might be a USB dongle that you have to plug into a port on the computer.

“Something you are” can be your fingerprint or a retina scan.

Combining two different types of information makes it harder for an attacker to compromise a system. They need the password and your phone, and if your phone is also secured with a fingerprint, they need you.

(None of these strategies will protect you against “rubber hose cryptanalysis,” which is where a dedicated attacker simply beats the password out of you, but if you’re someone who needs that level of protection, you don’t need to be reading this article.)

Two popular smartphone apps that provide 2FA time-based passwords are Google Authenticator and Authy. There are desktop apps that provide the same function, but using them violates the principle of 2FA. If you store your password in an app that also generates the second factor, you simply have two instances of “something you know.” The password that unlocks that app gets an attacker both the system password and the 2FA one-time password.
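
As a sketch on a Debian-based system, the libpam-google-authenticator package wires time-based one-time passwords into SSH; package names and PAM file paths may differ on your distribution:

    # Install the PAM module, then have each user run
    # google-authenticator once to generate a secret and QR code
    apt install libpam-google-authenticator
    google-authenticator

    # /etc/pam.d/sshd -- add:
    #   auth required pam_google_authenticator.so

    # /etc/ssh/sshd_config -- require both a key and a code,
    # re-enabling the challenge-response step:
    #   ChallengeResponseAuthentication yes
    #   AuthenticationMethods publickey,keyboard-interactive

Reload sshd when done, and test from a second session before logging out.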

By securing SSH with 2FA you make it harder for the mistakes of your users to result in a compromise of your server. This keeps control on your side of the court.

Replace OpenSSH with Teleport

If you work in an enterprise environment, you may wish to replace OpenSSH with a compatible SSH server suite called Teleport.

Teleport is 100% compatible with all SSH clients that work with OpenSSH but adds features such as access via a web client, session sharing, session recording for audit purposes, and enhanced security through client certificates.

When you purchase an SSL certificate for your server, you go through an authentication process where a trusted third party validates that your company truly is who you say it is and that you truly own the domain for which the certificate is issued.

A client certificate is the same thing for an individual. Unlike the server certificate, which relies on a third-party authority that you pay, the validation of a client certificate is performed by software that you install and configure yourself. This software validates the user according to its configuration and then issues a certificate that grants the user access to systems that accept it. It is called the certificate authority (CA).

Teleport provides all of these features in its suite for free. (Their enterprise version offers additional features.)

Once you have Teleport installed and configured, users authenticate with their SSH client to the CA server. The CA server validates the user according to password, 2FA, or other means, and then issues a client certificate to the user’s SSH client. The SSH client uses this to connect to the server, offering the client certificate for authentication. The server checks with the CA to see if the certificate is valid. If the CA says that it is, and if the holder of that certificate is allowed access to this server, the user logs in.

The process requires an additional step, but it doesn’t take much longer than a normal login process using SSH keys with a password. The security benefits, however, are tremendous.

The primary advantage of this workflow is that client certificates have a limited duration of validity, after which they expire. By default Teleport issues certificates for 12 hours, but it can issue them for as little as one minute or as long as one week. If the user tries to log in after the certificate has expired, they have to re-authenticate with the CA before the server grants them access.
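
A sketch of that login flow using Teleport’s tsh client; the host and user names are placeholders, and the flags are worth verifying against your Teleport version:

    # Authenticate to the CA and receive a short-lived certificate
    # (--ttl is in minutes; here the certificate lives one hour)
    tsh login --proxy=teleport.example.com --user=alice --ttl=60

    # Connect to a node; the certificate handles authentication
    tsh ssh alice@web01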

You, as the system administrator, can also forcibly expire certificates at any time.

This gives your server the strongest protection from users and their own security practices. No more copying keys around, remembering to remove them when users leave, or dealing with the urgency of a user’s laptop being compromised or stolen – access is granular and automatically managed according to policies you set.

Put your applications in jail

Some applications, like the webserver discussed earlier, have to be visible to the Internet. That’s part of their purpose. When your application incorporates code from unknown sources, such as the WordPress plugins you install to add features to your blog, your exposure goes up. You don’t know who wrote them. You don’t know how talented they were. You don’t know if their code is remotely exploitable and can turn a SQL injection into shell code execution.

You just don’t know.

What if you could put an application into its own isolated piece of the operating system, where it thinks it is running normally but actually has access only to the pieces of the OS it needs to operate?

You can. It’s called a jail.

FreeBSD and Solaris were early pioneers in the concept of jails. In early deployments, jailing used a concept called ‘chroot’ to tell the program that the “root” directory (the first / in all filesystem locations) was someplace other than where it truly was. If a program needed access to /tmp, it thought it was in /tmp, but in reality it might be in /jail/webserver/tmp. Since the program had no access to the real root of the filesystem, it couldn’t break out of its jail to use other programs, write data, or do anything else.

Today’s jails have a much fancier name. They’re called containers.

A container runs one or more processes in a virtualized environment. The container runtime manages the mapping of real system components into the container and protects the system by limiting what the container is allowed to do. Containers have a slew of benefits over classic jails, not least of which is the ability to slice up available system resources using cgroups to designate how much of a system a container is allowed to use. You might have 24GB of RAM, but you don’t want a low-priority container to hijack it all. With cgroups you can tell the runtime to not allow the container to use more than 512MB of RAM, and that container is effectively on a system with 512MB of RAM.
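
Using Docker as one example of a container runtime, that cgroup limit is a single flag; the image and name are placeholders:

    # This container can never use more than 512MB of RAM
    docker run -d --name low-priority --memory=512m nginx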

From a security perspective, the isolation of processes protects the system from the unknown. If you install a plugin for WordPress, and that plugin has a vulnerability that gives remote access to your webserver, nothing else on the system will be affected. It’s not nearly as good as not having a vulnerability at all, but where we don’t have the ability to control the access, we have to take what we can get. Mitigating the damage means less cleanup and a faster time to recovery.

Conclusion

Security is a multi-faceted discipline. It requires attention, perseverance, and knowledge of what you’re running, why you’re running it, and what it means to do so. Play the game of “what happens if this part is compromised?” Figure out the potential damage and work to reduce it, either by removing the vulnerability or mitigating its impact.

The most important thing is that security starts with you, and it starts today. Your enemies never sleep. Don’t give them an opportunity to hurt you.

About Adrian Goins

Adrian started on this Internet adventure with a 300 baud modem in one hand and a C64 in the other. A voracious learner, he zipped through BASIC and BBSes and went bonkers when the Internet arrived. Within months of Slackware coming out he was running Linux systems for ISPs and firms in Colorado, and in a blink of an eye became the lead Java developer for a team at MCI/Worldcom. Two shakes of a lamb's tail after that he was building and managing datacenter environments for dotcoms in New York and London. He then founded a company that for the next 14 years provided outsourced IT management solutions for Fortune 1000 media companies. He is an expert in all things related to system/network/security administration, configuration management, and virtualization, and he has a black belt in making things work. When not building new systems or learning the limits of the latest bits of technology, he lives with his wife in Chile, builds advanced UAVs and aquaponics gardens, and raises peafowl, chickens, and dogs.
