
Deploy a PHP Application on a Kubernetes Cluster with Ubuntu 18.04

Kubernetes (also known as k8s) is an open-source orchestration system. It allows users to deploy, scale, and manage containerized applications with minimum downtime. In this tutorial, you will learn how to deploy a PHP Application on a Kubernetes Cluster.

When running a PHP application, Nginx acts as a proxy to PHP-FPM. Managing these two services in a single container is cumbersome. Kubernetes lets you manage them in two separate containers, and it also allows you to reuse those containers instead of rebuilding a container image for every new version of PHP or Nginx.

You will run your application and proxy service in two separate containers. The tutorial will also provide insights on how to use local storage to create a Persistent Volume (PV) and Persistent Volume Claim (PVC). You will then use this PVC to keep your configuration files and code outside of the container images. After completing this tutorial, you will be able to reuse your Nginx image for other applications that require a proxy server. You can achieve this by passing a configuration, instead of rebuilding the image for it.

Prerequisites

  1. A basic understanding of Kubernetes (k8s) and its objects. Refer to this guide for a detailed overview of the Kubernetes ecosystem.
  2. A Kubernetes cluster that is up and running on Ubuntu 18.04. Follow this tutorial to create your Kubernetes cluster using kubeadm.
  3. In addition, you need to host your application code on a public URL, for example, GitHub.

Step 1: Create PHP-FPM and Nginx Services

This step will help you create PHP-FPM and Nginx services. A service provides access to a set of pods within a cluster. Services in a cluster can communicate with each other by name, without needing IP addresses. The PHP-FPM service and Nginx service will provide access to the PHP-FPM and Nginx pods, respectively.

You will need to tell Nginx how to find the PHP-FPM pods, since Nginx acts as a proxy for them. For this, you will take advantage of Kubernetes’ automatic service discovery and use human-readable names to route requests to the respective service.

To create any service, you will need to create a YAML file that contains the object definition. This YAML file has at least the following fields:

  1. apiVersion: The Kubernetes API version to which the definition belongs.
  2. kind: The kind of Kubernetes object this YAML file creates. For instance: a service, a job, or a pod.
  3. metadata: The name of the object and any labels that you might want to apply to it are defined under this field.
  4. spec: This field contains the specification of your object, such as environment variables, the container image to use, and the ports on which the container will be accessible.

Creating the PHP-FPM service

To start with, you should create a directory to keep your Kubernetes object definitions. Log in to your master node and create a directory named "definitions":
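
    mkdir definitions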

Change into the definitions directory:
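
    cd definitions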

Next, create and open your PHP-FPM service file, php_fpm_service.yaml, in your editor, for example with nano:
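
    nano php_fpm_service.yaml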

After that, set the apiVersion and kind in the php_fpm_service.yaml file:
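
    apiVersion: v1
    kind: Service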

Name your service php-fpm, as it will provide access to your PHP-FPM application:
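
    metadata:
      name: php-fpm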

Label your php-fpm service with tier: backend, as the PHP application will run behind this service:
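
    metadata:
      name: php-fpm
      labels:
        tier: backend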

A service uses the selector labels to determine which pods to access. Any pod that matches these labels, irrespective of when the pod was created, is serviced. You will learn how to add labels to your pods later in this tutorial.

Include the tier: backend label, which assigns your pod to the backend tier, along with the app: php label to indicate that the pod runs a PHP-FPM application. These are the same labels you will add to your PHP pods later in this tutorial. Add them under spec.selector, after the metadata section:
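
    spec:
      selector:
        app: php
        tier: backend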

Next, you need to declare the port to access this php-fpm service under spec. You can add any port of your choice, but we will use port 9000 in this tutorial:
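
      ports:
      - protocol: TCP
        port: 9000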

Once done with the above steps, your php_fpm_service.yaml file will look like this:
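
    apiVersion: v1
    kind: Service
    metadata:
      name: php-fpm
      labels:
        tier: backend
    spec:
      selector:
        app: php
        tier: backend
      ports:
      - protocol: TCP
        port: 9000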

Press Ctrl + O to save the file, then Ctrl + X to exit nano.

Applying the kubectl command to create the PHP service

Now that the object definition for your service is ready, run the kubectl apply command with the -f flag, specifying your php_fpm_service.yaml file:
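
    kubectl apply -f php_fpm_service.yaml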

The output of the above command should be:
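
    service/php-fpm created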

Run the below command to verify that your php-fpm service is running:
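
    kubectl get svc php-fpm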

You will be able to see the php-fpm service up and running:
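
The cluster IP and age will differ in your cluster:

    NAME      TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
    php-fpm   ClusterIP   10.100.219.110   <none>        9000/TCP   1m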

Note: Kubernetes supports various types of services. Your php-fpm service uses the default ClusterIP service type. This type of service assigns an internal IP and makes the service reachable from within the Kubernetes cluster only.

Creating the Nginx service

Since your PHP-FPM service is ready now, it is time you create your Nginx service as well. Create and open a new YAML file for this service, called nginx_service.yaml in the editor:
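
    nano nginx_service.yaml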

Name this service nginx, as it will target the Nginx pods. This service also belongs to the backend, so add a tier: backend label to it:
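
    apiVersion: v1
    kind: Service
    metadata:
      name: nginx
      labels:
        tier: backend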

As we did in the php-fpm service, add the selector labels app: nginx and tier: backend to target the pods. Add the default HTTP port 80 to access this service:
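
    spec:
      selector:
        app: nginx
        tier: backend
      ports:
      - protocol: TCP
        port: 80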

The Nginx service should be publicly accessible on the internet through a public IP address. Use your worker node’s public IP in place of your_public_ip. Add the externalIPs field under spec:
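
      externalIPs:
      - your_public_ip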

Your nginx_service.yaml file should look like the one below once you complete all of the above steps:
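
    apiVersion: v1
    kind: Service
    metadata:
      name: nginx
      labels:
        tier: backend
    spec:
      selector:
        app: nginx
        tier: backend
      ports:
      - protocol: TCP
        port: 80
      externalIPs:
      - your_public_ip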

Save and close the file after adding all the required parameters above.

Applying the kubectl command to create the Nginx service

As with the PHP-FPM service, run the kubectl apply command with the -f flag, specifying your nginx_service.yaml file:
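
    kubectl apply -f nginx_service.yaml
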
You should see the below output for the above command:
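
    service/nginx created
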
Now, execute the following command to view all your running services:
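
    kubectl get services
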
By running the above command, you should be able to see both your PHP-FPM and Nginx services up and running:
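
The cluster IPs and ages will differ in your cluster:

    NAME         TYPE        CLUSTER-IP       EXTERNAL-IP      PORT(S)    AGE
    kubernetes   ClusterIP   10.96.0.1        <none>           443/TCP    13m
    nginx        ClusterIP   10.102.249.86    your_public_ip   80/TCP     1m
    php-fpm      ClusterIP   10.100.219.110   <none>           9000/TCP   5m
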
Note that if you wish to delete any of your running services, you can execute the below command:
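
For example, replacing service-name with the name of the service you want to delete:

    kubectl delete svc service-name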

Step 2: Create Local Storage and Persistent Volume

Kubernetes provides various storage plug-ins that help you create storage space for your environment. This step will guide you through creating a local StorageClass and then using it to create a Persistent Volume.

Creating a local StorageClass

Create a file, say storageClass.yaml, in your editor:
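
    nano storageClass.yaml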

Add kind as "StorageClass" and apiVersion as "storage.k8s.io/v1" as follows:
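
    apiVersion: storage.k8s.io/v1
    kind: StorageClass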

Name this StorageClass as "my-local-storage" and add provisioner and volumeBindingMode as follows:
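
A sketch of these fields; local volumes do not support dynamic provisioning, so the no-provisioner plug-in is used here, and volume binding is delayed until a pod actually consumes the claim:

    metadata:
      name: my-local-storage
    provisioner: kubernetes.io/no-provisioner
    volumeBindingMode: WaitForFirstConsumer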

Save and exit the file. Your final storageClass.yaml file should look like this:
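
    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: my-local-storage
    provisioner: kubernetes.io/no-provisioner
    volumeBindingMode: WaitForFirstConsumer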

Now, create the StorageClass by running the kubectl create command, as below:
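
    kubectl create -f storageClass.yaml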

After running the above command, you should get the below output:
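
    storageclass.storage.k8s.io/my-local-storage created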

Creating local Persistent Volume

After creating the local StorageClass, you can create your local Persistent Volume. A Persistent Volume (PV) is a piece of storage of a specified size whose life cycle is independent of any pod. A local Persistent Volume is simply a local disk or directory available on a Kubernetes cluster node, and it lets you consume local storage through a Persistent Volume Claim in a simple yet portable manner. You will create this local Persistent Volume using the StorageClass you just created. Open a file, say persistentVolume.yaml, in your editor:
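
    nano persistentVolume.yaml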

Give this persistent volume a name, say "my-local-pv":
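
    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: my-local-pv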

You can add storage capacity as per your usage while creating a local Persistent Volume. In this tutorial, we will use 5 Gi for the storage:
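
    spec:
      capacity:
        storage: 5Gi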

Add the accessModes and persistentVolumeReclaimPolicy, and set the storageClassName to the same name used in storageClass.yaml:
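
A sketch, using ReadWriteOnce to match the Persistent Volume Claim you will create in the next step:

      accessModes:
      - ReadWriteOnce
      persistentVolumeReclaimPolicy: Retain
      storageClassName: my-local-storage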

Note: persistentVolumeReclaimPolicy tells you as to what happens to the Persistent Volume once its claim (Persistent Volume Claim) is released. There are three valid options for this parameter: Retain, Delete and Recycle. In our code, we will use the Retain option. For more details, you can check the persistentVolumeReclaimPolicy field here: https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.21/#persistentvolumeclaim-v1-core

Add the local.path for your Persistent Volume as below:
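
      local:
        path: /mnt/disk/vol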

Note: Make sure this local path (/mnt/disk/vol) exists on your Kubernetes cluster node.

After adding all the required fields, your persistentVolume.yaml file should look like this:
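
A sketch of the complete file. The nodeAffinity section pins the volume to the node that holds the local path; here the node name is worker (see the note below):

    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: my-local-pv
    spec:
      capacity:
        storage: 5Gi
      accessModes:
      - ReadWriteOnce
      persistentVolumeReclaimPolicy: Retain
      storageClassName: my-local-storage
      local:
        path: /mnt/disk/vol
      nodeAffinity:
        required:
          nodeSelectorTerms:
          - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
              - worker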

Note: You must use the correct node name of your machine under the nodeAffinity section. In this case, it is “worker.”

Preparing the local volume

Now, you need to prepare the local volume on the “worker” node, as referenced in the persistentVolume.yaml file. Run the below commands on the node that you configured in persistentVolume.yaml. In this case, it is the “worker” node:
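
For example (the path matches the one in persistentVolume.yaml; the permission mode shown here is only an example, so adjust it to your own requirements):

    sudo mkdir -p /mnt/disk/vol
    sudo chmod 777 /mnt/disk/vol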

Note: Make sure you have sufficient permissions to create the directory and change its permissions as shown above. If not, run the commands as the appropriate user.

Run the below command on the master node where your persistentVolume.yaml file is present:
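
    kubectl create -f persistentVolume.yaml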

You should get the below output:
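
    persistentvolume/my-local-pv created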

Since you have successfully created your local storage and Persistent Volume, you can now go ahead and create a Persistent Volume Claim to hold your application code and configuration files.

Step 3: Create the Persistent Volume Claim

Your application code needs to be kept safe while you manage or update your pods. For this, you will use the Persistent Volume created in the previous step, which is accessed through a PersistentVolumeClaim, or PVC. Your pods will mount this PVC at the required path.

Open a file, say code_volume.yaml, in your editor:
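
    nano code_volume.yaml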

Name your PVC as code by adding the below parameters and values to your file:
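
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: code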

The spec section of a PVC has the following items:

  1. accessModes: There are various possible values for this field as follows:
    • ReadWriteOnce – Mounts the volume for a single node with both read and write permissions.
    • ReadOnlyMany – Mounts the volume for many nodes with only read permission.
    • ReadWriteMany – Mounts the volume for many nodes with both read and write permissions.
  2. resources: Defines the required storage space.

Since the local storage is mounted only to a single node, you will need to set the accessMode to ReadWriteOnce. In this tutorial you will add only a small chunk of application code, hence 1GB of storage will be sufficient here. However, if you wish to store a larger amount of data or code, you can modify the storage parameter according to your requirements. Note that once the volume is created, you will be able to increase the storage size. However, decreasing it is not supported:
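
    spec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 1Gi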

Now, declare the storage class that the Kubernetes cluster will use to provision the volume. Use the my-local-storage storage class, created in the previous step, as your storageClassName:
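
      storageClassName: my-local-storage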

After completing the above steps, your code_volume.yaml file should look like this:
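
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: code
    spec:
      storageClassName: my-local-storage
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 1Gi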

Now save and exit the file.

Creating PVC

Create the code PVC by running the kubectl apply command:
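
    kubectl apply -f code_volume.yaml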

You should get the following output, which indicates that the object was created successfully and that your 1GB PVC is ready to be mounted as a volume:
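
    persistentvolumeclaim/code created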

You can execute the following command to check the available Persistent Volume (PV):
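
    kubectl get pv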

The output of the above command should be as follows:
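
The output will be similar to the following; because the storage class delays binding, the volume shows as Available until a pod actually uses the claim:

    NAME          CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS       REASON   AGE
    my-local-pv   5Gi        RWO            Retain           Available           my-local-storage            1m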

All the above fields, except for Reclaim Policy and Status, are an overview of your configuration file. The Reclaim Policy defines what happens to the PV once the PVC accessing it is deleted. The value Delete removes the PV from the Kubernetes cluster as well as from the storage infrastructure. You can refer to the Kubernetes PV documentation to have a clear understanding of Reclaim Policy and Status.

You can now create your pods using a Deployment as you have successfully created your Persistent Volume using the local storage.

Step 4: Create Deployment for your PHP-FPM Application

This step will help you create your PHP-FPM pod using a Deployment. A Deployment uses ReplicaSets to provide a stable way to create, update, and manage your pods, and it can automatically roll your pods back to a previous image if an update fails.

The spec.selector key in the Deployment lists all the labels of the pods it manages. It also uses the template key to create the pods that are required.

In this step, we will also introduce Init Containers. Init Containers run one or more commands before the regular containers specified under the pod’s template start. Here, the Init Container will use GitHub Gist (https://gist.github.com/) to fetch a sample index.php file. A minimal example of such a file, which simply prints the PHP configuration information, is:
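
    <?php
    phpinfo();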

Creating PHP Deployment

Open a new file named php_deployment.yaml in your editor to create your Deployment:
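
    nano php_deployment.yaml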

Now, name the Deployment object php, as this Deployment will manage your PHP-FPM pods. Add the label tier: backend, because the pod belongs to the backend tier:
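
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: php
      labels:
        tier: backend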

Using the replicas parameter, specify the number of copies of this pod that should be created. The number of replicas may vary based on your requirements and the available resources. In this tutorial, you will create only one replica of your pod:
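
    spec:
      replicas: 1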

Add the app: php and tier: backend labels under the selector key to denote that this Deployment will manage pods that match these two labels:
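
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: php
          tier: backend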

Now, your pod’s object definition needs a template under your Deployment spec. This template defines the specification that is needed to create your pod. To start with, add the labels that were specified in the php-fpm service selector and in the Deployment’s matchLabels: add app: php and tier: backend under template.metadata.labels:
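
      template:
        metadata:
          labels:
            app: php
            tier: backend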

Note: A pod can have multiple containers or volumes and each of those will need a different name to be able to differentiate them.  You can specify a mount path for each volume to selectively mount that volume to a container.

First, you need to specify all the volumes that your containers will access. Name this volume code as you had created a PVC named code to hold your application code:
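
        spec:
          volumes:
          - name: code
            persistentVolumeClaim:
              claimName: code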

Next, specify the container name along with the image that you want to run inside your pod. There are various images available on Docker Hub (https://hub.docker.com/explore/), but in this tutorial, we will use the php:7-fpm image:
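
          containers:
          - name: php
            image: php:7-fpm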

Now, mount the volumes to which the container requires access. Since this container will run your PHP code, it will need access to the code volume created in the previous step. In this step, you will also learn how to copy your application code using an Init Container.
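
          containers:
          - name: php
            image: php:7-fpm
            volumeMounts:
            - name: code
              mountPath: /code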

Note: You can either use a single initContainer to run a script that builds your application, or you can use one initContainer per command, depending on the complexity of your setup process. You need to make sure that the volumes are mounted to the initContainer.

To download the code, this tutorial uses a single Init Container based on busybox. Busybox is a small image that includes the wget utility, which you will use to achieve this.

First, add your initContainer under spec.template.spec and specify the busybox image:
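
          initContainers:
          - name: install
            image: busybox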

Then, in order to download the code in the code volume, your Init Container will need access to it. Mount the volume code at the /code path under spec.template.spec.initContainers:
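
          initContainers:
          - name: install
            image: busybox
            volumeMounts:
            - name: code
              mountPath: /code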

Every Init Container needs a command to run. This Init Container will use wget to download the code from GitHub into the /code directory. You can pass the -O option to give the downloaded file a name; here, name it index.php.

Note: Make sure you trust the code you are pulling using the Init Container into your server. You can inspect the source code and ensure that you are comfortable with what it does.

In addition, add the below lines under the install container in spec.template.spec.initContainers:
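
A sketch of the command; replace https://your_code_url/index.php with the raw URL of your own index.php file, for example the raw link of a GitHub Gist:

            command:
            - wget
            - "-O"
            - "/code/index.php"
            - https://your_code_url/index.php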

After you complete all these steps, your php_deployment.yaml file should look like this:
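
A sketch of the complete file, again using the placeholder code URL:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: php
      labels:
        tier: backend
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: php
          tier: backend
      template:
        metadata:
          labels:
            app: php
            tier: backend
        spec:
          volumes:
          - name: code
            persistentVolumeClaim:
              claimName: code
          containers:
          - name: php
            image: php:7-fpm
            volumeMounts:
            - name: code
              mountPath: /code
          initContainers:
          - name: install
            image: busybox
            volumeMounts:
            - name: code
              mountPath: /code
            command:
            - wget
            - "-O"
            - "/code/index.php"
            - https://your_code_url/index.php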

You can now save the file and exit. Next, create your PHP-FPM Deployment using kubectl apply command:
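
    kubectl apply -f php_deployment.yaml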

Successful creation of the Deployment should give you the below output:
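
    deployment.apps/php created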

This Deployment starts by downloading the specified images, then it will request the PersistentVolume from your PersistentVolumeClaim, and then run your initContainers. Once this step is done, the containers will run and mount the volumes to the specified mount point. After completing all these steps your pod will be up and running.

You can run the below command to view your Deployment:
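
    kubectl get deployments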

After running the above command, you should get the below output:
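
Depending on your kubectl version, the columns may differ slightly; an output in the older format, which matches the fields discussed below, looks similar to:

    NAME      DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
    php       1         1         1            0           19s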

You can understand the current state of the Deployment with the help of this output. A Deployment is a controller that maintains the desired state. The DESIRED field specifies that it has 1 replica of the pod named php. The CURRENT field indicates how many replicas of the DESIRED state are running at present. For a healthy pod, this should match the DESIRED state. You can learn more about the remaining fields on the Kubernetes Deployments Documentation.

After that, to check the status of your running pod, you can run the below command:
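
    kubectl get pods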

The output of this command can vary depending on the time that has passed since you created your Deployment. If it is run shortly after creating the Deployment, the output will be similar to:
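
The generated pod name suffix and the timings will differ in your cluster; the output will be similar to:

    NAME                   READY     STATUS     RESTARTS   AGE
    php-86d59fd666-bf8zd   0/1       Init:0/1   0          9s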

Explanation:

These columns represent the information as below:

  • Ready: The number of containers in the pod that are ready, out of the total number of containers.
  • Status: The status of your pod. Init:0/1 indicates that the Init Containers are running and 0 out of 1 Init Containers have finished running.
  • Restarts: The number of times the containers in this pod have restarted.

It can take a few minutes for your pod’s status to change to PodInitializing, depending on the complexity of your startup scripts:
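
    NAME                   READY     STATUS            RESTARTS   AGE
    php-86d59fd666-bf8zd   0/1       PodInitializing   0          39s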

This indicates that the Init Containers have run successfully and now, the containers are initializing:
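
    NAME                   READY     STATUS    RESTARTS   AGE
    php-86d59fd666-bf8zd   1/1       Running   0          1m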

As you can see now, your pod is up and running. However, in case your pod does not start, you can run the below commands for debugging purposes:

1. To view detailed information of the pod:
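
Replace pod-name with the name shown by kubectl get pods:

    kubectl describe pods pod-name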

2.  To view the logs of the pod:
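
    kubectl logs pod-name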

Note: There are multiple options available for the kubectl logs command; you can run kubectl logs --help to explore them.

3. To view the logs of a specific container in the pod:
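
    kubectl logs pod-name container-name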

Congratulations! You have successfully mounted the application code and the PHP-FPM service is ready to handle connections. Similarly, you can create your Nginx Deployment.

Step 5: Create your Nginx Deployment

This step will guide you on how to configure Nginx using a ConfigMap. A ConfigMap keeps all your required configurations in a key-value format that will be used in other Kubernetes object definitions. With this approach, you will have the flexibility to reuse or swap the Nginx image with a different version, as and when required. You can update the ConfigMap and it will automatically replicate those changes to any pod that is mounting this ConfigMap.

To begin with, open an nginx_configMap.yaml file in your editor:
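
    nano nginx_configMap.yaml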

Now, name this ConfigMap nginx-config and add the tier: backend label to it:
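
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: nginx-config
      labels:
        tier: backend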

Next, add the data to the ConfigMap. Add a key named config and provide the contents of the Nginx configuration file as its value.

Since it is possible for Kubernetes to route requests to the respective hosts for a service, you can enter the name of your PHP-FPM service under the fastcgi_pass parameter instead of its IP address. Add the following lines of code to your nginx_configMap.yaml file:
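
A sketch of a minimal server block, assuming the php-fpm service name and port 9000 from Step 1 and the application code mounted at /code:

    data:
      config: |
        server {
          index index.php index.html;
          error_log  /var/log/nginx/error.log;
          access_log /var/log/nginx/access.log;
          root /code;

          location / {
            try_files $uri $uri/ /index.php?$query_string;
          }

          location ~ \.php$ {
            try_files $uri =404;
            fastcgi_split_path_info ^(.+\.php)(/.+)$;
            fastcgi_pass php-fpm:9000;
            fastcgi_index index.php;
            include fastcgi_params;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            fastcgi_param PATH_INFO $fastcgi_path_info;
          }
        }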

Once completed, your nginx_configMap.yaml file will look like this:
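
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: nginx-config
      labels:
        tier: backend
    data:
      config: |
        server {
          index index.php index.html;
          error_log  /var/log/nginx/error.log;
          access_log /var/log/nginx/access.log;
          root /code;

          location / {
            try_files $uri $uri/ /index.php?$query_string;
          }

          location ~ \.php$ {
            try_files $uri =404;
            fastcgi_split_path_info ^(.+\.php)(/.+)$;
            fastcgi_pass php-fpm:9000;
            fastcgi_index index.php;
            include fastcgi_params;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            fastcgi_param PATH_INFO $fastcgi_path_info;
          }
        }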

You can now save and exit the editor. Next, execute the kubectl apply command to create the ConfigMap:
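
    kubectl apply -f nginx_configMap.yaml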

After that, you should see the below output on your screen:
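
    configmap/nginx-config created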

You have successfully created your Nginx Configmap. Now you can create your Nginx Deployment.

Creating Nginx Deployment

To begin with, you can create a new file named nginx_deployment.yaml in the editor:
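
    nano nginx_deployment.yaml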

Name this Deployment nginx and add the tier: backend label to it:
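
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: nginx
      labels:
        tier: backend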

After that, specify the replica count by adding the replicas field in the Deployment spec, and add the app: nginx and tier: backend labels to the selector:
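
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: nginx
          tier: backend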

Similarly, add the pod template. Make sure you add the same labels that you had added in the Deployment’s selector.matchLabels. You can add the following:
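
      template:
        metadata:
          labels:
            app: nginx
            tier: backend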

Give Nginx access to the code PVC that was created earlier by adding the following parameters under spec.template.spec.volumes:
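
        spec:
          volumes:
          - name: code
            persistentVolumeClaim:
              claimName: code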

A pod can mount a ConfigMap as a volume. By specifying a key and a path (file name), Kubernetes creates a file at that path whose content is the key’s value. Here, you will create a file named site.conf from the key config. Add the following under spec.template.spec.volumes:
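
          - name: config
            configMap:
              name: nginx-config
              items:
              - key: config
                path: site.conf
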
Warning: The contents of the key will replace the volume’s mountPath if a file is not specified. In other words, you will lose all the contents in the destination folder if a path is not explicitly specified.

Now, specify the name, image, and port that you want to use in your pod. Here, we will use nginx:1.7.9 image and port 80. Add them under spec.template.spec section:
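
          containers:
          - name: nginx
            image: nginx:1.7.9
            ports:
            - containerPort: 80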

Also, mount the code volume at /code as both Nginx and PHP-FPM will need to access the file at the same path:
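
            volumeMounts:
            - name: code
              mountPath: /code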

The nginx:1.7.9 image automatically loads any configuration file under the /etc/nginx/conf.d folder. If we mount the config volume in this directory, it will create /etc/nginx/conf.d/site.conf. Add the following under the volumeMounts section:
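
            - name: config
              mountPath: /etc/nginx/conf.d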

After completing all the above steps, your nginx_deployment.yaml file should look like this:
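
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: nginx
      labels:
        tier: backend
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: nginx
          tier: backend
      template:
        metadata:
          labels:
            app: nginx
            tier: backend
        spec:
          volumes:
          - name: code
            persistentVolumeClaim:
              claimName: code
          - name: config
            configMap:
              name: nginx-config
              items:
              - key: config
                path: site.conf
          containers:
          - name: nginx
            image: nginx:1.7.9
            ports:
            - containerPort: 80
            volumeMounts:
            - name: code
              mountPath: /code
            - name: config
              mountPath: /etc/nginx/conf.d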

You can now save and exit the file, and create the Nginx Deployment by running the following command:
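
    kubectl apply -f nginx_deployment.yaml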

On successful execution of the command, you should see the following output:
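
    deployment.apps/nginx created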

You can list all your Deployments by executing the below command:
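
    kubectl get deployments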

You should now see both the Nginx and PHP-FPM Deployments:
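
The ages will differ in your cluster:

    NAME      DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
    nginx     1         1         1            1           25s
    php       1         1         1            1           14m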


Further, you can execute the following command to list the pods that are managed by both the Deployments listed above:
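
    kubectl get pods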

You will see that both your pods are up and running, like the following:
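
The generated pod names and ages will differ in your cluster:

    NAME                     READY     STATUS    RESTARTS   AGE
    nginx-7bf5476b6f-zppml   1/1       Running   0          32s
    php-86d59fd666-bf8zd     1/1       Running   0          7m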


Since all your Kubernetes objects are active at this point, you can now access the Nginx service on your browser.

Run the following command to list the services:
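
    kubectl get services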

Note down the External IP of your Nginx service:
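
The EXTERNAL-IP column of the nginx service shows the public IP you configured (your_public_ip here is a placeholder):

    NAME         TYPE        CLUSTER-IP       EXTERNAL-IP      PORT(S)    AGE
    kubernetes   ClusterIP   10.96.0.1        <none>           443/TCP    2h
    nginx        ClusterIP   10.102.249.86    your_public_ip   80/TCP     15m
    php-fpm      ClusterIP   10.100.219.110   <none>           9000/TCP   25m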

Now, using this External IP of the Nginx service, you can visit your server by typing http://your_public_ip in your browser. You should see the output of phpinfo(), which confirms that your Kubernetes services are up and running.

Conclusion

In this tutorial, you containerized the PHP-FPM and Nginx services so that you can manage them independently. By doing so, you not only improve the scalability of your project, but you also use your resources more efficiently. You also learned how to create local storage and a Persistent Volume to keep your application code on a volume, so that you can easily update your services in the future. This improves the usability and maintainability of your code.

Furthermore, take a look at our other tutorials focusing on Docker and Kubernetes that you can find on our blog:

Happy Computing!