Source: https://cloud.netapp.com/blog/kubernetes-shared-storage-the-basics-and-a-quick-tutorial
Introduction
Kubernetes, the awesome container orchestration tool, is changing the way applications are developed and deployed. You can specify the resources you need and have them available without worrying about the underlying infrastructure. Kubernetes is way ahead in terms of high availability, scaling, and managing your applications, but storage in Kubernetes is still evolving. Support for many storage backends is being added, and much of it is production-ready.
People prefer clustered applications to store their data. But what about non-clustered applications? Where do these applications store data so that it stays highly available? With these questions in mind, let’s go through Ceph storage and its integration with Kubernetes.
What is Ceph Storage?
Ceph is open source, software-defined storage maintained by Red Hat. It is capable of block, object, and file storage. Ceph clusters are designed to run on any hardware using an algorithm called CRUSH (Controlled Replication Under Scalable Hashing). This algorithm ensures that all data is properly distributed across the cluster and can be accessed quickly without any bottlenecks. Replication, thin provisioning, and snapshots are the key features of Ceph storage.
There are other good storage solutions like Gluster and Swift, but we are going with Ceph for the following reasons:
- File, Block, and Object storage in the same wrapper.
- Better transfer speed and lower latency
- Easily accessible storage that can quickly scale up or down
We are going to integrate two types of Ceph storage with Kubernetes in this blog: Ceph-RBD (block) and CephFS (file system).
Ceph Deployment
Deploying a highly available Ceph cluster is pretty straightforward and easy. I am assuming that you are familiar with setting up a Ceph cluster; if not, refer to the official documentation here.
If you check the status, you should see something like the following:
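As a rough sketch (the exact layout varies between Ceph releases, and the monitor names are my assumption), a healthy cluster reports HEALTH_OK and lists the three monitors:

    $ sudo ceph -s
        cluster <cluster-id>
         health HEALTH_OK
         monmap e1: 3 mons at {mon1=10.0.1.118:6789/0,mon2=10.0.1.227:6789/0,mon3=10.0.1.172:6789/0}
                election epoch 6, quorum 0,1,2 mon1,mon2,mon3
         ...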
Notice here that my Ceph monitor IPs are 10.0.1.118, 10.0.1.227, and 10.0.1.172.
K8s Integration
After setting up the Ceph cluster, we will consume it from Kubernetes. I am assuming that your Kubernetes cluster is up and running. We will be using Ceph-RBD and CephFS as storage in Kubernetes.
Ceph-RBD and Kubernetes
We need a Ceph RBD client to enable interaction between the Kubernetes cluster and Ceph-RBD. This client is not included in the official kube-controller-manager container, so let’s create an external storage plugin for Ceph.
- Check the repo here
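A minimal sketch of deploying the RBD provisioner, assuming the rbd-provisioner manifests from the kubernetes-incubator/external-storage repository and the kube-system namespace (adjust the namespace and paths to your setup):

    # Clone the repo that ships the rbd-provisioner manifests
    git clone https://github.com/kubernetes-incubator/external-storage.git
    cd external-storage/ceph/rbd/deploy
    # Point the RBAC bindings at the namespace we deploy into
    NAMESPACE=kube-system
    sed -r -i "s/namespace: [^ ]+/namespace: $NAMESPACE/g" ./rbac/clusterrolebinding.yaml ./rbac/rolebinding.yaml
    # Create the service account, RBAC objects, and the provisioner deployment
    kubectl -n $NAMESPACE apply -f ./rbac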
- You will get output like this:
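With the manifests above, the apply step prints one line per created object, roughly like:

    clusterrole.rbac.authorization.k8s.io/rbd-provisioner created
    clusterrolebinding.rbac.authorization.k8s.io/rbd-provisioner created
    deployment.apps/rbd-provisioner created
    role.rbac.authorization.k8s.io/rbd-provisioner created
    rolebinding.rbac.authorization.k8s.io/rbd-provisioner created
    serviceaccount/rbd-provisioner created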
- Check the RBD volume provisioner status and wait till it is in the Running state. You should see something like the following:
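For example (the pod name suffix will differ on your cluster; I am assuming the kube-system namespace used above):

    $ kubectl get pods -n kube-system | grep rbd-provisioner
    rbd-provisioner-75b85f85bd-sjt9x   1/1   Running   0   1m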
- Once the provisioner is up, it needs the admin key to provision storage. You can run the following command to get the admin key:
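A sketch of fetching the admin key and storing it in a Kubernetes secret for the provisioner. The secret name ceph-admin-secret and the kube-system namespace are my assumptions; the storage class later refers to them:

    # Print the admin key from the Ceph cluster
    sudo ceph auth get-key client.admin
    # Store it in a secret the provisioner and storage class can reference
    kubectl create secret generic ceph-admin-secret \
        --type="kubernetes.io/rbd" \
        --from-literal=key="$(sudo ceph auth get-key client.admin)" \
        --namespace=kube-system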
- Let’s create a separate Ceph pool for Kubernetes and a new client key for it:
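For example, assuming a pool named kube (the placement-group count of 128 is illustrative; size it for your cluster):

    # Create a dedicated pool for Kubernetes volumes
    sudo ceph osd pool create kube 128
    # Create a restricted client that can only use the kube pool
    sudo ceph auth get-or-create client.kube mon 'allow r' osd 'allow rwx pool=kube'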
- Get the auth token of the client we created in the above command and create a Kubernetes secret for the new kube pool client:
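A sketch, reusing the client.kube user from the previous step and creating a user secret named ceph-secret-kube (both names are assumptions and are referenced again in the storage class):

    # Print the key of the new client
    sudo ceph auth get-key client.kube
    # Create the user secret for the kube pool
    kubectl create secret generic ceph-secret-kube \
        --type="kubernetes.io/rbd" \
        --from-literal=key="$(sudo ceph auth get-key client.kube)" \
        --namespace=kube-system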
- Now let’s create the storage class.
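A minimal sketch of the storage class, assuming the ceph.com/rbd provisioner deployed above, the monitor IPs from my cluster, and the secret names created earlier (the class name ceph-rbd and all of these values are assumptions to adapt to your environment):

    cat <<EOF | kubectl apply -f -
    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: ceph-rbd
    provisioner: ceph.com/rbd
    parameters:
      monitors: 10.0.1.118:6789,10.0.1.227:6789,10.0.1.172:6789
      pool: kube
      adminId: admin
      adminSecretName: ceph-admin-secret
      adminSecretNamespace: kube-system
      userId: kube
      userSecretName: ceph-secret-kube
      userSecretNamespace: kube-system
      imageFormat: "2"
      imageFeatures: layering
    EOF

The admin credentials are used by the provisioner to create RBD images in the pool, while the user credentials are what the nodes use to map and mount those images.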
- We are all set now. We can test Ceph-RBD by creating a PVC; once the PVC is created, a PV will get created automatically. Let’s create the PVC now:
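A sketch of such a claim, assuming the ceph-rbd storage class name used above (the claim name and size are illustrative):

    cat <<EOF | kubectl apply -f -
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: ceph-rbd-claim
    spec:
      accessModes:
        - ReadWriteOnce
      storageClassName: ceph-rbd
      resources:
        requests:
          storage: 1Gi
    EOF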
- If you check the PVC, you will find that it is bound to the PV that was created by the storage class.
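For example (illustrative output; the claim name matches the sketch above, and the volume name and age will differ):

    $ kubectl get pvc ceph-rbd-claim
    NAME             STATUS   VOLUME      CAPACITY   ACCESS MODES   STORAGECLASS   AGE
    ceph-rbd-claim   Bound    pvc-<uid>   1Gi        RWO            ceph-rbd       25s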
- Let’s check the persistent volume:
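The dynamically created volume shows up with a generated name and a Delete reclaim policy (illustrative output):

    $ kubectl get pv
    NAME        CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                    STORAGECLASS   AGE
    pvc-<uid>   1Gi        RWO            Delete           Bound    default/ceph-rbd-claim   ceph-rbd       30s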
So far we have seen how to use block-based storage, i.e. Ceph-RBD, with Kubernetes by creating a dynamic storage provisioner. Now let’s go through the process of setting up storage using file-system-based storage, i.e. CephFS.
CephFS and Kubernetes
- Let’s create the provisioner and storage class for CephFS. First, create a dedicated namespace for CephFS:
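For example, assuming a namespace called cephfs (referenced in the later steps):

    kubectl create namespace cephfs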
- Create the Kubernetes secret using the Ceph admin auth token:
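A sketch, assuming the secret is named ceph-secret-admin and lives in the cephfs namespace (the CephFS storage class below refers to both names):

    kubectl create secret generic ceph-secret-admin \
        --from-literal=key="$(sudo ceph auth get-key client.admin)" \
        --namespace=cephfs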
- Create the cluster role, role binding, and provisioner:
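A minimal sketch, again assuming the cephfs-provisioner manifests from the kubernetes-incubator/external-storage repository cloned earlier (adjust the paths if the repo layout differs):

    cd external-storage/ceph/cephfs/deploy
    # Point the RBAC bindings and the deployment at the cephfs namespace
    NAMESPACE=cephfs
    sed -r -i "s/namespace: [^ ]+/namespace: $NAMESPACE/g" ./rbac/*.yaml
    # Create the cluster role, role bindings, service account, and provisioner deployment
    kubectl -n $NAMESPACE apply -f ./rbac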
- Create the storage class:
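A minimal sketch of the CephFS storage class, assuming the ceph.com/cephfs provisioner deployed above, the same monitor IPs, and the ceph-secret-admin secret in the cephfs namespace (all names are assumptions):

    cat <<EOF | kubectl apply -f -
    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: cephfs
    provisioner: ceph.com/cephfs
    parameters:
      monitors: 10.0.1.118:6789,10.0.1.227:6789,10.0.1.172:6789
      adminId: admin
      adminSecretName: ceph-secret-admin
      adminSecretNamespace: cephfs
    EOF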
- We are all set now. The CephFS provisioner is created. Let’s wait till it gets into the Running state:
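For example (illustrative output; the pod name suffix will differ):

    $ kubectl get pods -n cephfs
    NAME                                  READY   STATUS    RESTARTS   AGE
    cephfs-provisioner-7b77478cb8-7nnxs   1/1     Running   0          1m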
- Once the CephFS provisioner is up, try creating a persistent volume claim. In this step, the storage class will take care of creating the persistent volume dynamically:
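A sketch of the claim, assuming the cephfs storage class and namespace from above. CephFS volumes can be mounted by many pods at once, so ReadWriteMany is a natural access mode here:

    cat <<EOF | kubectl apply -f -
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: cephfs-claim
      namespace: cephfs
    spec:
      accessModes:
        - ReadWriteMany
      storageClassName: cephfs
      resources:
        requests:
          storage: 1Gi
    EOF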
- Let’s check the created PV and PVC:
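Illustrative output; the claim should be Bound and a matching PV should exist:

    $ kubectl get pvc -n cephfs
    NAME           STATUS   VOLUME      CAPACITY   ACCESS MODES   STORAGECLASS   AGE
    cephfs-claim   Bound    pvc-<uid>   1Gi        RWX            cephfs         20s
    $ kubectl get pv
    NAME        CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                 STORAGECLASS   AGE
    pvc-<uid>   1Gi        RWX            Delete           Bound    cephfs/cephfs-claim   cephfs         25s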
Conclusion
We have seen how to integrate Ceph storage with Kubernetes, covering both Ceph-RBD and CephFS in the process. This approach is highly useful when your application is not a clustered application and you are looking to make it highly available.
*****************************************************************
This post was originally published on Velotio Blog.
Velotio Technologies is an outsourced software product development partner for technology startups and enterprises. We specialize in enterprise B2B and SaaS product development with a focus on artificial intelligence and machine learning, DevOps, and test engineering.
Interested in learning more about us? We would love to connect with you on our Website, LinkedIn, or Twitter.