Steps to create an NFS Mount for pods

Perform the following steps on the NFS server:
In our case, the NFS export is hosted on a separate server. We created a directory /MTCache and added an entry for it in /etc/exports, the file on the NFS server that lists the directories made accessible over NFS.
The entry in /etc/exports on the NFS server is:
/MTCache *(rw,sync,no_root_squash)

NFS Options:
Some options that can be used in /etc/exports for file sharing are as follows:
1.     ro: Gives clients read-only access to the shared files, i.e. the client will only be able to read.
2.     rw: Allows the client both read and write access within the shared directory.
3.     sync: The server confirms requests to the shared directory only once the changes have been committed to stable storage.
4.     no_subtree_check: Disables subtree checking. When a shared directory is a subdirectory of a larger file system, NFS scans every directory above it in order to verify its permissions and details. Disabling the subtree check can improve the reliability of NFS, but reduces security.
5.     no_root_squash: Allows the client's root user to connect to the exported directory as root.
For more options for /etc/exports, see the exports man page (man exports).

Apply the updated export file by running the command below:
#exportfs -a
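You can then confirm that the directory is actually exported. exportfs -v lists the current exports with their active options, and showmount (which can be run from any client) lists what the server offers; nfs.server below stands for your NFS server's hostname:
#exportfs -v
#showmount -e nfs.server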

Perform the steps below on a master node of the OpenShift cluster:
Persistent volumes (PVs) and persistent volume claims (PVCs) provide a convenient method for sharing a volume across a project.
Create two files, nfs-pv.yaml and nfs-pvc.yaml.

Creating the Persistent Volume (nfs-pv.yaml)

apiVersion: v1
kind: PersistentVolume
metadata:
  name: mtcachedir
spec:
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  nfs:
    path: /MTCache
    server: nfs.server
    readOnly: false

The values below are the ones you will typically change; their significance is as follows:

name: mtcachedir  : The name of the PV. This is the PV's identity in the various oc commands.
storage: 2Gi  : The amount of storage allocated to this volume.
ReadWriteMany  : Access modes are used as labels to match a PV and a PVC. Currently, no access rules are enforced based on the access modes.
Retain  : The volume reclaim policy. Retain means the volume is preserved after the pods accessing it terminate. Valid options are Retain (the default) and Recycle.
nfs:  : Defines the volume type being used, in this case the NFS plug-in.
/MTCache  : The NFS path that is exported by the NFS server.
nfs.server  : The hostname or IP address of the NFS server.

Save the file with the changes and create the PV with below command:
# oc create -f nfs-pv.yaml

Verify that the PV was created:
# oc get pv

NAME         CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM             STORAGECLASS   REASON   AGE
mtcachedir   2Gi        RWX            Retain           Bound    drop/mtcachedir                           46m
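To inspect the PV in more detail (NFS source, reclaim policy, and any events), you can also run:
# oc describe pv mtcachedir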

The next step is to create a PVC (nfs-pvc.yaml), which binds to the new PV:
A persistent volume claim (PVC) specifies the desired access mode and storage capacity. Currently, based on only these two attributes, a PVC is bound to a single PV. Once a PV is bound to a PVC, that PV is essentially tied to the PVC’s project and cannot be bound to by another PVC. There is a one-to-one mapping of PVs and PVCs.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mtcachedir
spec:
  accessModes:
  - ReadWriteMany
  resources:
     requests:
       storage: 2Gi
mtcachedir : This name is referenced by the pod under its volumes section.
ReadWriteMany : Same as in the PV.
2Gi : This claim looks for PVs offering 2Gi or greater capacity.

Save the file with the changes and create the PVC with below command:
# oc create -f nfs-pvc.yaml

Verify that the PVC was created:
# oc get pvc
NAME         STATUS   VOLUME       CAPACITY   ACCESS MODES   STORAGECLASS   AGE
mtcachedir   Bound    mtcachedir   2Gi        RWX                           56m

Creating the pod:
The main areas in the pod object definition are as below (note that volumeMounts goes under the container, while volumes goes under the pod spec):
 volumeMounts:
 - name: mtcachedir
   mountPath: /usr/bin/cache
 volumes:
 - name: mtcachedir
   persistentVolumeClaim:
     claimName: mtcachedir
mountPath is the path as seen inside the containers.
claimName is the name of the PVC created above.
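For reference, a complete nfs.yaml combining the sections above might look like the following. The container name and image are placeholders; replace them with your application's values:

apiVersion: v1
kind: Pod
metadata:
  name: pod-user-0
spec:
  containers:
  - name: app
    image: <your-application-image>
    volumeMounts:
    - name: mtcachedir
      mountPath: /usr/bin/cache
  volumes:
  - name: mtcachedir
    persistentVolumeClaim:
      claimName: mtcachedir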

Save the pod file (nfs.yaml) and create the pod:
# oc create -f nfs.yaml

The pod should now be up and running successfully.
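You can verify the mount from inside the pod. Assuming the pod is named pod-user-0, the following checks that the NFS share is mounted at the expected path, or opens a shell in the pod for direct inspection:
# oc exec pod-user-0 -- df -h /usr/bin/cache
# oc rsh pod-user-0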

A few errors that may be faced are listed below:
1.     Warning  FailedMount            1m                kubelet, <server> Unable to mount volumes for pod "pod-user-0_drop(19240589-fe0b-11e8-a13c-005056a61c68)": timeout expired waiting for volumes to attach/mount for pod "drop"/"pod-user-0". list of unattached/unmounted volumes=[mtcachedir]
Sol: The above error may occur if the PV and PVC have not been created. Create the PV and PVC as described above and check again.

2.     mount.nfs: access denied by server while mounting nfs_server:/MTCache
Sol: You may have added the path to /etc/exports but not re-exported it. Update the exports with:
#exportfs -a

The above error can be confirmed further by checking /var/log/messages, which may contain a line like:
rpc.mountd[7792]: refused mount request from nfs_client for /MTCache (/): not exported
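To isolate such export problems, you can try mounting the share manually from the OpenShift node (the NFS client); nfs_server stands for your NFS server's hostname:
# mkdir -p /mnt/nfstest
# mount -t nfs nfs_server:/MTCache /mnt/nfstest
If the manual mount also fails with "access denied", re-run exportfs -a on the server and retry. Unmount the test directory afterwards with:
# umount /mnt/nfstest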

