
Steps to create an NFS Mount for pods

Perform the steps below on the NFS server:
In our case, the NFS share is hosted on a separate server. We created a directory /MTCache and added an entry for it in /etc/exports. The /etc/exports file on the NFS server lists the directories that are accessible over NFS.
The entry in /etc/exports on the NFS server is:
/MTCache *(rw,sync,no_root_squash)

NFS Options:
Some of the options that can be used in the /etc/exports file are as follows (an example combining them appears after this list).
1.     ro: Grants clients read-only access to the shared files.
2.     rw: Grants clients both read and write access within the shared directory.
3.     sync: Replies to requests only after the changes have been committed to stable storage.
4.     no_subtree_check: Disables subtree checking. When a shared directory is a subdirectory of a larger file system, NFS scans every directory above it to verify its permissions and details. Disabling subtree checking can improve NFS reliability but slightly reduces security.
5.     no_root_squash: Allows the root user on the client to access the exported directory as root (root requests are not mapped to an anonymous user).
For more options for /etc/exports, refer to the exports man page.
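As an illustration, a more restrictive variant of the export used above could limit access to a specific client network; 192.168.1.0/24 below is only a placeholder for your cluster's node subnet:
/MTCache 192.168.1.0/24(rw,sync,no_root_squash)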

Apply the updated export file by running the command below:
# exportfs -a
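
To confirm that the directory is now exported with the intended options, the current export list can be checked with exportfs in verbose mode:
# exportfs -v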

Perform the steps below on a master node of the OpenShift cluster:
Persistent volumes (PVs) and persistent volume claims (PVCs) provide a convenient method for sharing a volume across a project.
Create two files nfs-pv.yaml and nfs-pvc.yaml.

Creating the Persistent Volume (nfs-pv.yaml)

apiVersion: v1
kind: PersistentVolume
metadata:
  name: mtcachedir
spec:
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  nfs:
    path: /MTCache
    server: nfs.server
    readOnly: false

The values below are the ones to change for your environment, and they have the following significance:

name: mtcachedir  : The name of the PV. This is the PV identity in various oc commands.
storage: 2Gi  : The amount of storage allocated to this volume.
ReadWriteMany  : accessModes are used as labels to match a PV and a PVC. Currently, no access rules are enforced based on the accessModes.
Retain  : The volume reclaim policy Retain indicates that the volume will be preserved after the pods accessing it terminate. Valid options are Retain (the default) and Recycle.
nfs:  : This defines the volume type being used, in this case the nfs plug-in.
/MTCache  : This is the NFS path exported by the NFS server.
nfs.server  : This can be the hostname or IP address of the NFS server.
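If the NFS server's hostname does not resolve from the cluster nodes, the server field can point at the server's IP address instead; 192.0.2.10 below is only a placeholder:
    server: 192.0.2.10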

Save the file with the changes and create the PV with the command below:
# oc create -f nfs-pv.yaml

Verify that the PV was created:
# oc get pv

NAME         CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM             STORAGECLASS   REASON   AGE
mtcachedir   2Gi        RWX            Retain           Bound    drop/mtcachedir                            46m

The next step is to create a PVC (nfs-pvc.yaml), which binds to the new PV:
A persistent volume claim (PVC) specifies the desired access mode and storage capacity. Currently, a PVC is bound to a single PV based on only these two attributes. Once a PV is bound to a PVC, that PV is essentially tied to the PVC's project and cannot be bound by another PVC. There is a one-to-one mapping of PVs and PVCs.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mtcachedir
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 2Gi
mtcachedir : This name is referenced by the pod under its volumes section.
ReadWriteMany : Same as mentioned in PV.
2Gi : This claim looks for PVs offering 2Gi or greater capacity.
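If the claim should bind to the mtcachedir PV specifically, rather than to any PV that happens to match, the PVC spec also accepts an optional volumeName field; adding it is an optional refinement, not part of the original steps:
  volumeName: mtcachedir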

Save the file with the changes and create the PVC with the command below:
# oc create -f nfs-pvc.yaml

Verify that the PVC was created:
# oc get pvc
NAME         STATUS   VOLUME       CAPACITY   ACCESS MODES   STORAGECLASS   AGE
mtcachedir   Bound    mtcachedir   2Gi        RWX                           56m

Creating the pod:
The main areas in the pod object definition are shown below (a complete example pod file follows these snippets):
volumeMounts:
  - name: mtcachedir
    mountPath: /usr/bin/cache
volumes:
  - name: mtcachedir
    persistentVolumeClaim:
      claimName: mtcachedir
mountPath is the path as seen inside the container.
claimName is the name of the PVC created above.
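
For reference, a minimal nfs.yaml tying these pieces together could look like the sketch below; the pod name nfs-pod, the container name app, and the image registry.access.redhat.com/ubi8/ubi are placeholders chosen for illustration, not values from the original setup:

apiVersion: v1
kind: Pod
metadata:
  name: nfs-pod            # placeholder pod name
spec:
  containers:
    - name: app            # placeholder container name
      image: registry.access.redhat.com/ubi8/ubi   # placeholder image; use your application image
      command: ["sleep", "infinity"]
      volumeMounts:
        - name: mtcachedir
          mountPath: /usr/bin/cache      # path as seen inside the container
  volumes:
    - name: mtcachedir
      persistentVolumeClaim:
        claimName: mtcachedir            # PVC created above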

Save the pod file (nfs.yaml) and create the pod:
# oc create -f nfs.yaml

The pod should now be up and running.
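
To verify that the NFS share is actually mounted inside the container, the pod status and the mount point can be checked (replace nfs-pod with your pod's name):
# oc get pods
# oc exec nfs-pod -- df -h /usr/bin/cache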

A few errors that may be faced are listed below:
1.     Warning  FailedMount            1m                kubelet, <server> Unable to mount volumes for pod "pod-user-0_drop(19240589-fe0b-11e8-a13c-005056a61c68)": timeout expired waiting for volumes to attach/mount for pod "drop"/"pod-user-0". list of unattached/unmounted volumes=[mtcachedir]
Sol: The above error may occur if the PV and PVC have not been created. Create the PV and PVC as described above and check again.
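A quick way to confirm that the PV and PVC exist and are bound (drop is the project name taken from the error message above; substitute your own):
# oc get pv mtcachedir
# oc get pvc mtcachedir -n drop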

2.     mount.nfs: access denied by server while mounting nfs_server:/MTCache
Sol: You might have added the path to /etc/exports but not re-exported it with:
# exportfs -a

The above error can be confirmed further by checking /var/log/messages on the NFS server, which may contain an entry such as:
rpc.mountd[7792]: refused mount request from nfs_client for /MTCache (/): not exported
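
From the client side, the exports actually offered by the server can also be listed with showmount (nfs.server is the placeholder hostname used earlier):
# showmount -e nfs.server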


