Steps to create an NFS Mount for pods

Perform the below steps on the NFS server. In our case, the NFS share is hosted on a separate server. We created a directory /MTCache and added an entry for it in /etc/exports, the file on the NFS server that lists the accessible NFS directories. The entry in /etc/exports on the NFS hosting server is:

/MTCache *(rw,sync,no_root_squash)

NFS options: some of the options we can use in /etc/exports for file sharing are as follows.

1.  ro: Provides read-only access to the shared files, i.e. the client will only be able to read.
2.  rw: Allows the client server both read and write access within the shared directory.
3.  sync: Confirms requests to the shared directory only once the changes have been committed.
4.  no_subtree_check: Disables subtree checking. When a shared directory is a subdirectory of a larger file system,
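As a quick sketch of the flow above (the server name nfs-server and the client mount point /mnt/MTCache are hypothetical placeholders, not from the original setup):

```shell
# Compose the /etc/exports entry described above and the matching
# client-side mount command. "nfs-server" and /mnt/MTCache are
# placeholders, not values from the original environment.
export_dir=/MTCache
options="rw,sync,no_root_squash"
printf '%s *(%s)\n' "$export_dir" "$options"    # line to append to /etc/exports

# On the server, re-export after editing /etc/exports:
echo "exportfs -ra"
# On the client (or as a pod's NFS volume source), the share would be mounted with:
echo "mount -t nfs nfs-server:$export_dir /mnt/MTCache"
```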

ss command in Linux - more powerful than netstat

Today I want to share the ss (socket statistics) command, used to investigate the network and debug TCP connections. ss is used to dump socket statistics. It shows information similar to netstat, but it can display more TCP and state information than other tools, and it is present on most of our Linux machines. ss has a lot of options; to list them all, run ss --help. A few commands and outputs worth trying are below.

ss -t stands for TCP: it lists the TCP connections currently open on the system.

# ss -t
State   Recv-Q Send-Q   Local Address:Port   Peer Address:Port
ESTAB   0      352      ...
ESTAB   0      0        ...
ESTAB
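A few more ss invocations worth keeping at hand (flag meanings as documented in the iproute2 ss man page):

```shell
# Common ss invocations; all flags are standard iproute2 options.
ss -t                 # TCP sockets
ss -u                 # UDP sockets
ss -l                 # listening sockets only
ss -tlnp              # TCP listeners, numeric ports, owning process (root shows all PIDs)
ss -s                 # per-protocol summary counters
ss -t state established '( dport = :443 )'   # established TCP sessions to port 443
```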

Initial basic checks before proceeding to DB tunings in MSSQL

Check your SQL Server environment before tuning for performance. When we have issues with slow responses to our DB queries, we often jump straight to tuning the query, assuming the SQL Server itself is tuned and the problem must lie with the queries. In practice it is better to make a habit of checking the basic configuration settings of the SQL Server before analyzing the queries more deeply. Today I will cover some of those settings we should always verify in the SQL Server environment.

Check 1: Database and transaction log files on separate drives. To obtain optimal SQL performance, it is recommended to separate the data and the log files onto separate physical drives. Placing both data AND log files on the same device can cause contention for that device, resulting in poor performance. Placing the files on separate drives allows the I/O activity to occur at the same
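To verify Check 1, one option (a sketch, assuming a local instance reachable via sqlcmd with Windows authentication; the connection flags are assumptions, not from the original post) is to list each database's data and log file locations from sys.master_files:

```shell
# List data (ROWS) and log (LOG) file paths per database; if a database's
# .mdf and .ldf live on the same drive, Check 1 fails.
# "-S localhost -E" (local server, integrated auth) is illustrative only.
sqlcmd -S localhost -E -Q "
SELECT DB_NAME(database_id) AS db_name,
       type_desc,            -- ROWS = data file, LOG = transaction log
       physical_name
FROM   sys.master_files
ORDER  BY database_id, type_desc;"
```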

Linux tuning while running heavy load using JMeter

While running our JMeter load tests on a Unix box with a target of 2500 concurrent users, we got the exception "Non HTTP response code:,Non HTTP response message: Cannot assign requested address" in the JMeter log. We had set the open files limit to 50000 and ran the test, but we were still getting the errors.

$ ulimit -a
core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 31182
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 50000
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 8192
cpu time               (seconds, -t) unlimited
max user processes              (-u) 4096
virtual memory
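The excerpt is cut off before the fix, but "Cannot assign requested address" usually indicates ephemeral-port exhaustion on the load generator rather than a file-descriptor limit. A common remedy (a sketch, not necessarily what the original post did; values are illustrative and the sysctl writes require root) is to widen the client port range and allow reuse of TIME_WAIT sockets:

```shell
# Show the current ephemeral port range (the default is often 32768-60999):
sysctl -n net.ipv4.ip_local_port_range

# Widen the range and allow reuse of TIME_WAIT sockets for outgoing
# connections (both require root; values here are illustrative):
sysctl -w net.ipv4.ip_local_port_range="1024 65535"
sysctl -w net.ipv4.tcp_tw_reuse=1

# Raise the open-files limit for the shell that launches JMeter:
ulimit -n 65535
```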

Starting Perfmon on all Windows machines with a single Batch file

In one of our projects we have more than 4 Windows servers. We use perfmon to monitor resource utilization, and each time we run a load test we start perfmon on all the servers. So I created the batch file below to do the job. The batch file does the following:

1.  Create a directory named with the current date and time.
2.  Start perfmon on all 4 AR servers.
3.  Wait for 1 hour.
4.  Copy the result file (.csv) from each server into the directory created in step 1.
5.  Stop perfmon on all 4 AR servers.

@echo off
for /f %%I in ('wmic os get localdatetime ^|find "20"') do set dt=%%I
REM dt format is now YYYYMMDDhhmmss...
REM set dt=%dt:~4,2%-%dt:~2,2%-%dt:~0,4%
set dt=%dt:~6,2%%dt:~4,2%%dt:~2,2%%dt:~8,2%%dt:~10,2%
echo %dt%
mkdir D:\Perfmon\Perfmon_%dt%
logman start "Counter" -s server1
logman start "Counter" -s server2
logman start "Counter" -s server3
logman start "Counter" -s server4
timeout /t 3600
copy \\server1\c
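For reference, the substring juggling in set dt=%dt:~6,2%%dt:~4,2%%dt:~2,2%%dt:~8,2%%dt:~10,2% just rearranges WMIC's YYYYMMDDhhmmss output into a DDMMYYhhmm stamp. On Linux the same stamp is a single date call, shown here only to clarify the batch logic (the /tmp path is illustrative):

```shell
# Build the same DDMMYYhhmm stamp the batch file derives from WMIC output.
dt=$(date +%d%m%y%H%M)
echo "$dt"                       # day, month, 2-digit year, hour, minute

# Create the per-run results directory; /tmp is used only for illustration.
mkdir -p "/tmp/Perfmon_$dt"
```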

Beware of NewRatio when using Concurrent Mark Sweep GC as the garbage collector

In the Sun JDK there were previously a few bugs filed under "JDK-6872335: NewRatio ignored when UseConcMarkSweepGC set". Although these bugs are marked fixed, IMO the issue still exists for the default setting of NewRatio when UseConcMarkSweepGC is used: the default NewRatio=2 is still not honored together with UseConcMarkSweepGC.

Option #1: Default GC (-XX:+UseParallelGC): PSYoungGen total 1835008K = 1.75GB (as expected, since the default NewRatio=2 is considered).

# /usr/bin/java -server -Xms6144m -Xmx6144m -XX:MaxMetaspaceSize=256m -XX:+PrintCommandLineFlags -XX:+PrintGCDetails -version
-XX:InitialHeapSize=6442450944 -XX:MaxHeapSize=6442450944 -XX:MaxMetaspaceSize=268435456 -XX:+PrintCommandLineFlags -XX:+PrintGCDetails -XX:+UseCompressedClassPointers -XX:+UseCompressedOops -XX:+UseParallelGC
java version "1.8.0_60"
Java(TM) SE Runtime Environment
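One way to see the effect for yourself (a sketch; it assumes a JDK 8 java on the PATH, since UseConcMarkSweepGC was deprecated in JDK 9 and removed in JDK 14) is to compare the JVM's effective young-generation sizing flags under both collectors:

```shell
# Print the effective young-generation flags with the default collector...
java -Xms6144m -Xmx6144m -XX:+UseParallelGC \
     -XX:+PrintFlagsFinal -version 2>/dev/null | grep -E 'NewSize|NewRatio'

# ...and with CMS (JDK 8 only); compare the resulting NewSize/MaxNewSize
# values to see whether NewRatio=2 was actually honored.
java -Xms6144m -Xmx6144m -XX:+UseConcMarkSweepGC \
     -XX:+PrintFlagsFinal -version 2>/dev/null | grep -E 'NewSize|NewRatio'
```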