Initial basic checks before proceeding to DB tunings in MSSQL

Check your SQL Server environment before tuning for performance

When database queries respond slowly, we often assume the SQL Server instance itself is well tuned and that the problem must lie in the queries. In practice, it is better to make it a common practice to check the basic configuration settings of the SQL Server environment before analyzing the queries in depth. In this post I will touch on some of the settings we should always verify for a properly configured SQL Server environment.

Check 1: Database and Transaction log files on separate drives
To obtain optimal SQL Server performance, it is recommended to place the data files and the transaction log files on separate physical drives. Placing both data and log files on the same device causes contention for that device, resulting in poor performance. Placing the files on separate drives allows I/O activity to occur concurrently for the data and log files: read I/O for database files is typically random (reading database pages), while I/O for the transaction log is typically sequential.

We have observed huge difference in data reconciliation batch jobs after separating database and transaction log files on separate drives.

For example, in our environment the data files are located on the D:\ drive and the log files on the E:\ drive.
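One quick way to verify this layout is to query `sys.master_files`, which lists every file of every database on the instance along with its physical path:

```sql
-- List data (ROWS) and log (LOG) files with their physical locations
-- for every database on the instance.
SELECT DB_NAME(database_id) AS database_name,
       type_desc,                  -- ROWS = data file, LOG = transaction log
       name          AS logical_name,
       physical_name               -- verify the drive letters differ
FROM sys.master_files
ORDER BY database_name, type_desc;
```

If the `physical_name` column shows data and log files sharing the same drive letter for a busy database, that is a candidate for relocation.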

Check 2: Check server Memory settings
Check whether the memory settings were changed from their default values. The default setting for min server memory is 0, and for max server memory it is 2,147,483,647 megabytes (MB), roughly 2 petabytes, which is effectively unlimited. By default, SQL Server can adjust its memory consumption dynamically based on available system resources.

If these values were changed from the defaults to restrict memory consumption, analyze further: is the server dedicated to the database or shared with other applications? Does it host a single instance or multiple instances? More details can be found in the SQL Server documentation on server memory configuration options.

In one of our customer issues, the customer had unknowingly changed the default values to restrictive ones and observed high response times in query executions. When the settings were changed back to the defaults, the issue was resolved.
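As a sketch, the current values can be inspected and, if necessary, restored to the documented defaults with `sp_configure` (the memory options are advanced options, so they must be exposed first):

```sql
-- Memory options are advanced options; expose them first.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

-- Inspect the current values.
EXEC sp_configure 'min server memory (MB)';
EXEC sp_configure 'max server memory (MB)';

-- Restore the documented defaults (0 and 2,147,483,647 MB).
EXEC sp_configure 'min server memory (MB)', 0;
EXEC sp_configure 'max server memory (MB)', 2147483647;
RECONFIGURE;
```

On a shared server you would typically set max server memory to a deliberate value below the defaults rather than restoring them; the defaults are appropriate mainly for a dedicated database server.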

Check 3: Check Log Files
Always leave enough free space on the transaction log drive, and make sure that drive is not overloaded, so that normal operations get the required IOPS and low latency without forcing an auto-grow operation. The transaction log file contains the log records produced during the logging process in a SQL Server database, and it is the most important component of a SQL Server database when it comes to disaster recovery.
An auto-grow event is one of the most performance-intensive operations. Growing the log file can be extremely time-consuming, and it can force DML operations to wait until the growth completes before they can proceed. By making sure there is plenty of free space on the transaction log drive, you can avoid these performance hits altogether.
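Current log size and usage can be checked with the `sys.dm_db_log_space_usage` DMV (available from SQL Server 2012) or the older `DBCC SQLPERF` command:

```sql
-- Log size and used percentage for the current database.
SELECT total_log_size_in_bytes / 1048576.0 AS log_size_mb,
       used_log_space_in_percent
FROM sys.dm_db_log_space_usage;

-- Alternatively, log space usage for all databases on the instance:
DBCC SQLPERF (LOGSPACE);
```

A log that is consistently near full, or a long history of small auto-grow events, suggests the log should be pre-sized to a sensible fixed value instead of relying on auto-grow.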

Check 4: Check for parallelism settings
SQL Server can take advantage of a server's multiple processors. It's able to take some operations and to spread the processing across the processors available to it, with the goal of dividing the work into smaller chunks so that the overall operation performs quicker. There are a couple of instance-wide settings that determine if, or when, the optimizer might generate "parallel execution plans."

Max degree of parallelism, which determines the maximum number of processors that SQL Server can use when executing a parallel query; by default this is set to "0," which means that all available processors can potentially be used to execute a query.

Cost threshold for parallelism, which specifies the threshold, or minimum cost, at which SQL Server creates and runs parallel plans; this cost is an abstract estimate (historically calibrated to elapsed seconds on a reference machine), and the default value is "5." In other words, if the query optimizer estimates a query's cost below 5, it won't consider using parallelism.

Note: There is no hard and fast rule for determining when parallelism will be useful and when it will be costlier. The best approach is to observe the execution times and wait states of queries that use parallelism and, where necessary, either raise the cost threshold for parallelism or adjust the max degree of parallelism.

As an example, after many tests we found settings appropriate to our environment and applied them to our database.
Note: This is a dedicated DB server with 20 CPU cores.
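Both settings can be changed with `sp_configure`. The values below (MAXDOP 8, cost threshold 50) are illustrative only, not the values from our environment; any change should be validated against your own workload:

```sql
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

-- Cap each parallel query at 8 schedulers (example value, not a recommendation).
EXEC sp_configure 'max degree of parallelism', 8;

-- Only consider parallel plans for queries whose estimated cost exceeds 50
-- (example value; the shipped default is 5).
EXEC sp_configure 'cost threshold for parallelism', 50;
RECONFIGURE;
```

Both options take effect immediately after `RECONFIGURE`, without a service restart, so they are convenient to tune iteratively while observing query wait statistics.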

Check 5: Check for Parameterization:
SQL Server provides two types of query parameterization: “Simple” and “Forced”. The “Parameterization” property controls the way SQL Server handles literals in query predicates. In some cases, SQL Server might decide to replace a literal value with a parameter during query optimization.
The default value is “Simple”. With “Simple”, SQL Server parameterizes a query only when it is certain there is just one plan for the query regardless of the value of the literal being parameterized.
When set to “Forced”, any literal value that appears in a SELECT, INSERT, UPDATE, or DELETE statement, submitted in any form, is converted to a parameter during query compilation.

We set the Parameterization to SIMPLE under Miscellaneous setting in Options.
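The same setting can be inspected and applied with T-SQL instead of the SSMS Options dialog (the database name `MyDB` below is just a placeholder):

```sql
-- Check the current parameterization mode of every database
-- (is_parameterization_forced = 0 means SIMPLE, 1 means FORCED).
SELECT name, is_parameterization_forced
FROM sys.databases;

-- Set a database back to the default, simple parameterization.
ALTER DATABASE [MyDB] SET PARAMETERIZATION SIMPLE;
```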

