Initial basic checks before proceeding to DB tunings in MSSQL
Check your SQL Server environment before tuning for performance
Sometimes when we have issues with slow response from our DB queries, we jump straight to tuning the query, assuming the SQL Server itself is already tuned and the problem must be with the queries. However, it is better practice to check the basic configuration settings of the SQL Server before we start to analyze the queries more deeply. Today I will touch on some of those settings that we should always check to confirm the SQL Server environment is configured properly.
Check 1: Database and transaction log files on separate drives
To obtain optimal SQL Server performance, it is recommended to place the data and log files on separate physical drives. Placing both data and log files on the same device can cause contention for that device, resulting in poor performance. Placing the files on separate drives allows I/O activity to occur at the same time for both the data and log files. I/O for database files is typically random when reading database pages, while I/O for the transaction log is typically sequential.
We have observed a huge difference in data reconciliation batch jobs after separating the database and transaction log files onto separate drives.
As in the snap below, the data files are located on the D:\ drive and the log files on the E:\ drive.
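If you prefer to check this with a query rather than the file properties dialog, a minimal sketch is to list every file's physical location from `sys.master_files`, which SQL Server maintains for all databases on the instance:

```sql
-- List each database file with its type (ROWS = data, LOG = transaction log)
-- and physical path, so you can verify data and log sit on separate drives.
SELECT DB_NAME(database_id) AS database_name,
       name                 AS logical_name,
       type_desc,           -- ROWS or LOG
       physical_name        -- full path including drive letter
FROM sys.master_files
ORDER BY database_name, type_desc;
```

If the `physical_name` column shows data (ROWS) and LOG files starting with the same drive letter, that database is a candidate for this check.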
Check 2: Check server memory settings
Check the memory settings: were they changed from the default values? The default setting for min server memory is 0, and for max server memory it is 2,147,483,647 megabytes (MB), roughly 2 petabytes. By default, SQL Server can change its memory requirements dynamically based on available system resources.
In case these values were changed from the defaults to restrict consumption, analyze further: is the server dedicated to the database or shared with another application? Is the DB server hosting a single instance or multiple instances? More details on how to configure these values can be found at:
Server memory options (default values) are shown below:
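The same values can be inspected from T-SQL. A minimal sketch using the standard `sp_configure` procedure (the memory options are "advanced" options, so they must be made visible first):

```sql
-- Memory options are advanced options, so expose them first.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

-- Show the configured and currently running values.
-- Defaults: min server memory (MB) = 0, max server memory (MB) = 2147483647.
EXEC sp_configure 'min server memory (MB)';
EXEC sp_configure 'max server memory (MB)';
```

If `config_value` differs from the defaults above, apply the follow-up questions from this check before deciding whether the restriction is justified.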
In one of our customer issues, we observed that the customer had unknowingly changed the default values to restrictive ones and was seeing high response times in query executions. When the settings were changed back to the defaults, the issue was resolved.
Check 3: Check log files
Always leave enough free space on the transaction log drive, and make sure the drive is not overloaded, so that your normal operations can run with the required IOPS and low latency without forcing an auto-grow operation. The transaction log file contains the log records produced during the logging process in a SQL Server database. The transaction log is the most important component of a SQL Server database when it comes to disaster recovery.
The auto-grow event is one of the most performance-intensive operations. Growing the log file can be extremely time-consuming, and it can force DML operations to wait until the growth is complete before they can proceed. By making sure there is plenty of free space on the drive holding the transaction log file, you can avoid these performance hits altogether.
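A quick way to check how full each transaction log currently is, as a sketch using the long-standing `DBCC SQLPERF` command:

```sql
-- Report log file size and percentage used for every database on the instance.
-- A consistently high "Log Space Used (%)" means auto-grow (or a full log) is near.
DBCC SQLPERF (LOGSPACE);
```

On newer versions, `sys.dm_db_log_space_usage` returns the same information for the current database, if you prefer a queryable DMV over DBCC output.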
Check 4: Check for parallelism settings
SQL Server can take advantage of a server's multiple processors. It is able to take some operations and spread the processing across the processors available to it, with the goal of dividing the work into smaller chunks so that the overall operation completes more quickly. There are a couple of instance-wide settings that determine if, or when, the optimizer might generate "parallel execution plans."
Max degree of parallelism, which determines the maximum number of processors that SQL Server can use when executing a parallel query; by default this is set to "0," which means that all available processors can potentially be used to execute a query.
Cost threshold for parallelism, which specifies the threshold, or minimum cost, at which SQL Server creates and runs parallel plans; this cost is an abstract estimated cost unit (loosely based on the number of seconds the query would take to run on a reference machine), and the default value is "5." In other words, if the query optimizer estimates a query's cost below 5, it won't consider using parallelism.
Note: There is no hard and fast rule for determining when parallelism will be useful, or when it will be costlier. The best approach is to observe the execution times and wait states of queries that use parallelism and, where necessary, adjust the system settings, for example by increasing the cost threshold.
Example of the settings used in our database; after many tests we found these settings appropriate for our environment.
Note: This is a dedicated DB server with a 20-core CPU.
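Both options can be changed with `sp_configure`. The sketch below shows the mechanics; the specific values are illustrative assumptions, not a recommendation, since the appropriate numbers depend on your hardware and workload, as discussed above:

```sql
-- Parallelism options are advanced options, so expose them first.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

-- Illustrative values only: cap parallel queries at 8 schedulers and raise
-- the cost threshold so cheap queries stay serial. Tune for your workload.
EXEC sp_configure 'max degree of parallelism', 8;
EXEC sp_configure 'cost threshold for parallelism', 50;
RECONFIGURE;
```

Both settings take effect without a restart once `RECONFIGURE` runs.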
Check 5: Check for parameterization
SQL Server provides two types of query parameterization: "Simple" and "Forced". The "Parameterization" property controls the way SQL Server handles literals in query predicates. In some cases, SQL Server might decide to replace a literal value with a parameter during query optimization.
The default value is "Simple". With "Simple", SQL Server will only parameterize queries when it is certain that there is only one plan for the query regardless of the value of the literal being parameterized.
When set to FORCED PARAMETERIZATION, any literal value that appears in a SELECT, INSERT, UPDATE, or DELETE statement submitted in any form is converted to a parameter during query compilation.
We set the Parameterization to SIMPLE under the Miscellaneous section of the database Options page.
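The same setting can be applied from T-SQL instead of the Options dialog. A minimal sketch, where `YourDatabase` is a placeholder for your actual database name:

```sql
-- Set parameterization to SIMPLE (the default) for the database.
-- Replace YourDatabase with the target database name.
ALTER DATABASE [YourDatabase] SET PARAMETERIZATION SIMPLE;

-- Verify the current setting (0 = Simple, 1 = Forced).
SELECT name, is_parameterization_forced
FROM sys.databases
WHERE name = 'YourDatabase';
```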