Saturday, December 15, 2012

Points to Check When the SQL Server Log File Is Full (Error 9002)

Following are the points to check:

 1. First check the columns log_reuse_wait and log_reuse_wait_desc in the sys.databases system view.

 2. Then use one or more of the following options to manage the log file when SQL Server reports error 9002 (log file full):
      Back up the transaction log
      Free disk space by removing unnecessary files
      Kill unnecessary long-running transactions
      Add an additional disk, or increase the size of the existing disk
      Increase the size of the log file
      Move the log file to a different disk
      Detach and re-attach the database to relocate the log file
      Shrink the log file (with TRUNCATE_ONLY as a last resort)
      Check for unnecessary transactions or long-running queries

We need to apply the steps above as the requirement and scenario demand.
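The first checks above can be sketched in T-SQL. This is a minimal sketch; the database name MyDatabase and the backup path are assumptions for illustration:

```sql
-- Check why the log cannot be reused
SELECT name, log_reuse_wait, log_reuse_wait_desc
FROM sys.databases
WHERE name = N'MyDatabase';  -- hypothetical database name

-- If log_reuse_wait_desc says LOG_BACKUP, a log backup usually frees space
BACKUP LOG MyDatabase TO DISK = N'D:\Backup\MyDatabase_log.trn';  -- path is illustrative

-- Then shrink the log file if still needed (logical file name is an assumption)
DBCC SHRINKFILE (N'MyDatabase_log', 1024);  -- target size in MB
```

Shrinking should be the exception, not routine maintenance, since the log will simply grow again under the same workload.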

 Thanks, DB

Saturday, December 8, 2012

Top Most Used SQL Server DMVs

What are DMVs?
Dynamic Management Views are views and functions introduced in SQL Server 2005 for monitoring and tuning SQL Server performance.

Dynamic Management Objects (DMOs) come in two flavors:
• Dynamic Management Views (DMVs) -- can be selected from like a view
• Dynamic Management Functions (DMFs) -- require input parameters like a function

When and why use them
• They provide information that was not available in previous versions of SQL Server
• They provide a simpler way to query the data, just like any other view, versus using DBCC commands or system stored procedures

Types of DMVs
• change data capture
• common language runtime
• database mirroring
• database
• execution
• full-text search
• I/O
• index
• object
• query notifications
• replication
• resource governor
• SQL operating system

Get a list of all DMOs:

select name, type_desc from sys.all_objects where name like 'dm%' order by name

Permissions
• Server-scoped DMOs require VIEW SERVER STATE
• Database-scoped DMOs require VIEW DATABASE STATE
• DENY takes precedence: a DENY of the state permission or a DENY SELECT on an object wins over a GRANT
• Members of sysadmin can see everything; other logins/users need the permissions granted explicitly

grant view server state to loginname
grant view database state to username
deny view server state to loginname
deny view database state to username   -- the user must be created in master first

Specific types of DMVs covered below: database, execution, I/O, index, SQL operating system.

Database -- page and row counts:

select object_name(object_id) as objname, *
from sys.dm_db_partition_stats
order by 1

Execution (counters are reset when SQL Server restarts)
• sys.dm_exec_sessions -- info about all active user connections and internal tasks
• sys.dm_exec_connections -- info about connections established
• sys.dm_exec_requests -- info about each request that is executing (including all system processes)

Execution -- query plans
• sys.dm_exec_sql_text -- returns the text of a SQL batch
• sys.dm_exec_query_plan -- returns the showplan in XML
• sys.dm_exec_query_stats -- returns stats for cached query plans
• sys.dm_exec_cached_plans -- each query plan that is cached

Execution -- examples:

select * from sys.dm_exec_connections
cross apply sys.dm_exec_sql_text(most_recent_sql_handle)

select * from sys.dm_exec_requests
cross apply sys.dm_exec_sql_text(sql_handle)

select t.[text], p.[query_plan], s.[program_name], s.host_name,
       s.client_interface_name, s.login_name, r.*
from sys.dm_exec_requests r
inner join sys.dm_exec_sessions s on s.session_id = r.session_id
cross apply sys.dm_exec_sql_text(r.sql_handle) t
cross apply sys.dm_exec_query_plan(r.plan_handle) p

select usecounts, cacheobjtype, objtype, text
from sys.dm_exec_cached_plans
cross apply sys.dm_exec_sql_text(plan_handle)
where usecounts > 1
order by usecounts desc

I/O
• sys.dm_io_pending_io_requests -- can be run when you think I/O may be a bottleneck
• sys.dm_io_virtual_file_stats -- shows I/O stats for data and log files; parameters are database id and file id (null returns all data)

select * from sys.dm_io_pending_io_requests

select db_name(database_id), *
from sys.dm_io_virtual_file_stats(null, null)

db_name() is a function that returns the name of the actual database rather than the database id.

Index (counters are reset when SQL Server restarts)
• sys.dm_db_index_operational_stats (DMF) -- shows I/O, locking, and access information such as inserts, deletes, updates
• sys.dm_db_index_physical_stats (DMF) -- shows index storage and fragmentation info
• sys.dm_db_index_usage_stats (DMV) -- shows how often indexes are used and for what type of SQL operation

Index examples:

select db_name(database_id), object_name(object_id), *
from sys.dm_db_index_operational_stats(5, null, null, null)
-- parameters: database id, object id, index id, partition number

select db_name(database_id), object_name(object_id), *
from sys.dm_db_index_physical_stats(db_id(N'Northwind'), 5, null, null, 'DETAILED')
-- parameters: database id, object id, index id, mode

Missing indexes
• sys.dm_db_missing_index_details
• sys.dm_db_missing_index_groups
• sys.dm_db_missing_index_group_stats
• sys.dm_db_missing_index_columns

SQL operating system
• sys.dm_os_schedulers -- information about processors
• sys.dm_os_sys_info -- info about the computer and about resources available to and consumed by SQL Server
• sys.dm_os_sys_memory -- how memory is used overall on the server, and how much memory is available
• sys.dm_os_wait_stats -- info about all waits; reset with DBCC SQLPERF('sys.dm_os_wait_stats', CLEAR)
• sys.dm_os_buffer_descriptors -- info about all data pages that are currently in the SQL Server buffer pool

Top 12 Features of SQL Server 2012

1. AlwaysOn Availability Groups -- This feature takes database mirroring to a whole new level. With AlwaysOn, users will be able to fail over multiple databases in groups instead of individually. Also, secondary copies will be readable, and can be used for database backups. The big win is that your DR environment no longer needs to sit idle.

2. Windows Server Core Support -- If you don't know what Windows Server Core is, you may want to come up to speed before Windows 8 (MS is making a push back to the command line for server products). Core is the GUI-less version of Windows that uses the command line and PowerShell for user interaction. It has a much lower footprint (50% less memory and disk space utilization), requires fewer patches, and is more secure than the full install. Starting with SQL 2012, it is supported for SQL Server.

3. Columnstore Indexes -- This is a cool new feature that is completely unique to SQL Server. Columnstore indexes are a special type of read-only index designed to be used with Data Warehouse queries. Basically, data is grouped and stored in a flat, compressed column index, greatly reducing I/O and memory utilization on large queries.

4. User-Defined Server Roles -- DBAs have always had the ability to create custom database roles, but never server-wide ones. For example, if the DBA wanted to give a development team read/write access to every database on a shared server, traditionally the only ways to do it were either manually or using undocumented procedures, neither of which was a good solution. Now the DBA can create a role which has read/write access on every DB on the server, or any other custom server-wide role.

5. Enhanced Auditing Features -- Audit is now available in all editions of SQL Server. Additionally, users can define custom audit specifications to write custom events into the audit log. New filtering features give greater flexibility in choosing which events to write to the log.

6. BI Semantic Model -- This is replacing the Analysis Services Unified Dimensional Model (or cubes, as most people referred to them). It's a hybrid model that allows one data model to support all BI experiences in SQL Server. Additionally, this will allow for some really neat text infographics.

7. Sequence Objects -- For those folks who have worked with Oracle, this has been a long-requested feature. A sequence is just an object that is a counter -- a good example of its use would be to increment values in a table based on a trigger. SQL has always had similar functionality with identity columns, but now this is a discrete object.

8. Enhanced PowerShell Support -- Windows and SQL Server admins should definitely start brushing up on their PowerShell scripting skills. Microsoft is driving a lot of development effort into instrumenting all of their server-based products with PowerShell. SQL 2008 gave DBAs some exposure to it, but there are many more cmdlets in SQL 2012.

9. Distributed Replay -- Once again, this is an answer to a feature that Oracle released (Real Application Testing). However -- and in my opinion this is where the real value proposition of SQL Server lies -- in Oracle it is a (very expensive) cost option to Enterprise Edition, whereas with SQL, when you buy your licenses for Enterprise Edition, you get everything. Distributed Replay allows you to capture a workload on a production server and replay it on another machine. This way, changes in underlying schemas, support packs, or hardware can be tested under production conditions.

10. PowerView -- You may have heard of this under the name "Project Crescent". It is a fairly powerful self-service BI toolkit that allows users to create mash-ups of BI reports from all over the enterprise.

11. SQL Azure Enhancements -- These don't really go directly with the release of SQL 2012, but Microsoft is making some key enhancements to SQL Azure. Reporting Services for Azure will be available, along with backup to the Windows Azure data store, which is a huge enhancement. The maximum size of an Azure database is now up to 150 GB. Also, Azure Data Sync allows a better hybrid model of cloud and on-premise solutions.

12. Big Data Support -- I saved the biggest for last. Introduced at the PASS (Professional Association for SQL Server) conference last year, Microsoft announced a partnership with Hadoop provider Cloudera. One part of this involves MS releasing an ODBC driver for SQL Server that will run on a Linux platform. Additionally, Microsoft is building connectors for Hadoop, which is an extremely popular NoSQL platform. With this announcement, Microsoft has made a clear move into this very rapidly growing space.

SQL 2012 is a big step forward for Microsoft -- the company is positioning itself to be a leader in availability and in the growing area of big data. As a database professional, I look forward to using SQL 2012 to bring new solutions to my clients.
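As a quick illustration of the sequence objects mentioned in item 7, here is a minimal sketch; the object name dbo.OrderNumbers is hypothetical:

```sql
-- Create a standalone counter object (SQL Server 2012+); name is illustrative
CREATE SEQUENCE dbo.OrderNumbers
    AS int
    START WITH 1000
    INCREMENT BY 1;

-- Fetch the next value, e.g. from a DEFAULT constraint or inside a trigger
SELECT NEXT VALUE FOR dbo.OrderNumbers;  -- 1000 on the first call
```

Unlike an identity column, the sequence is not tied to any one table, so several tables can draw from the same counter.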

Monday, November 19, 2012

TempDB Best Practices

What is TempDB responsible for in SQL Server?

• Global (##temp) or local (#temp) temporary tables, temporary table indexes, temporary stored procedures, table variables, and tables returned in table-valued functions or cursors.
• Database Engine objects needed to complete a query, such as work tables that store intermediate results for spools or for sorting from particular GROUP BY, ORDER BY, or UNION queries.
• Row versioning values for online index processes, Multiple Active Result Sets (MARS) sessions, AFTER triggers, and index operations (SORT_IN_TEMPDB).
• DBCC CHECKDB work tables.
• Large object (varchar(max), nvarchar(max), varbinary(max), text, ntext, image, xml) data type variables and parameters.

What are some of the best practices for TempDB?

• Do not change the collation from the SQL Server instance collation.
• Do not change the database owner from sa.
• Do not drop the TempDB database.
• Do not drop the guest user from the database.
• Do not change the recovery model from SIMPLE.
• Ensure the disk drives TempDB resides on have RAID protection (1, 1+0, or 5) in order to prevent a single disk failure from shutting down SQL Server. Keep in mind that if TempDB is not available, SQL Server cannot operate.
• If the SQL Server system databases are installed on the system partition, at a minimum move the TempDB database from the system partition to another set of disks.
• Size the TempDB database appropriately. For example, if you use the SORT_IN_TEMPDB option when you rebuild indexes, be sure to have sufficient free space in TempDB to store the sort operations. In addition, if you are running into insufficient-space errors in TempDB, determine the culprit and either expand TempDB or re-code the offending process.
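Pre-sizing and relocating TempDB can be sketched as below. The logical file names tempdev/templog are the defaults; the target path E:\TempDB and the sizes are assumptions:

```sql
-- Hedged sketch: resize and move the TempDB files
ALTER DATABASE tempdb
    MODIFY FILE (NAME = tempdev, FILENAME = N'E:\TempDB\tempdb.mdf', SIZE = 4096MB);
ALTER DATABASE tempdb
    MODIFY FILE (NAME = templog, FILENAME = N'E:\TempDB\templog.ldf', SIZE = 1024MB);
-- The new location and sizes take effect the next time the instance restarts.
```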

Friday, October 12, 2012

SQL Query to list stored procedure inner objects

SQL Server Query to list stored procedure inner objects:

Replace 'StoredProcedureName' with the actual procedure/function/view name:

SELECT Object_name(d.referencing_id) AS ParentObject,
       d.referenced_entity_name      AS InnerObject,
       o.type_desc,
       o.create_date,
       o.modify_date
FROM   sys.sql_expression_dependencies d
       INNER JOIN sys.objects o
               ON d.referenced_entity_name = o.name
WHERE  Object_name(d.referencing_id) = 'StoredProcedureName'

Thanks,
Dinesh Babu

SQL Query to get the lines of code of a stored procedure in MS SQL Server

Query to get the lines of code of SQL objects in MS SQL Server.
Replace 'StoredProcedureName' with the actual object name:

SELECT t.sp_name,
       Sum(t.lines_of_code) - 1 AS lines_of_code,
       t.type_desc
FROM   (SELECT o.name AS sp_name,
               Len(c.text) - Len(Replace(c.text, Char(10), '')) AS lines_of_code,
               CASE
                 WHEN o.xtype = 'P' THEN 'Stored Procedure'
                 WHEN o.xtype IN ( 'FN', 'IF', 'TF' ) THEN 'Function'
               END AS type_desc
        FROM   sysobjects o
               INNER JOIN syscomments c
                       ON c.id = o.id
        WHERE  o.xtype IN ( 'P', 'FN', 'IF', 'TF' )
               AND o.category = 0
               AND o.name IN ( 'StoredProcedureName' )) t
GROUP BY t.sp_name,
         t.type_desc
ORDER BY 1

Thanks,
Dinesh Babu

Tuesday, October 2, 2012

What is Algebrizer

(In SQL Server 2000 this component was called the Normalizer; in 2005/2008 it is the Algebrizer.)

The algebrizer is a step in query processing that starts its work after the parser. Once the parser finds the query syntax correct, it passes the query to the algebrizer, which then:

• Verifies object and column names (those you have provided in the query or that are referenced by your query). For example, if a wrong table name is typed in a query, it is the algebrizer's duty to detect it and generate an error.
• Identifies all data types that are being processed for the given query.
• Verifies that GROUP BY and aggregate columns are placed correctly. For example, if you write such a query and press Ctrl+F5 (parse only), no error is generated; but when you press F5 (execute), the algebrizer runs after parsing and returns an error.

sys.dm_exec_query_optimizer_info is the DMV that provides query optimizer information.

A. Viewing statistics on optimizer execution -- what are the current optimizer execution statistics for this instance of SQL Server?

SELECT * FROM sys.dm_exec_query_optimizer_info;

B. Viewing the total number of optimizations -- how many optimizations have been performed?

SELECT occurrence AS Optimizations
FROM sys.dm_exec_query_optimizer_info
WHERE counter = 'optimizations';

C. Average elapsed time per optimization -- what is the average elapsed time per optimization?

SELECT ISNULL(value, 0.0) AS ElapsedTimePerOptimization
FROM sys.dm_exec_query_optimizer_info
WHERE counter = 'elapsed time';

D. Fraction of optimizations that involve subqueries -- what fraction of optimized queries contained a subquery?

SELECT (SELECT CAST (occurrence AS float)
        FROM sys.dm_exec_query_optimizer_info
        WHERE counter = 'contains subquery')
       /
       (SELECT CAST (occurrence AS float)
        FROM sys.dm_exec_query_optimizer_info
        WHERE counter = 'optimizations') AS ContainsSubqueryFraction;
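The "parses fine but fails at execution" case described above can be shown with a short sketch; the table dbo.Orders and its columns are hypothetical:

```sql
-- Parses without error (Ctrl+F5), but the algebrizer rejects it on execution:
-- CustomerID is neither aggregated nor listed in a GROUP BY clause.
SELECT CustomerID, COUNT(*) AS OrderCount
FROM dbo.Orders;

-- Corrected form that passes algebrization:
SELECT CustomerID, COUNT(*) AS OrderCount
FROM dbo.Orders
GROUP BY CustomerID;
```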

Wednesday, March 14, 2012

Logical Reads in SQL Server 2008

The logical reads count can be obtained in the following ways:
 1. SET STATISTICS IO ON
 2. sys.dm_exec_query_stats: executing the statement below gives detailed info about reads/writes:
    select * from sys.dm_exec_query_stats
 3. SQL Profiler: by running the profiler against that database we can find the logical reads. There are some other SQL DMVs/DMFs which also help us get logical reads.
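The first option can be sketched as below; the table name and predicate are assumptions for illustration:

```sql
-- Minimal sketch of per-query I/O measurement
SET STATISTICS IO ON;
SELECT * FROM dbo.Orders WHERE OrderDate >= '2012-01-01';
-- The Messages tab then reports a line similar to:
-- Table 'Orders'. Scan count 1, logical reads ..., physical reads ..., read-ahead reads ...
SET STATISTICS IO OFF;
```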

Summary info:
Logical reads: reading data pages from cache
Physical reads: reading data pages from hard disk
Buffer cache hit ratio: (logical reads – physical reads) / logical reads * 100%
Note: the buffer cache hit ratio can easily be found using Performance Monitor.

Detailed info: Logical reads: the total number of data pages that need to be accessed from the data cache to process a query.
It is very possible that a logical read will access the same data pages many times, so the logical read count may be higher than the actual number of pages in the table. Usually the best way to reduce logical reads is to apply the correct index or to rewrite the query.

Physical reads: the total number of data pages read from disk. When no data is in the data cache, the number of physical reads will equal the number of logical reads; this usually happens for the first query request. For subsequent identical requests the number decreases substantially, because the data pages are already in the data cache.

Buffer cache hit ratio: the buffer hit ratio is calculated from these two kinds of reads with the following formula: (logical reads – physical reads) / logical reads * 100%.
A high buffer hit ratio (as near to 100% as possible) indicates good database performance at the SQL Server level.
So use the physical reads and buffer hit ratio to measure performance at the server level, and logical reads to measure at the individual query level. An excess of logical reads tends to indicate high memory usage.
There are some ways by which we can reduce logical reads:
1. Improper/useless/insufficient indexes: indexes should be built on the basis of the data access or retrieval process; an index built on columns that are not used in queries leads to high logical reads and degrades performance when reading and writing data.
2. Poor fill factor/page density: page utilization should not be too low; otherwise a large number of pages is used for a small amount of data, which also leads to high logical reads.
3. Wide indexes: indexing a large number of columns leads to high logical reads.
4. Index scanning: if a query leads to an index scan on the table, logical reads will be high.
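Point 1 can be sketched with a narrow covering index; the table, columns, and index name below are assumptions for illustration:

```sql
-- Hedged sketch: a covering index for one specific query pattern
CREATE NONCLUSTERED INDEX IX_Orders_OrderDate
ON dbo.Orders (OrderDate)
INCLUDE (CustomerID, TotalDue);

-- This query can now be satisfied by an index seek on the narrow index
-- instead of scanning the whole table, cutting logical reads.
SELECT CustomerID, TotalDue
FROM dbo.Orders
WHERE OrderDate >= '2012-01-01';
```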

Deprecated Features in SQL Server 2012 Denali



1. Old-style joins such as *= and =* are no longer supported

2. COMPUTE and COMPUTE BY are no longer supported

3. Added restriction on the usage of RAISERROR

RAISERROR in the format RAISERROR integer 'string' is discontinued

4. The FASTFIRSTROW query hint is no longer supported

5. The DATABASEPROPERTY function is no longer supported

6. 32-bit Address Windowing Extensions (AWE) and 32-bit Hot Add memory support are discontinued

7. SQL Mail is no longer supported

8. sp_dboption is no longer supported

9. The WITH APPEND clause on triggers is no longer supported

10. The following sp_configure options are discontinued:

I. sp_configure 'user instance timeout'

II. sp_configure 'user instances enabled'

11. The 80 compatibility level is no longer supported

12. The following restore commands are discontinued:

I. RESTORE DATABASE WITH DBO_ONLY

II. RESTORE LOG WITH DBO_ONLY

13. The Active Directory Helper service and the related components have been removed:

sp_ActiveDirectory_Obj

sp_ActiveDirectory_SCP

sp_ActiveDirectory_Start



14. SQL Server 2012 (Denali) no longer supports SQL Server Distributed Management Objects (SQL-DMO). SQL Server Management Objects (SMO) is the recommended alternative

15. SQL Server 2012 (Denali) no longer supports the following backup commands:

BACKUP DATABASE WITH PASSWORD

BACKUP LOG WITH PASSWORD

BACKUP DATABASE WITH MEDIAPASSWORD

BACKUP LOG WITH MEDIAPASSWORD

16. SQL Server 2012 (Denali) no longer supports the VIA protocol

Use TCP instead of the VIA protocol

17. The ability for users to create new remote servers by using sp_addserver is discontinued. sp_addserver with the 'local' option remains available. Remote servers preserved during upgrade or created by replication can be used

18. sp_dropalias is discontinued

19. The version parameter of PWDCOMPARE representing a value from a login earlier than SQL Server 2000 is discontinued

20. The system table sys.database_principal_aliases is no longer available

21. The following Extended Events (XEvents) are discontinued:

databases_data_file_size_changed, databases_log_file_size_changed

databases_log_file_used_size_changed

locks_lock_timeouts_greater_than_0

locks_lock_timeouts

22. The set option SET DISABLE_DEF_CNST_CHK is no longer available

23. The text, ntext, and image data types are deprecated



Data Transformation Services (DTS)

Support for migrating or running Data Transformation Services (DTS) packages has been discontinued in this release. The following DTS functionality has been discontinued.

• DTS runtime

• DTS API

• Package Migration Wizard for migrating DTS packages to the next version of Integration Services

• Support for DTS package maintenance in SQL Server Management Studio

• Execute DTS 2000 Package task

• Upgrade Advisor scan of DTS packages.

Execute DTS 2000 Package Task

The Execute DTS 2000 Package task has been discontinued.



To know the details about deprecated features in SQL Server 2012 (Denali),

check the link below:

http://msdn.microsoft.com/en-us/library/ms143729(v=sql.110).aspx

Tuesday, March 13, 2012

Deadlocks in SQL Server

Deadlock: a situation in which two transactions each hold resources that the other requires before it can complete.
Once a deadlock is detected, the lock manager chooses one of the transactions as the victim, rolls it back, and raises error 1205, which should be captured/handled properly.

We can check deadlock, blocking, and locking processes using the below:
1. sp_who
2. sp_who2
3. select * from sys.sysprocesses
4. select * from sys.dm_tran_locks
5. Using Activity Monitor (in SQL Server 2008 only)
6. Using SQL Server Profiler (using the template TSQL_Locks)
7. sp_lock
8. select * from master.dbo.syslockinfo
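Since error 1205 is raised to the victim, a common pattern is to catch it and retry the transaction. A hedged sketch; the table and the work inside the transaction are illustrative placeholders:

```sql
-- Retry a transaction that may be chosen as a deadlock victim
DECLARE @retries int = 3;
WHILE @retries > 0
BEGIN
    BEGIN TRY
        BEGIN TRANSACTION;
        -- ... the actual work goes here (hypothetical example) ...
        UPDATE dbo.Accounts SET Balance = Balance - 100 WHERE AccountID = 1;
        COMMIT TRANSACTION;
        BREAK;  -- success, stop retrying
    END TRY
    BEGIN CATCH
        IF XACT_STATE() <> 0 ROLLBACK TRANSACTION;
        IF ERROR_NUMBER() = 1205  -- we were the deadlock victim
            SET @retries = @retries - 1;
        ELSE
            THROW;  -- rethrow anything that is not a deadlock (THROW needs SQL 2012; use RAISERROR on 2008)
    END CATCH
END
```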

Locks and Deadlocks in SQL Server

Locks and Deadlocks in SQL Server 2008:


SQL Server uses locking and blocking mechanisms to maintain data consistency in a multiuser environment. The Lock Manager is the SQL Server internal governing body that handles locking scenarios, such as:

1. Determining the appropriate lock to acquire/release on a resource

2. How long the lock will be retained

3. Deciding when processes are allowed to modify data so that reads remain consistent

4. I. When SQL Server is started with the locks option set to 0, the lock manager allocates 2 percent of the memory allocated to SQL Server for an initial pool of lock structures (SQL 2000)

II. When SQL Server is started with the locks option set to 0, the lock manager allocates an initial pool of 2,500 lock structures from the memory allocated to SQL Server (SQL 2005); as soon as the lock pool is exhausted, the lock manager requests additional memory

III. The dynamic lock pool does not acquire more than 60 percent of the memory allocated to the Database Engine. After the lock pool has reached 60 percent of the memory acquired by an instance of the Database Engine, or no more memory is available on the computer, further requests for locks generate an error

5. Resolving deadlocks

6. Managing compatibility between locks



SQL Server has seven different locking modes and three different lock types:

Locking modes are:

1. Shared lock (S): placed on a resource for read operations; compatible with other shared and update locks, but not with exclusive locks. When the REPEATABLE READ isolation level or a locking hint is used, the shared lock is retained for the duration of the transaction; otherwise it is released as soon as the read completes.

Equation: S on S = OK, S on U = OK, S on X = WAIT



2. Update lock (U): placed on a resource where a shared lock is required and may need to be upgraded to an exclusive lock. Only one transaction at a time can acquire an update lock on a resource; once modification of the resource is required, the update lock is upgraded to an exclusive lock.

Equation: U on S = OK, U on U = WAIT, U on X = WAIT



3. Exclusive lock (X): placed on a resource for data modification; not compatible with any other type of lock. Only the NOLOCK hint and the READ UNCOMMITTED isolation level override exclusive lock behavior.

Equation: X on S = WAIT, X on U = WAIT, X on X = WAIT



4. Intent (IS, IX, SIX): placed on a resource to improve performance and locking efficiency by taking locks at a higher level, i.e. the table level.



5. Schema (Sch-M, Sch-S): Sch-M is placed on a resource during schema modification, such as while adding a column to a table; Sch-S locks are placed on an object/resource while queries are being compiled or executed.

Equation: Sch-M blocks all other operations until the lock is released

Sch-S locks are not compatible with Sch-M locks







6. Bulk update (BU): placed on a resource for bulk inserts and updates of data. These locks allow multiple bulk-insert threads to access the table but do not allow other processes to access it. BU locks are enabled by using the TABLOCK hint or by using sp_tableoption with the 'table lock on bulk load' option.



7. Key-range: placed on a range of rows to protect against phantom insertions/deletions in a record set accessed by a transaction. These locks are used by transactions running at the SERIALIZABLE transaction isolation level.





These lock modes can be acquired at the row, page, or table level, with scope up to the session, transaction, or cursor.
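The modes above can be observed live via sys.dm_tran_locks. A minimal sketch of reading the current lock table:

```sql
-- Inspect the locks currently held or requested on this instance
SELECT request_session_id,
       resource_type,        -- OBJECT, PAGE, KEY, RID, ...
       resource_database_id,
       request_mode,         -- S, U, X, IS, IX, SIX, Sch-S, Sch-M, BU, RangeS-S, ...
       request_status        -- GRANT, WAIT, CONVERT
FROM sys.dm_tran_locks
ORDER BY request_session_id;
```

Rows with request_status = WAIT point at blocking; two sessions mutually waiting on each other's granted locks is the deadlock pattern described below.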



Deadlock: a situation in which two transactions each hold resources that the other requires before it can complete.

Once a deadlock is detected, the lock manager chooses one of the transactions as the victim, rolls it back, and raises error 1205, which should be captured/handled properly.

Wednesday, January 11, 2012

SQL Server Bottleneck Analysis Guidelines


Memory Bottleneck Analysis

Disk Bottleneck Analysis

Processor Bottleneck Analysis

Overall SQL Server Bottleneck Analysis

Transaction Management

Key performance counters



Memory Bottleneck Analysis



Object: - Memory

Counter: - Available Mbytes

Preferred Value: - > 20MB

Reference: - KB 889654



Object: - Memory

Counter: - Free System Page Table Entries

Preferred Value: - > 7000

Description: - Free System Page Table Entries is the number of page table entries not currently in use by the system. If < 7000, consider removing /3GB.

Reference: - KB 311901



Object: - Memory

Counter: - Pages/Sec

Preferred Value: - < 50

Description: - Pages/sec is the rate at which pages are read from or written to disk to resolve hard page faults. This counter is a primary indicator of the kinds of faults that cause system-wide delays.

Reference: - Monitoring and Tuning Your Server



Object: - Memory

Counter: - Pages Input/Sec

Preferred Value: - < 10

Description: - Pages Input/sec is the rate at which pages are read from disk to resolve hard page faults.

Reference: - KB 889654



Object: - Paging File

Counter: - %Usage

Preferred Value: - < 70%

Description: - The amount of the Page File instance in use in percent.

Reference: - KB 889654



Object: - SQLServer:Buffer Manager

Counter: - Page Life Expectancy

Preferred Value: - > 300

Description: - This performance monitor counter tells you, on average, how long data pages are staying in the buffer. If this value gets below 300 seconds, this is a potential indication that your SQL Server could use more memory in order to boost performance.
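Counters like Page Life Expectancy can also be read from inside SQL Server rather than Performance Monitor; a minimal sketch:

```sql
-- Read Page Life Expectancy from the performance-counter DMV
SELECT object_name, counter_name, cntr_value AS seconds
FROM sys.dm_os_performance_counters
WHERE counter_name LIKE 'Page life expectancy%'
  AND object_name LIKE '%Buffer Manager%';
```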



Object: - SQLServer:Buffer Manager

Counter: - Lazy Writes/Sec

Preferred Value: - < 20

Description: - This counter tracks how many times a second that the Lazy Writer process is moving dirty pages from the buffer to disk in order to free up buffer space. Generally speaking, this should not be a high value, say more than 20 per second or so. Ideally, it should be close to zero. If it is zero, this indicates that your SQL Server's buffer cache is plenty big and SQL Server doesn't have to free up dirty pages, instead waiting for this to occur during regular checkpoints. If this value is high, then a need for more memory is indicated.



Object: - SQLServer:Buffer Manager

Counter: - Page reads/sec

Preferred Value: - < 90

Description: - Number of physical database page reads issued. 80 – 90 per second is normal, anything that is above indicates indexing or memory constraint.



Object: - SQLServer:Buffer Manager

Counter: - Checkpoint Pages/Sec

Preferred Value: - This value is relative, it varies from server to server, we need to compare the average to a base line capture to tell if the value is high or low.

Description: - When a checkpoint occurs, all dirty pages are written to disk. This is a normal procedure and will cause this counter to rise during the checkpoint process. What you don't want to see is a high value for this counter over time. This can indicate that the checkpoint process is running more often than it should, which can use up valuable server resources. If this has a high figure (and this will vary from server to server), consider adding more RAM to reduce how often the checkpoint occurs, or consider increasing the "recovery interval" SQL Server configuration setting.



Object: - SQLServer:Buffer Manager

Counter: - Page writes/sec

Preferred Value: - < 90

Description: - Number of physical database page writes issued. 80 – 90 per second is normal, anything more we need to check the lazy writer/sec and checkpoint counters, if these counters are also relatively high then, it’s memory constraint.



Object: - SQLServer:Buffer Manager

Counter: - Free pages

Preferred Value: - > 640

Description: - Total number of pages on all free lists.



Object: - SQLServer:Buffer Manager

Counter: - Stolen pages

Preferred Value: - Varies. Compare with baseline

Description: - Number of pages used for miscellaneous server purposes (including procedure cache).



Object: - SQLServer:Buffer Manager

Counter: - Buffer Cache hit ratio

Preferred Value: - > 90%

Description: - Percentage of pages that were found in the buffer pool without having to incur a read from disk.



Object: - SQLServer:Buffer Manager

Counter: - Target Server Memory(KB)

Preferred Value: -

Description: - Total amount of dynamic memory the server can consume.



Object: - SQLServer:Buffer Manager

Counter: - Total Server Memory(KB)

Preferred Value: -

Description: - Total amount of dynamic memory (in kilobytes) that the server is using currently



Disk Bottleneck Analysis



Object: - PhysicalDisk

Counter: - Avg. Disk Sec/Read

Preferred Value: - < 8ms

Description: - Measure of disk latency. Avg. Disk sec/Read is the average time, in seconds, of a read of data from the disk.

More Info:

Reads or non cached Writes

Excellent < 08 Msec ( .008 seconds )

Good < 12 Msec ( .012 seconds )

Fair < 20 Msec ( .020 seconds )

Poor > 20 Msec ( .020 seconds )

Cached Writes Only

Excellent < 01 Msec ( .001 seconds )

Good < 02 Msec ( .002 seconds )

Fair < 04 Msec ( .004 seconds )

Poor > 04 Msec ( .004 seconds )



Object: - PhysicalDisk

Counter: - Avg. Disk sec/Write

Preferred Value: - < 8ms (non cached) < 1ms (cached)

Description: - Measure of disk latency. Avg. Disk sec/Write is the average time, in seconds, of a write of data to the disk.



Object: - PhysicalDisk

Counter: - Avg. Disk Read Queue Length

Preferred Value: - < 2 * spindles

Description: - Avg. Disk Read Queue Length is the average number of read requests that were queued for the selected disk during the sample interval.

More Info:

< (2+ no of spindles) Excellent

< (2*no of spindles) Good

< (3* no of spindles) Fair

Reference - Whitepaper “Performance Monitoring in Windows 2003: Best Practices” by Ben W. Christenbury



Note: If the disk has say 20 disk and it is RAID 10 then no. of spindles = 20/2 = 10. If it is RAID 5 then the no. of spindles = no of disks = 20.



Object: - PhysicalDisk

Counter: - Avg. Disk Write Queue Length

Preferred Value: - < 2 * spindles

Description: - Avg. Disk Write Queue Length is the average number of write requests that were queued for the selected disk during the sample interval.





Object: - SQLServer:Buffer Manager

Counter: - Page reads/sec

Preferred Value: - < 90

Description: - Number of physical database page reads issued. 80 – 90 per second is normal, anything that is above indicates indexing or memory constraint.



Object: - SQLServer:Buffer Manager

Counter: - Page writes/sec

Preferred Value: - < 90

Description: - Number of physical database page writes issued. 80 – 90 per second is normal, anything more we need to check the lazy writer/sec and checkpoint counters, if these counters are also relatively high then, it’s memory constraint.



Object: - SQLServer:Buffer Manager

Counter: - Free pages

Preferred Value: - > 640

Description: - Total number of pages on all free lists.



Object: - SQLServer:Buffer Manager

Counter: - Stolen pages

Preferred Value: - Varies. Compare with baseline

Description: - Number of pages used for miscellaneous server purposes (including procedure cache).



Object: - SQLServer:Buffer Manager

Counter: - Buffer Cache hit ratio

Preferred Value: - > 90%

Description: - Percentage of pages that were found in the buffer pool without having to incur a read from disk.
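

Buffer cache hit ratio can be computed from the same DMV, but note that ratio counters must be divided by their matching "base" counter; the raw cntr_value on its own is meaningless. A sketch:

```sql
-- Buffer cache hit ratio = raw counter / its "base" counter.
SELECT 100.0 * a.cntr_value / NULLIF(b.cntr_value, 0)
       AS BufferCacheHitRatioPct
FROM sys.dm_os_performance_counters AS a
JOIN sys.dm_os_performance_counters AS b
  ON a.object_name = b.object_name
WHERE a.object_name LIKE '%Buffer Manager%'
  AND a.counter_name = 'Buffer cache hit ratio'
  AND b.counter_name = 'Buffer cache hit ratio base';
```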



Processor Bottleneck Analysis



Object: - Processor

Counter: - %Processor Time

Preferred Value: - < 80%

Description: - % Processor Time is the percentage of elapsed time that the processor spends to execute a non-Idle thread.



Object: - Processor

Counter: - %Privileged Time

Preferred Value: - < 30% of Total %Processor Time

Description: - % Privileged Time is the percentage of elapsed time that the process threads spent executing code in privileged mode.



Object: - Process (sqlservr)

Counter: - %Processor Time

Preferred Value: - < 80%



Object: - Process (sqlservr)

Counter: - %Privileged Time

Preferred Value: - < 30% of %Processor Time (sqlservr)

Description: - Note: divide the value by the number of processors.



Object: - System

Counter: - Context Switches/sec

Preferred Value: - < 3000

Description: -

1500 – 3000 per processor Excellent – Fair

> 6000 per processor Poor

The upper limit is about 40,000 per CPU at 90% CPU utilization

NOTE: Remember to divide by number of processors



Object: - System

Counter: - Processor Queue Length

Preferred Value: - < 4 per CPU

Description: - For standard servers with long quantums:

<= 4 per CPU Excellent

< 8 per CPU Good

< 12 per CPU Fair



Object: - SQLServer:Access Methods

Counter: - Full Scans / sec

Preferred Value: - < 1

Description: - If we see high CPU, investigate this counter; if the full scans are on small tables, it can be ignored. Values greater than 1 or 2 indicate table/index page scans, and we need to analyze how they can be avoided.



Object: - SQLServer:Access Methods

Counter: - Worktables Created/Sec

Preferred Value: - < 20

Description: - Number of worktables created in tempdb per second. Worktables are used for queries that use various spools (table spool, index spool, etc.).





Object: - SQLServer:Access Methods

Counter: - Workfiles Created/Sec

Preferred Value: - < 20

Description: - Number of workfiles created per second. Tempdb workfiles are used in processing hash operations when the amount of data being processed is too big to fit into the available memory. You may be able to reduce this number by making queries more efficient (adding/changing indexes), adding more memory, etc.



Object: - SQLServer:Access Methods

Counter: - Page Splits/sec

Preferred Value: - < 20

Description: - An interesting counter that can point back to table/index design. This value should be as low as possible. If you find that the number of page splits is high, consider lowering the fill factor of your indexes: a lower fill factor leaves more free space in data pages, so more rows can be inserted before a page fills up and a split has to occur.
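

The Access Methods counters above are also exposed through sys.dm_os_performance_counters. As with the Buffer Manager counters, these values are cumulative since the last restart, so take two samples and diff them to get per-second rates. A sketch:

```sql
-- Cumulative Access Methods counters; diff two samples over a
-- known interval to turn these into per-second rates.
SELECT counter_name, cntr_value
FROM sys.dm_os_performance_counters
WHERE object_name LIKE '%Access Methods%'
  AND counter_name IN ('Full Scans/sec', 'Worktables Created/sec',
                       'Workfiles Created/sec', 'Page Splits/sec');
```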





Overall SQL Server Bottleneck Analysis



Object: - SQLServer:General Statistics

Counter: - User Connections

Preferred Value: -

Description: - The number of users currently connected to the SQL Server.



Object: - SQLServer:General Statistics

Counter: - Logins/sec

Preferred Value: - < 2

Description: - > 2 per second indicates that the application is not correctly using connection pooling.



Object: - SQLServer:General Statistics

Counter: - Logouts/sec

Preferred Value: - < 2

Description: - > 2 per second indicates that the application is not correctly using connection pooling.



Object: - SQLServer:SQL Statistics

Counter: - Batch Requests/Sec

Preferred Value: - < 1000

Description: - Over 1000 batch requests per second indicates a very busy SQL Server.



Object: - SQLServer:SQL Statistics

Counter: - SQL Compilations/sec

Preferred Value: - < 10% of the number of Batch Requests / sec

Description: - The number of times per second that SQL Server compilations have occurred. This value should be as low as possible. A high value (say, over 100) indicates that lots of ad hoc queries are running, which can drive up CPU.
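

The "< 10% of Batch Requests/sec" rule of thumb can be checked from T-SQL. Since both counters in sys.dm_os_performance_counters are cumulative since startup, the single-query version below gives a long-run average ratio rather than a current rate; for a current rate, sample twice and diff. A sketch:

```sql
-- Compilations as a percentage of batch requests, averaged
-- over the time since the last restart (both counters are
-- cumulative; diff two samples for a point-in-time rate).
SELECT
    100.0
  * MAX(CASE WHEN counter_name = 'SQL Compilations/sec'
             THEN cntr_value END)
  / NULLIF(MAX(CASE WHEN counter_name = 'Batch Requests/sec'
               THEN cntr_value END), 0) AS CompilationPctOfBatches
FROM sys.dm_os_performance_counters
WHERE object_name LIKE '%SQL Statistics%'
  AND counter_name IN ('SQL Compilations/sec', 'Batch Requests/sec');
```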



Object: - SQLServer:SQL Statistics

Counter: - SQL Re-Compilations/sec

Preferred Value: - < 10% of the number of SQL Compilations/sec

Description: - This should be as close to zero as possible. Recompiles can cause compile locks, which are not compatible with any other lock type, and can lead to deadlocks.



Object: - SQLServer:Latches

Counter: - Average Latch Wait Time (ms)

Preferred Value: - < 300

Description: - Average latch wait time (milliseconds) for latch requests that had to wait.



Transaction Management



Object: - SQLServer:Locks

Counter: - Number of Deadlocks/sec

Preferred Value: - < 1

Description: - The number of lock requests that resulted in a deadlock.



Object: - SQLServer:Locks

Counter: - Lock Requests/sec

Preferred Value: - < 1000

Description: - Number of requests for a type of lock per second. Lock requests/sec > 1000 indicates that the queries are accessing large number of rows; the next step is to review high read queries. If you also see high Avg. Wait time, then it’s an indication of blocking, then review the blocking script output.



Object: - SQLServer:Locks

Counter: - Average Wait Time (ms)

Preferred Value: - < 500

Description: - This is the average wait time in milliseconds to acquire a lock; the lower the value, the better. If the value goes higher than 500, there may be blocking going on; run the blocker script to identify it.
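

If a high Average Wait Time suggests blocking, the execution-related DMVs covered earlier give a quick first look before running a full blocker script. sys.dm_exec_requests exposes a blocking_session_id column for every executing request; non-zero means the request is blocked. A sketch:

```sql
-- Requests that are currently blocked, and which session is
-- blocking each one (blocking_session_id = 0 means not blocked).
SELECT r.session_id, r.blocking_session_id,
       r.wait_type, r.wait_time, r.command
FROM sys.dm_exec_requests AS r
WHERE r.blocking_session_id <> 0;
```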



Key performance counters



Buffer cache hit ratio



The buffer cache hit ratio is the percentage of pages found in the buffer cache without having to read from disk. A value of 90 percent or higher is desirable. If the value is lower (say, 60 percent), you should add more memory until the value is consistently greater than 90 percent.



Page life expectancy



Have you ever checked out the page life expectancy counter in Performance Monitor's Buffer Manager object? SQL Server Books Online (BOL) says the page life expectancy value is the "number of seconds a page will stay in the buffer pool without references." So, a buffer that has a 300-second page life expectancy will keep any given page in memory in the buffer pool for 5 minutes before the buffer pool flushes the page to disk—unless a process references the page.

This counter can be helpful in determining whether you have a memory problem, giving you a reasonably accurate view of whether your server has memory pressure. According to Microsoft, 300 seconds is the minimum target for page life expectancy. If the buffer pool flushes your pages in less than 300 seconds, you probably have a memory problem. Looking at this value is particularly handy when your page life expectancy is significantly higher or lower than 300 seconds. For example, a customer recently asked me, "Do we need more memory?" I monitored the page life expectancy value for the customer's system, and the value didn't fall below 3000 seconds. That's quantifiable proof that more memory wouldn't help performance. Other customers have an average page life expectancy between 10 and 50 seconds, and they wonder why their servers are slow.
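

Page life expectancy is a point-in-time value (not cumulative), so a single query against sys.dm_os_performance_counters is enough to check it against the 300-second guideline. A sketch (on servers with multiple NUMA nodes, the per-node values under the Buffer Node object are worth checking as well, since one starved node can hide behind a healthy average):

```sql
-- Current page life expectancy, in seconds; compare against
-- the 300-second minimum target discussed above.
SELECT cntr_value AS PageLifeExpectancySec
FROM sys.dm_os_performance_counters
WHERE object_name LIKE '%Buffer Manager%'
  AND counter_name = 'Page life expectancy';
```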