So, to keep it short, here are some things you can do:
A lack of indexes can have a huge impact on overall performance. Here are some scripts that will help.

This script will list all potentially useful missing indexes:
select d.*
     , s.avg_total_user_cost
     , s.avg_user_impact
     , s.last_user_seek
     , s.unique_compiles
from sys.dm_db_missing_index_group_stats s
inner join sys.dm_db_missing_index_groups g
    on s.group_handle = g.index_group_handle
inner join sys.dm_db_missing_index_details d
    on d.index_handle = g.index_handle
order by s.avg_user_impact desc
And this one will generate the CREATE INDEX statements for you, so all you have to do is execute them:
use YOUR_DB_NAME_HERE
SELECT 'CREATE NONCLUSTERED INDEX IX1_' + object_name(c.object_id) + left(cast(newid() as varchar(500)), 5) + char(10)
    + ' on ' + object_name(c.object_id)
    + '('
    + case when c.equality_columns is not null and c.inequality_columns is not null then c.equality_columns + ',' + c.inequality_columns
           when c.equality_columns is not null and c.inequality_columns is null then c.equality_columns
           when c.inequality_columns is not null then c.inequality_columns
      end
    + ')' + char(10)
    + case when c.included_columns is not null then 'Include (' + c.included_columns + ')'
           else ''
      end as create_statement
FROM sys.dm_db_missing_index_group_stats a
inner join sys.dm_db_missing_index_groups b
    on a.group_handle = b.index_group_handle
inner join sys.dm_db_missing_index_details c
    on c.index_handle = b.index_handle
where db_name(c.database_id) = 'YOUR_DB_NAME_HERE'
  and c.equality_columns is not null
ORDER BY a.avg_total_user_cost * a.avg_user_impact * (a.user_seeks + a.user_scans) DESC
Non-optimal statements may cause increased CPU consumption. This script lists the top 50 statements by total CPU time:
select
highest_cpu_queries.plan_handle,
highest_cpu_queries.total_worker_time,
q.dbid,
q.objectid,
q.number,
q.encrypted,
q.[text]
from
(select top 50
qs.plan_handle,
qs.total_worker_time
from
sys.dm_exec_query_stats qs
order by qs.total_worker_time desc) as highest_cpu_queries
cross apply sys.dm_exec_sql_text(plan_handle) as q
order by highest_cpu_queries.total_worker_time desc
Having all kinds of indexes is cool of course, but too many indexes can have a bad impact on performance too, so this query will list all indexes that haven't been used since the last SQL Server restart:
select object_name(i.object_id) as table_name,
       i.name,
       s.user_updates,
       s.user_seeks,
       s.user_scans,
       s.user_lookups
from sys.indexes i
left join sys.dm_db_index_usage_stats s
    on s.object_id = i.object_id
   and i.index_id = s.index_id
   and s.database_id = <dbid> -- put the database ID here
where objectproperty(i.object_id, 'IsIndexable') = 1
  and (
       -- index_usage_stats has no reference to this index (not being used)
       s.index_id is null
       or
       -- index is being updated, but not used by seeks/scans/lookups
       (s.user_updates > 0 and s.user_seeks = 0
        and s.user_scans = 0 and s.user_lookups = 0)
      )
order by object_name(i.object_id) asc
This script will show the top 50 statements by average I/O:
SELECT TOP 50
(qs.total_logical_reads + qs.total_logical_writes) /qs.execution_count as [Avg IO],
substring (qt.text,qs.statement_start_offset/2,
(case when qs.statement_end_offset = -1
then len(convert(nvarchar(max), qt.text)) * 2
else qs.statement_end_offset end - qs.statement_start_offset)/2)
as query_text,
qt.dbid,
qt.objectid
FROM sys.dm_exec_query_stats qs
cross apply sys.dm_exec_sql_text (qs.sql_handle) as qt
ORDER BY [Avg IO] DESC
The advantage of plan reuse means you will not incur the CPU cost of optimization for each execution of the same plan. The statements with the lowest plan reuse can be found as follows:
SELECT TOP 50
qs.sql_handle
,qs.plan_handle
,cp.cacheobjtype
,cp.usecounts
,cp.size_in_bytes
,qs.statement_start_offset
,qs.statement_end_offset
,qt.dbid
,qt.objectid
,qt.text
,SUBSTRING(qt.text,qs.statement_start_offset/2,
(case when qs.statement_end_offset = -1
then len(convert(nvarchar(max), qt.text)) * 2
else qs.statement_end_offset end -qs.statement_start_offset)/2)
as statement
FROM sys.dm_exec_query_stats qs
cross apply sys.dm_exec_sql_text(qs.sql_handle) as qt
inner join sys.dm_exec_cached_plans as cp on qs.plan_handle = cp.plan_handle
where qt.dbid = db_id() -- or put the database ID here
ORDER BY cp.usecounts ASC
Ah, and don't forget that from time to time you will need to rebuild or defragment your indexes. You will be surprised how much faster your database runs after that.
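As a sketch of that maintenance step (SQL Server 2005+; `YourTable` and `IX_YourIndex` are placeholder names, and the fragmentation thresholds are common rules of thumb, not official cutoffs):

```sql
-- List indexes in the current database with noticeable fragmentation
select object_name(ps.object_id) as table_name,
       i.name as index_name,
       ps.avg_fragmentation_in_percent
from sys.dm_db_index_physical_stats(db_id(), null, null, null, 'LIMITED') ps
inner join sys.indexes i
    on i.object_id = ps.object_id and i.index_id = ps.index_id
where ps.avg_fragmentation_in_percent > 10
order by ps.avg_fragmentation_in_percent desc

-- Then, per index:
alter index IX_YourIndex on YourTable rebuild      -- heavy fragmentation (roughly > 30%)
-- alter index IX_YourIndex on YourTable reorganize -- lighter fragmentation (roughly 10-30%)
```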
If all this does not help, you will have to check for bottlenecks that can slow down your overall system throughput and degrade performance: disk, memory, processor, and network. You may want to read this and this article to get more info about it.
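One quick way to see which of those resources the server waits on most is sys.dm_os_wait_stats (SQL Server 2005+). This is a rough sketch; the mapping from wait types to bottlenecks is a common rule of thumb, not an exact diagnosis:

```sql
-- Top waits since the last restart. As a rough guide:
-- high PAGEIOLATCH_* suggests disk, SOS_SCHEDULER_YIELD/CXPACKET suggests CPU,
-- RESOURCE_SEMAPHORE suggests memory, ASYNC_NETWORK_IO suggests network/client.
select top 10
       wait_type,
       wait_time_ms,
       waiting_tasks_count
from sys.dm_os_wait_stats
where wait_type not in ('SLEEP_TASK', 'LAZYWRITER_SLEEP',
                        'SQLTRACE_BUFFER_FLUSH', 'WAITFOR') -- filter some benign idle waits
order by wait_time_ms desc
```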
P.S. Note that some or all of the provided scripts may only run on MS SQL Server 2005 or later.
You may also be interested in one of these:
- SQL Server 2008 Query Performance Tuning Distilled (Expert's Voice in SQL Server)
- Inside Microsoft® SQL Server(TM) 2005: Query Tuning and Optimization
- SQL Server Query Performance Tuning Distilled, Second Edition
- SQL Performance Tuning
- SQL Server 2008 Query Performance Tuning Distilled
- Professional SQL Server 2005 Performance Tuning (Programmer to Programmer)
The same is possible in SQL Server 2000, but the queries there are totally different. A nice possible extension would be to branch on the server version, something like: if @ver = 9 (query) / if @ver = 8 (query).
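A minimal sketch of that version check, assuming a @ver variable derived from SERVERPROPERTY('ProductVersion') (the branch bodies are illustrative placeholders, not part of the scripts above):

```sql
-- Major version: 9 = SQL Server 2005, 8 = SQL Server 2000
declare @pv varchar(20), @ver int
set @pv = cast(serverproperty('ProductVersion') as varchar(20))
set @ver = cast(left(@pv, charindex('.', @pv) - 1) as int)

if @ver >= 9
begin
    -- run the 2005 (sys.dm_* based) scripts above
    print '2005 or later: use the DMV-based queries'
end
else if @ver = 8
begin
    -- run the SQL Server 2000 equivalents here
    print '2000: use the legacy queries'
end
```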