1. Move your TSQL code from the application into the DB Server
    While I don't think ALL your SQL code should be moved to the DB Server (I see no reason to move simple scripts), I do believe that anything that may create performance issues or has to operate on a more or less large amount of data should live on the DB Server. Different products are good for different things and so should do the work they were created for. This is one of the reasons I personally didn't get TOO excited about ColdFusion + ORM. At least not yet. Maybe this will change over time, but right now I don't feel too comfortable about the whole ORM thing. I believe that if/when you have to optimize data access performance, you will find yourself needing more Stored Procedures/Views/Functions/Triggers.
    While moving your TSQL code to the DB Server most probably will not result in an immediate performance boost, as in the case of the indexes discussed in the previous post, it will still have a lot of benefits in the long run, plus it will let you perform other optimization steps and apply other techniques more easily to further tune your data access routines. I don't say you can't optimize while using ORM, I am sure you can, but I don't think anyone can disagree that when your data access routines use TSQL objects in your database, you have the maximum opportunity to optimize the database.
    Again - each product (ColdFusion, MS SQL, etc.) should do the work it is the best match for.


  2. Identify inefficient TSQLs, re-factor and apply best practices
    It doesn't matter how good the indexes you created are: if you have poorly written data retrieval/access logic, sooner or later you will get slow performance. Often we don't see the issues right away, but only when a critical mass of records, database calls, etc. is reached. This is where TSQL Best Practices come to help. Many of us know them (or most of them), but the key is to really USE them and not just know them. So here are some of them.
    • Don't use SELECT * in SQL Query
      Unnecessary columns not only add expense to the data retrieval time, but the database engine also cannot utilize covering indexes, so the query performs slowly.
    • Avoid unnecessary columns in SELECT list and unnecessary tables in JOIN conditions
      Selecting unnecessary columns adds overhead to the actual query. Including unnecessary tables in the join conditions forces the database engine to retrieve and fetch unnecessary data and so increases the query execution time.
    • Do not use the COUNT() aggregate in a subquery to do an existence check
      When you use COUNT(), SQL Server does not know that you are doing an existence check. It counts all matching values. When you use EXISTS, SQL Server knows you are doing an existence check. When it finds the first matching value, it returns TRUE and stops looking. The same applies to using COUNT() instead of IN or ANY.
      So, instead of doing
      SELECT yourColumnList
      FROM yourTable
      WHERE 0 < (
               SELECT COUNT( * ) FROM yourTable2 WHERE ...
            )

      do
      SELECT yourColumnList
      FROM yourTable
      WHERE EXISTS (
                  SELECT *
                  FROM yourTable2
                  WHERE ...
               )
    • Do your best to avoid joins between columns of different data types
      The implicit conversion the engine has to perform can prevent it from using an index on the join column.
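      As a sketch (the table and column names here are hypothetical), imagine the join columns have mismatched types:

      ```sql
      -- BAD: suppose Orders.CustomerID is VARCHAR(10) while
      -- Customers.CustomerID is INT. The implicit conversion may
      -- force a scan instead of an index seek on one side.
      SELECT o.OrderID
      FROM dbo.Orders o
      INNER JOIN dbo.Customers c ON o.CustomerID = c.CustomerID

      -- BETTER: define the join columns with the same data type in
      -- both tables, so the engine can compare them directly.
      ```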
    • Avoid deadlocks
      Always access tables in the same order in all your Stored Procedures and Triggers. Keep your transactions as short as possible, and during a transaction access only the data you really need. The less data you touch, the better.
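      A minimal sketch of the "same order" rule (table names are illustrative): every procedure that touches both tables should touch them in the same sequence, otherwise two concurrent executions could each hold a lock the other is waiting for.

      ```sql
      -- Every procedure accesses Accounts first, then AccountAudit.
      CREATE PROCEDURE dbo.DebitAccount @AccountID INT, @Amount MONEY
      AS
      BEGIN
         BEGIN TRANSACTION
            UPDATE dbo.Accounts
            SET Balance = Balance - @Amount
            WHERE AccountID = @AccountID

            INSERT INTO dbo.AccountAudit (AccountID, Change)
            VALUES (@AccountID, -@Amount)
         COMMIT TRANSACTION
      END
      ```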
    • Avoid using Cursors and UDFs
      Both tend to force row-by-row processing instead of set-based operations, which is usually much slower.
    • If you don't always need a 100% accurate result, try not to use COUNT( * ) to obtain the record count of a table
      Instead of doing
      SELECT COUNT( * )
      FROM dbo.orders

      you may do
      SELECT rows
         FROM sysindexes
         WHERE id = OBJECT_ID('dbo.Orders') AND indid < 2

      (Note that sysindexes is kept only for backward compatibility; on SQL Server 2005 and later you can get the same approximate count from sys.dm_db_partition_stats.)
    • Use Table Variables and Temp Tables where appropriate
      Use a Table Variable or Temp Table for intermediate storage of records in a procedure. There are pros and cons between the two, but in general, if the record set is small you should go for a Table Variable.
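      A quick sketch of both options (table and column names are illustrative):

      ```sql
      -- Table Variable: fine for small intermediate result sets
      DECLARE @Recent TABLE (OrderID INT, OrderDate DATETIME)
      INSERT INTO @Recent (OrderID, OrderDate)
      SELECT OrderID, OrderDate FROM dbo.Orders WHERE OrderDate > '20090101'

      -- Temp Table: better for larger sets (it can be indexed and
      -- the optimizer keeps statistics on it)
      CREATE TABLE #Recent (OrderID INT, OrderDate DATETIME)
      INSERT INTO #Recent (OrderID, OrderDate)
      SELECT OrderID, OrderDate FROM dbo.Orders WHERE OrderDate > '20090101'
      DROP TABLE #Recent
      ```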
    • Use Full Text Search for searching textual data instead of LIKE search
      For more info on Full Text Search see this.
    • Where possible, use UNION to implement OR operation
      If a distinct result is not required, use UNION ALL instead of UNION, as it is faster.
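      For example (illustrative names, and assuming CustomerID is not nullable), an OR across two different columns can often be split so each branch uses its own index:

      ```sql
      -- An OR across two columns may prevent efficient index use:
      SELECT OrderID FROM dbo.Orders
      WHERE CustomerID = 1 OR EmployeeID = 5

      -- Each branch can seek on its own index; the second branch
      -- excludes rows already returned by the first:
      SELECT OrderID FROM dbo.Orders WHERE CustomerID = 1
      UNION ALL
      SELECT OrderID FROM dbo.Orders WHERE EmployeeID = 5 AND CustomerID <> 1
      ```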
    • Use Stored Procedures
      This way you get: code reusability, access control, execution plan reusability, protection against SQL Injection, and more readability and manageability.
    • Use Fully Qualified Name for objects (i.e. SchemaName.ObjectName)
      When the execution plan is prepared by the query engine, during the binding process it has to resolve the object's existence. If you specify the fully qualified name, object resolution becomes easier for the engine.
    • EXISTS vs IN
      The IN operator can easily be replaced by EXISTS, which is more optimized for correlated queries. But you may find IN better for small tables.
    • When possible avoid functions in the WHERE clause
      because functions like CONVERT(), CAST() or ISNULL() applied to a column in the predicate may prevent the engine from using the indexes available on the table.
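      For example (table name is illustrative), a date filter wrapped in CONVERT() can usually be rewritten as a range predicate so an index on the column stays usable:

      ```sql
      -- BAD: the function on the column makes the predicate non-sargable
      SELECT OrderID FROM dbo.Orders
      WHERE CONVERT(VARCHAR(8), OrderDate, 112) = '20090115'

      -- BETTER: a range lets an index on OrderDate be used
      SELECT OrderID FROM dbo.Orders
      WHERE OrderDate >= '20090115' AND OrderDate < '20090116'
      ```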
    • Try to avoid negative operators
      Avoid using <>, NOT IN, NOT EXISTS and similar operators, because they tend to cause a table scan: the query engine has to make sure there is no match, reading until the last row.
    • Implement a lazy loading strategy for the large objects
      Store the large object columns (like VARCHAR(MAX), IMAGE, TEXT, etc.) in a different table than the main one and put a reference to the large object in the main table. Retrieve all the main table data in your query, and load the large object data from the large object table only when it is required.
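      A minimal sketch of the two-table layout (names and types are illustrative):

      ```sql
      -- The main table holds only the light columns
      CREATE TABLE dbo.Products (
         ProductID INT PRIMARY KEY,
         Name NVARCHAR(100)
      )
      -- The large object lives in its own table, keyed to the main one
      CREATE TABLE dbo.ProductImages (
         ProductID INT PRIMARY KEY REFERENCES dbo.Products(ProductID),
         ImageData VARBINARY(MAX)
      )

      -- Normal listings touch only the light table:
      SELECT ProductID, Name FROM dbo.Products
      -- The large object is fetched only on demand:
      SELECT ImageData FROM dbo.ProductImages WHERE ProductID = 1
      ```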
    • Do not call functions repeatedly within your stored procedures, triggers, functions and batches
      Instead store the result in a variable for later use.
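      For instance, instead of calling GETDATE() in every statement of a batch, call it once and reuse the value (table names are illustrative):

      ```sql
      DECLARE @Now DATETIME
      SET @Now = GETDATE()   -- evaluated once, reused below

      UPDATE dbo.Orders SET ShippedDate = @Now WHERE OrderID = 1
      INSERT INTO dbo.OrderLog (OrderID, LoggedAt) VALUES (1, @Now)
      ```

      This also guarantees every statement sees the same timestamp.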
    • Some good practices in Stored Procedure
      Do NOT use sp_XXX as a naming convention, as it causes additional searches and I/O (because the system Stored Procedure names start the same way). It also increases the possibility of a conflict with an existing system stored procedure.
      Use SET NOCOUNT ON to eliminate an extra network trip. The database engine returns the number of rows affected by each statement to the client, which is unnecessary, and you can avoid it with this statement.
      Use the WITH RECOMPILE clause in the EXECUTE statement (the first time) when the index structure changes, so that the compiled version of the SP can take advantage of the newly created indexes.
      Use default parameter values for easy testing.
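      Putting several of these together, a stored procedure skeleton might look like this (names are illustrative):

      ```sql
      CREATE PROCEDURE dbo.GetOrders      -- no sp_ prefix
         @CustomerID INT = NULL           -- default value for easy testing
      AS
      BEGIN
         SET NOCOUNT ON                   -- no "N rows affected" chatter

         SELECT OrderID, OrderDate
         FROM dbo.Orders                  -- fully qualified name
         WHERE @CustomerID IS NULL OR CustomerID = @CustomerID
      END
      ```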
    • Some good practices in Triggers
      Avoid the use of triggers as firing and executing them is an expensive process.
      Don't use triggers for what can be implemented using constraints.
      Don't use the same trigger for different triggering events (Insert, Update, Delete).
      Don't use transactional code inside a trigger; the trigger always runs within the transactional scope of the code that fired it.
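      As an example of the constraints point (table and constraint names are illustrative), a rule like "quantity must be positive" needs no trigger at all:

      ```sql
      -- Instead of a trigger that rejects negative quantities,
      -- let a CHECK constraint do the work declaratively:
      ALTER TABLE dbo.OrderDetails
         ADD CONSTRAINT CK_OrderDetails_Quantity CHECK (Quantity > 0)
      ```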
    • Some good practices in Views
      Use views for re-using complex TSQL blocks and to enable Indexed Views.
      Do not use views that retrieve data from a single table only, as they add unnecessary overhead.
    • Some good practices in Transactions
      Prior to SQL Server 2005, after BEGIN TRANSACTION and each subsequent modification statement, the value of @@ERROR had to be checked. If its value was non-zero, the last statement caused an error, and if any error occurred, the transaction had to be rolled back and an error raised. In SQL Server 2005 and onwards the Try..Catch block can be used to handle transactions in TSQL. So, try to use Try..Catch based transactional code.
      Try to avoid nested transactions. Use the @@TRANCOUNT variable to determine whether a transaction needs to be started.
      Start transaction as late as possible and commit/rollback the transaction as fast as possible to reduce the time period of resource locking.
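      A Try..Catch transaction template along these lines (SQL Server 2005+, table names are illustrative):

      ```sql
      BEGIN TRY
         BEGIN TRANSACTION
            UPDATE dbo.Accounts SET Balance = Balance - 100 WHERE AccountID = 1
            UPDATE dbo.Accounts SET Balance = Balance + 100 WHERE AccountID = 2
         COMMIT TRANSACTION
      END TRY
      BEGIN CATCH
         IF @@TRANCOUNT > 0
            ROLLBACK TRANSACTION

         -- re-raise the error so the caller knows the work failed
         DECLARE @Msg NVARCHAR(2048)
         SET @Msg = ERROR_MESSAGE()
         RAISERROR(@Msg, 16, 1)
      END CATCH
      ```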
    • Don't go crazy with Normalization
      Sometimes de-normalization can give you better performance at the cost of Data redundancy, so don't blindly Normalize everything.
    • Avoid wildcard characters at the beginning of a word while searching using the LIKE keyword
      A leading wildcard results in an index scan, which defeats the purpose of the index; a trailing wildcard still allows an index seek.
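      For example (table and column names are illustrative):

      ```sql
      -- BAD: the leading wildcard forces an index scan
      SELECT CustomerID FROM dbo.Customers WHERE CompanyName LIKE '%ware'

      -- BETTER: a trailing wildcard allows an index seek
      SELECT CustomerID FROM dbo.Customers WHERE CompanyName LIKE 'Soft%'
      ```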
    • Wherever possible, use Derived Tables, as they often perform much better.
      For example:
      SELECT MIN(Salary) 
      FROM Employees
      WHERE EmpID IN
                  (
                     SELECT TOP 2 EmpID
                     FROM Employees
                     ORDER BY Salary Desc
                  )

      can be written as:
      SELECT MIN(Salary) 
      FROM
         (
            SELECT TOP 2 Salary
            FROM Employees
            ORDER BY Salary DESC
         ) AS A

      which will give you the same result but can perform about twice as fast.

    These are only some of the Best Practices, but using them you will get better results.

Till soon.