Thursday, July 27, 2017

Adaptive Query Optimization in SQL Server 2017 and Azure SQL DB

   
     SQL Server 2017 comes with a new set of features that can help improve query performance. When SQL Server executes a query, it generates a number of candidate query plans, estimates the cost of each, and picks the plan with the lowest estimated cost. Query execution then uses that plan to run the query. In some cases, however, the chosen plan is not the best one for the query. The row count estimates might be wrong, which in turn makes the memory estimates wrong. If the memory estimate is too high, SQL Server reserves more memory than it needs before running the query; if it is too low, the query may start spilling to disk because not enough memory was granted to handle the actual number of rows.

     The Adaptive Query Optimization options are here to help in these situations. Let's see how they work. To enable them, set the database compatibility level to 140:
ALTER DATABASE dbname SET COMPATIBILITY_LEVEL = 140;
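     You can verify the setting afterwards with a simple catalog query; this is just a quick check, using the same placeholder database name as above:

SELECT name, compatibility_level
FROM sys.databases
WHERE name = N'dbname';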
  1. Interleaved Execution
      During query optimization, the cardinality estimation process is responsible for providing row count estimates for operators in order to derive estimated costs. The estimated costs help determine which plan gets selected for execution. If the cardinality estimates are incorrect, we still end up using the original plan despite the poor original assumptions.

   
     SQL Server 2017 changes this. During the optimization process, if the optimizer encounters a candidate for interleaved execution, which for this first version means multi-statement table-valued functions (MSTVFs), it pauses optimization, executes the applicable subtree, captures accurate cardinality estimates, and then resumes optimization for the downstream operations.

     When an MSTVF is found in a query, SQL Server 2014 and 2016 estimate its cardinality as a fixed 100 rows. So what happens if the MSTVF returns 200,000 rows rather than 100? The cached query plan becomes a poor fit and starts to hurt performance.
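     Below is a minimal sketch of the pattern interleaved execution targets. The function and table names (dbo.GetRecentOrders, dbo.Orders, dbo.OrderDetails) are made-up examples, not objects from this article:

CREATE OR ALTER FUNCTION dbo.GetRecentOrders (@FromDate datetime2)
RETURNS @Orders TABLE (OrderID int, CustomerID int, OrderDate datetime2)
AS
BEGIN
    -- Multi-statement TVF: before compatibility level 140 the optimizer
    -- assumes it returns 100 rows, no matter how many it really produces.
    INSERT INTO @Orders (OrderID, CustomerID, OrderDate)
    SELECT OrderID, CustomerID, OrderDate
    FROM dbo.Orders
    WHERE OrderDate >= @FromDate;
    RETURN;
END;
GO

-- Under compatibility level 140 the optimizer pauses here, executes the MSTVF,
-- and uses its real row count before costing the join that follows.
SELECT o.OrderID, d.ProductID, d.Quantity
FROM dbo.GetRecentOrders('2017-01-01') AS o
JOIN dbo.OrderDetails AS d ON d.OrderID = o.OrderID;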

       2.  Batch Mode Memory Grant Feedback

    SQL Server uses memory to store in-transit rows for hash join and sort operations. When a query execution plan is compiled for a statement, SQL Server estimates both the minimum memory required for execution and the ideal memory grant size needed to keep all rows in memory. This memory grant size is based on the estimated number of rows for the operator and the associated average row size. Performance suffers if the cardinality estimates are inaccurate.

    SQL Server 2017 recalculates the actual memory required by a query and then updates the grant value for the cached plan. When an identical query statement is executed again, it can use the revised memory grant size, reducing excessive memory grants that hurt concurrency and fixing under-estimated memory grants that cause expensive spills to disk.
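    One simple way to see how far the estimates are from reality is the sys.dm_exec_query_memory_grants DMV. The query below is just an illustrative check for currently executing requests; large gaps between the granted and used columns are exactly the situations memory grant feedback is meant to correct:

SELECT session_id,
       requested_memory_kb,
       granted_memory_kb,
       ideal_memory_kb,
       used_memory_kb,
       max_used_memory_kb
FROM sys.dm_exec_query_memory_grants;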

       3.  Batch Mode Adaptive Join
 
      SQL Server does not always know up front what the optimal join type should be, for example with a parameter-sensitive query where a low or high number of rows may flow through the plan depending on the actual parameter value. This option is responsible for finding and fixing bad join choices by deferring the decision until the first input has been scanned: if the row count of the build side is small, SQL Server switches to a nested loop join; otherwise it keeps the hash join in the query plan.
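      A sketch of a parameter-sensitive query that is a typical adaptive join candidate is shown below. The procedure and tables are hypothetical, and note that in SQL Server 2017 batch mode (and therefore the adaptive join) requires a columnstore index on one of the tables involved:

-- Hypothetical columnstore index to make batch mode possible for dbo.Orders.
CREATE NONCLUSTERED COLUMNSTORE INDEX IX_Orders_CS
    ON dbo.Orders (OrderID, CustomerID, OrderDate);
GO

CREATE OR ALTER PROCEDURE dbo.GetCustomerOrderDetails
    @CustomerID int
AS
    -- One customer may have a handful of orders, another millions. With an
    -- adaptive join, the same cached plan can pick nested loops or hash join
    -- at run time based on the actual build-side row count.
    SELECT o.OrderID, d.ProductID, d.Quantity
    FROM dbo.Orders AS o
    JOIN dbo.OrderDetails AS d ON d.OrderID = o.OrderID
    WHERE o.CustomerID = @CustomerID;
GO

EXEC dbo.GetCustomerOrderDetails @CustomerID = 42;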

  These new features should improve the quality of SQL Server query plans. Combined with the Automatic Tuning options in SQL Server 2017, they help keep your query plans in good shape.
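  As an example, the FORCE_LAST_GOOD_PLAN option lets SQL Server 2017 revert to the last known good plan when it detects a plan-choice regression. This assumes Query Store is enabled for the database, and uses the same placeholder database name as earlier:

ALTER DATABASE dbname SET QUERY_STORE = ON;
ALTER DATABASE dbname SET AUTOMATIC_TUNING (FORCE_LAST_GOOD_PLAN = ON);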
