Introduction

In the previous part of the article, the T-SQL windowing functions were introduced. An overview of the syntax was given, along with the purposes and differences of the various available functions. A distinction was made between what was available in SQL Server 2005 and what extra functionality was introduced in SQL Server 2012. This part of the article focuses on performance improvements, such as indexing, while a later part will cover a few use cases where windowing functions can be used to solve particular problems.

Performance Improvements

Test setup

In order to test performance, a transaction history table will be created. First of all, we need some transactions. These can be generated with the following code:

IF OBJECT_ID('Tempdb..#Transactions') IS NOT NULL
	DROP TABLE #Transactions;

WITH T0 AS (SELECT N	 FROM (VALUES (1),(1)) AS tmp(N))
	,T1 AS (SELECT N = 1 FROM T0 AS a CROSS JOIN T0 AS b)
	,T2 AS (SELECT N = 1 FROM T1 AS a CROSS JOIN T1 AS b)
	,T3 AS (SELECT N = 1 FROM T2 AS a CROSS JOIN T2 AS b)
	,T4 AS (SELECT N = 1 FROM T3 AS a CROSS JOIN T3 AS b)
	,T5 AS (SELECT N = 1 FROM T4 AS a CROSS JOIN T4 AS b) -- over 4 billion rows
SELECT
	 TransactionID		= ROW_NUMBER() OVER (ORDER BY NEWID())
	,TransactionAmount	= RAND(N) * 100000 - 30000
INTO #Transactions
FROM
	(
	SELECT TOP 60000
		 N = ROW_NUMBER() OVER (ORDER BY (SELECT NULL))
	FROM T5
	) tmp
ORDER BY TransactionID;

The common table expressions T0 through T5 generate a virtual table with over 4 billion rows (more on this later in the article). Using the subquery labelled tmp, we select 60,000 rows out of this virtual table, using the TOP clause. A row number is generated using the ROW_NUMBER function. Since the ORDER BY uses SELECT NULL, the optimizer knows it doesn’t actually have to sort the incoming rows. This means the row number is generated on the fly without any sorting.

Next, a transaction ID is generated, again using the ROW_NUMBER function, but this time the row number is based on the result of the NEWID function, which guarantees a random order. The transaction amount is generated using the RAND function, which produces pseudo-random values between 0 and 1. The result is multiplied by 100,000, and 30,000 is subtracted, giving us pseudo-random values between -30,000 and 70,000. The row number from the subquery is passed to the RAND function as a seed, to make sure a new value is generated for each row.
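The seed is important: RAND() without a seed is evaluated only once per query, so every row would receive the same value. The following snippet (purely illustrative, not part of the test setup) shows the difference:

-- RAND() without a seed is a runtime constant:
-- all five rows return the same value.
SELECT TOP (5) NoSeed = RAND()
FROM sys.objects;

-- Seeding RAND with a per-row value (here a row number) produces
-- a different value for each row.
SELECT TOP (5) Seeded = RAND(ROW_NUMBER() OVER (ORDER BY (SELECT NULL)))
FROM sys.objects;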

The reason a subquery is used to fetch the 60,000 rows from the virtual table, instead of putting the TOP keyword in the final SELECT, is to make sure the set is limited first, before we generate GUIDs and random values. You typically don’t want to do that over 4 billion rows.

Using the following code, we can create some accounts.

IF OBJECT_ID('Tempdb..#Accounts') IS NOT NULL
	DROP TABLE #Accounts;

WITH T0 AS (SELECT N	 FROM (VALUES (1),(1)) AS tmp(N))
	,T1 AS (SELECT N = 1 FROM T0 AS a CROSS JOIN T0 AS b)
	,T2 AS (SELECT N = 1 FROM T1 AS a CROSS JOIN T1 AS b)
	,T3 AS (SELECT N = 1 FROM T2 AS a CROSS JOIN T2 AS b)
SELECT TOP 100
	Account = CONCAT('Account',ROW_NUMBER() OVER (ORDER BY (SELECT NULL)))
INTO #Accounts
FROM T3;

Now we can cross-join the accounts and the transactions to generate a transaction history table with 6,000,000 rows. Each account will have the exact same transaction history, of course, but SQL Server doesn’t know that.
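The definition of the target table is not shown in this article. A minimal sketch of what dbo.TransactionHistory might look like, assuming a surrogate IDENTITY column as the clustered primary key (this is consistent with the clustered index scan we will see in the plan, but the actual DDL may differ):

IF OBJECT_ID('dbo.TransactionHistory') IS NOT NULL
	DROP TABLE dbo.TransactionHistory;

-- Assumption: the clustered index sits on a surrogate IDENTITY column;
-- the real table definition is not part of the original article.
CREATE TABLE dbo.TransactionHistory
(
	 TransactionHistoryID	BIGINT IDENTITY(1,1)	NOT NULL
	,Account				VARCHAR(50)				NOT NULL
	,TransactionAmount		FLOAT					NOT NULL
	,TransactionID			BIGINT					NOT NULL
	,CONSTRAINT PK_TransactionHistory PRIMARY KEY CLUSTERED (TransactionHistoryID)
);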

INSERT INTO [dbo].[TransactionHistory](Account,TransactionAmount,TransactionID)
SELECT
	 Account
	,TransactionAmount
	,TransactionID
FROM #Accounts
CROSS JOIN #Transactions;

The table takes up about 196 MB of space on disk.
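One way to verify the size on your own system (not shown in this article) is sp_spaceused:

-- Reports the row count and the space used by the table.
EXEC sp_spaceused N'dbo.TransactionHistory';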

The test query is the following one:

SELECT
	 Account
	,TransactionAmount
	,Balance = SUM(TransactionAmount) OVER (PARTITION BY Account
											ORDER BY TransactionID
											ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW)
FROM dbo.TransactionHistory
ORDER BY Account, TransactionID;

It displays the total transaction history for each account, along with the balance for that account at each transaction.

The query runs for about 32 seconds on my system and generates a substantial number of logical reads.
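The execution times and logical reads quoted throughout this article can be reproduced with the standard session settings below; the exact numbers will of course vary per machine.

-- Report elapsed/CPU time and per-table logical reads for each statement.
SET STATISTICS TIME ON;
SET STATISTICS IO ON;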

When we take a look at the query plan, we can see a clustered index scan and a very expensive sort operator.

The sort operator is there to sort the data for the OVER clause, according to the columns used in the PARTITION BY and the ORDER BY. The ORDER BY at the end of the query is for presentation purposes only, but because it uses the same columns as in the OVER clause, this one comes for “free.”

The query also makes use of parallelism on my machine, but this can depend on local server settings.

Indexing

Let’s tune this query to get more acceptable performance. The easiest solution is to add an index to the table with the following characteristics:

  • The first index columns are those used in the PARTITION BY. In our case, that is Account.
  • The rest of the index columns are those used in the ORDER BY. Here that’s TransactionID in ascending order.
  • If any other columns are referenced in the query, they are kept in the index as included columns.

Those kinds of indexes are nicknamed “POC indexes,” for partition, order and covering.

For our test case, we create the following index:

CREATE NONCLUSTERED INDEX [IX_TransactionHistory] ON [dbo].[TransactionHistory]
(
	[Account] ASC,
	[TransactionID] ASC
)
INCLUDE ([TransactionAmount]);

When we run the query again, execution time drops to almost 23 seconds. There are also fewer logical reads involved.

The execution plan is now much simpler and the expensive sort has been eliminated:

However, the plan does not make use of parallelism, so this might explain why there isn’t a bigger difference in execution time.

The downside of such an index tailored to a windowing function is that it can only support windowing functions that use the same columns in the PARTITION BY or ORDER BY clause. If the partition changes or the sort order is reversed, for example, the index becomes useless. Also, if you have multiple window calculations using different windows, the index can only support one of them. An alternative might be to create multiple indexes and to write a separate query for each windowing function, joining everything together at the end; a sketch of this idea follows below. This of course has a higher maintenance cost. As always, test thoroughly.
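As a rough illustration of that alternative (this sketch is not part of the original article), the query below computes two differently ordered running totals in separate common table expressions and joins them on the columns that identify a row within the window. It assumes a second index sorted on TransactionID descending would be created to support the second calculation.

WITH RunningBalance AS
(
	SELECT
		 Account
		,TransactionID
		,TransactionAmount
		,Balance = SUM(TransactionAmount) OVER (PARTITION BY Account
												ORDER BY TransactionID
												ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW)
	FROM dbo.TransactionHistory
)
,ReverseBalance AS
(
	SELECT
		 Account
		,TransactionID
		,ReverseBalance = SUM(TransactionAmount) OVER (PARTITION BY Account
														ORDER BY TransactionID DESC
														ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW)
	FROM dbo.TransactionHistory
)
SELECT
	 rb.Account
	,rb.TransactionAmount
	,rb.Balance
	,rv.ReverseBalance
FROM RunningBalance rb
JOIN ReverseBalance rv
	ON  rv.Account       = rb.Account
	AND rv.TransactionID = rb.TransactionID
ORDER BY rb.Account, rb.TransactionID;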

The APPLY Trick

As you may have noticed, the query using the newly created POC index didn’t use parallelism. By rewriting the query, there is a way to trick the optimizer into creating a parallel plan. This trick uses CROSS APPLY and was introduced by Adam Machanic (blog | twitter). It only works, though, when the windowing function has a PARTITION BY clause.

The idea is that the set of objects used in the partition clause is stored in a separate table. In our case, that is the Accounts table; in data warehouse scenarios, this is typically a dimension table. Using this separate set, we can filter the original query with CROSS APPLY. Taking a look at the query will make this clearer:

SELECT
	 a.Account
	,tmp.TransactionAmount
	,tmp.Balance
FROM #Accounts a
CROSS APPLY
	(
	SELECT
		 TransactionAmount
		,TransactionID
		,Balance = SUM(TransactionAmount) OVER (--PARTITION BY Account -- This is now removed from the query 
												ORDER BY TransactionID
												ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW)
	FROM dbo.TransactionHistory th
	WHERE th.Account = a.Account
	) tmp
ORDER BY Account, TransactionID;

The outer query selects all of the accounts from the Accounts table and uses them to filter the original windowing query, from which the PARTITION BY clause has been removed. This is the execution plan returned:

At the top right, we can see that the data is read out of the accounts table. This is used to perform index seeks against the transaction history table. However, execution time did not improve. In fact, it was a whopping 39 seconds!

One of the reasons is that the ORDER BY at the end of the query is no longer free, which results in a very expensive sort operator. If we remove the ORDER BY, we get the following result:

This time the query clocks in at 29 seconds. Better than the original query without the index, but not better than the original query with the index. The reason for this might be the data distribution of the accounts versus the transactions, which makes the total number of index seeks less efficient than the single index scan. However, displaying the full result set of 6,000,000 rows in Management Studio has a certain overhead. If we instead insert the result set directly into a table using SELECT INTO (a sketch of this pattern follows the list below), the following execution times are achieved for the three queries:

  • Original query without index: 12 seconds
  • Original query with index: 10 seconds
  • APPLY query with index: almost 4 seconds
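The SELECT INTO variant simply materializes the result set in a temporary table instead of streaming it to the client. A sketch for the original query, with #Results as an example name (mirroring the commented-out INTO in the query further below):

IF OBJECT_ID('Tempdb..#Results') IS NOT NULL
	DROP TABLE #Results;

SELECT
	 Account
	,TransactionAmount
	,Balance = SUM(TransactionAmount) OVER (PARTITION BY Account
											ORDER BY TransactionID
											ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW)
INTO #Results
FROM dbo.TransactionHistory
ORDER BY Account, TransactionID;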

To prove I did not make up that last number:

It is clear that the APPLY trick can have benefits, but not in all cases; it has to be tested thoroughly. Another advantage of this trick shows up when the ORDER BY is reversed. Let’s say, for example, that we now want to calculate the balance in reverse order (for whatever reason). The original query then looks like this:

SELECT
	 Account
	,TransactionAmount
	,Balance = SUM(TransactionAmount) OVER (PARTITION BY Account
											ORDER BY TransactionID DESC -- reverse order
											ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW)
--INTO #Results
FROM dbo.TransactionHistory
ORDER BY Account, TransactionID;

The query now runs at 38 seconds. The reason is that the index is less useful here, because it cannot return the data in the required sorted order, which means an expensive sort operator has to be used again. Because the final ORDER BY uses a different sort order, the plan even needs a second sort operator! If the ORDER BY is removed, the query finishes at 30 seconds, slightly better than the original query without the index, because scanning the nonclustered index is a bit more efficient than scanning the clustered index.

The APPLY query, however, is unaffected by reversing the ORDER BY: it finishes at 23 seconds.

The query plan can do an index seek on Account and return the rows sorted according to TransactionID descending, because it can scan the rows in reverse order. This means that the index can be used in more scenarios than with the original query.
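For completeness, the reverse-order version of the APPLY query (not shown explicitly in this article) differs only in the inner ORDER BY of the window:

SELECT
	 a.Account
	,tmp.TransactionAmount
	,tmp.Balance
FROM #Accounts a
CROSS APPLY
	(
	SELECT
		 TransactionAmount
		,TransactionID
		,Balance = SUM(TransactionAmount) OVER (ORDER BY TransactionID DESC -- reverse order
												ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW)
	FROM dbo.TransactionHistory th
	WHERE th.Account = a.Account
	) tmp
ORDER BY Account, TransactionID;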

ROWS versus RANGE

As I mentioned in the first part of the article, there can be a performance difference between using RANGE and ROWS in the frame extent. There are a few reasons, but the most important one is that, under certain circumstances, the Window Spool operator can use a highly optimized in-memory work table. If those conditions are not met, the query plan has to fall back to a traditional, slower on-disk work table, which can cause tremendous overhead.

Those conditions are:

  • The distance between two extreme points in the window is not bigger than 10,000 rows.
  • The number of rows in the frame can be calculated. This is not the case with RANGE.
  • The LAG/LEAD functions do not use an expression in the offset.

Even if the RANGE frame is identical to the ROWS frame (which is the case when the ordering is unique), the optimizer cannot detect this, and it will have to use the on-disk work table. That’s why it is so important to replace the default RANGE with the possibly much faster ROWS.
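Keep in mind that RANGE is exactly what you get when you write an ORDER BY in the OVER clause without specifying a frame extent at all. In the query below, the two balance columns are equivalent:

SELECT
	 Account
	,TransactionAmount
	-- When no frame extent is specified, the default is RANGE, not ROWS:
	,BalanceDefault	= SUM(TransactionAmount) OVER (PARTITION BY Account
												   ORDER BY TransactionID)
	-- ...which is the same as spelling out RANGE explicitly:
	,BalanceRange	= SUM(TransactionAmount) OVER (PARTITION BY Account
												   ORDER BY TransactionID
												   RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW)
FROM dbo.TransactionHistory;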

Let’s take a look again at the test query from the previous section, when using an index. It had the following IO statistics:

If we replace ROWS with RANGE in the query, we get the following:

The query is suddenly almost three times as slow. You can also see that the work table now has a lot of scans and logical reads, while previously those were 0. Avoid RANGE unless you absolutely need it.

Conclusion

In this part of the article about windowing functions, we introduced a few optimization techniques for queries with windowing functions:

  • Adding an index whose key columns are the columns used in the PARTITION BY and ORDER BY. Other columns referenced in the query are added as included columns.
  • Rewriting the query using CROSS APPLY to enforce parallelism. This might not always work and needs thorough testing.
  • Replacing the default RANGE with the more efficient ROWS in the frame extent.

For more detailed information on performance considerations for window functions, check out Chapter 4 of the excellent book by Itzik Ben-Gan, Microsoft SQL Server 2012 High-Performance T-SQL Using Window Functions.

In the last part of the article, we will take a look at a few use cases for windowing functions.