Performance Woes & Wows #5: IN vs UNION

Querying the Same Thing. Faster.

In a previous post, we saw that equivalent queries can produce distinct execution plans – yes, how nice and intuitive of you, SQL Server. But this happens for a logical reason: the Query Optimizer doesn't hate you, it just doesn't have the means to explore every possible semantic equivalence. Hence, the shape of the execution plan will be intrinsically bound to the way we write our queries.

In this post, we'll extend this notion to a more real-world example, which I found very interesting because it seemed kind of counterintuitive to me.


Let’s assume we want to count every Question, Wiki and TagWikiExcerpt post in the Posts table of the StackOverflow2010 database. In order to achieve that, we can create the following index:

CREATE INDEX IX_PostTypeId ON Posts (PostTypeId)

And then execute this query:

SELECT COUNT(1)
FROM Posts 
WHERE PostTypeId IN (1,3,4)

We can confirm that the execution plan comprises a scan of the previously created index and has an estimated cost of 3.27277. The IO statistics show 1915 logical reads and, regarding CPU time and duration, 129ms and 123ms, respectively. These time statistics remained very consistent across all the performed executions.
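The logical reads and timings above can be captured with SET STATISTICS IO and SET STATISTICS TIME – a minimal sketch of the measurement, assuming the Posts table and the index created above:

SET STATISTICS IO, TIME ON

SELECT COUNT(1)
FROM Posts
WHERE PostTypeId IN (1,3,4)

SET STATISTICS IO, TIME OFF

The logical reads show up in the Messages tab, alongside the CPU and elapsed times for each execution.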

The Same Thing.

Now let’s do the same thing, expecting different results. Well, almost the same thing. Let’s just change the previous query a little bit, by replacing the IN operator with this logically equivalent, but uglier and more complex version:

SELECT SUM(c) FROM
(
	SELECT COUNT(1) c FROM Posts WHERE PostTypeId = 1
	UNION ALL
	SELECT COUNT(1) c FROM Posts WHERE PostTypeId = 3
	UNION ALL
	SELECT COUNT(1) c FROM Posts WHERE PostTypeId = 4
) x

These changes are reflected in the execution plan, as expected:

I’m so ugly // But that’s okay, ‘cause so are you


The really interesting part about this plan is… That it’s better than the first one. Don’t believe me? Check for yourself:

This query only consumed 47ms of CPU time (-63.6%) and ran for a total of 76ms (-38.2%). The difference regarding logical reads was not significant, but a slightly lower value was nevertheless achieved: 1912. Accordingly, the estimated plan cost was also lower, at 2.68443. Sure, some assumptions behind the cost calculation may be outdated and not entirely accurate, but this time, the prediction was entirely correct.

So What?

Rewriting queries using equivalent syntax seems like a valid approach when trying to improve query performance. Nevertheless, I don’t know of any set of rules, or even guidelines, that may help in this decision process. It’s more of an empirical procedure: one day, on the threshold of despair, you try a thing that unexpectedly works, and later, when a similar pattern emerges, you’ll know that the time has come. Why does it work? Because of reasons. But if it works, it ain’t stupid. Best advice of the year.

Not All Seeks Are Created Equal #2

As we saw in the previous post, Index Seek operations can look for single rows (Singleton Seek) or specific subsets of rows (Range Seek).

Singleton Seek

A Singleton Seek occurs when, at most, one single row will satisfy the search condition defined by the Seek Key.
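For instance – and this is just an illustrative query, assuming the standard StackOverflow2010 schema, where Id is the clustered primary key of Posts – an equality predicate on a unique index can match one row at most, so it is executed as a Singleton Seek:

SELECT *
FROM Posts
WHERE Id = 4 -- Id is unique, so at most one row can satisfy this predicate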

The IO statistics for the same execution presented above will show a scan count of zero, also confirming the Singleton Seek occurrence.

The following image depicts a similar seek operation, as exemplified by the red arrows:

In this case, the B-tree is traversed from the root to the leaf node and the specific data page is fetched.

Range Seek

Conversely, a Range Seek will occur when SQL Server assumes that more than one row can be retrieved by the Seek Key.

The following image depicts an equivalent seek operation, as exemplified by the red arrows:

Similarly to a Singleton Seek, the B-tree is also traversed down to a specific leaf node, but in this case, this leaf node will hold either the first or the last key within the range (depending on the scan direction), and the remaining leaf nodes within the range limit will then be scanned.
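To illustrate, reusing the non-unique IX_PostTypeId index from the first section (so the names here carry over from that assumption): an equality predicate on a non-unique key can match many rows, hence it is executed as a Range Seek.

SELECT COUNT(1)
FROM Posts
WHERE PostTypeId = 3 -- PostTypeId is not unique: the Seek Key delimits a range of rows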

And that’s all he wrote.

Not All Seeks Are Created Equal #1

Seeks seeks seeks… The number of the beast. Seeks are cool though. I like seeks. Sometimes, at least. Perhaps some seeks are cooler than others. I just said seeks six times, now we’re ready to go.

Let’s begin by creating a temporary table that holds values from 1 to 10 000.


-- materialize numbers 1 to 10 000 into a temp table with a clustered primary key
WITH n(n) AS
(
	SELECT 1
	UNION ALL
	SELECT n + 1 FROM n WHERE n < 10000
)
SELECT n AS Id
INTO #Numbers
FROM n
OPTION (MAXRECURSION 10000)

ALTER TABLE #Numbers ADD CONSTRAINT PK_Numbers PRIMARY KEY CLUSTERED (Id)

Now, let’s assume that we need to retrieve a specific range of values – e.g. between 500 and 563. There are a couple of ways to do that (it’s not exactly rocket science), but let’s consider the following two approaches:

Using the BETWEEN operator:

	SELECT Id FROM #Numbers
	WHERE Id BETWEEN 500 AND 563

Using the IN operator:

	SELECT Id FROM #Numbers
	WHERE Id IN (500, 501, 502, /* ... */ 562, 563)

Uh, truly fascinating.

But, behind these boring and functionally equivalent queries, there is a dirty seekret – wow, I’m on fire today. Both result in a Clustered Index Seek, but one is sus. Discuss!

That’s quite a difference. The second query results in 64 times more logical reads. But why?

One Seek To Rule Them All

In the execution plan of the query that uses the BETWEEN operator, it’s possible to notice that there’s only one seek operation, whose range starts at 500 and then traverses the leaf level of the index until it finds a row beyond the range limit – 563 in this case.

The Meeseeks Box

Seek #2, in the execution plan of the query that uses the IN operator, is rather different:

The list continues down to a total of 64 individual seeks.

The underlying reason for the 128 logical reads is related to the size of our table and, consequently, the size of the index. Since the index is large enough to need a separate root page, each one of the 64 seeks implies 2 logical reads: one for the root page and one for the leaf page.
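That depth can be confirmed with sys.dm_db_index_physical_stats – a sketch, assuming the numbers were materialized into a temp table named #Numbers with a clustered primary key, as in the setup above:

SELECT index_depth, page_count
FROM sys.dm_db_index_physical_stats(DB_ID('tempdb'), OBJECT_ID('tempdb..#Numbers'), NULL, NULL, 'DETAILED')

An index_depth of 2 – a root page plus the leaf level – matches the 2 logical reads per seek.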

But I digress, the bottom line here is that Seek #2 is the impostor. It is not actually a seek, but a whole bunch of them.