Not All Seeks Are Created Equal #1

Seeks seeks seeks… The number of the beast. Seeks are cool though. I like seeks. Sometimes, at least. Perhaps some seeks are cooler than others. I just said seeks six times, now we’re ready to go.

Let’s begin by creating a temporary table that holds values from 1 to 10 000.

CREATE TABLE #T(id integer PRIMARY KEY CLUSTERED);

WITH n(n) AS
(SELECT 1 UNION ALL SELECT n+1 FROM n WHERE n < 10000)

INSERT INTO #T(id) 
SELECT n FROM n ORDER BY n
OPTION (MAXRECURSION 10000);

Now, let’s assume that we need to retrieve a specific range of values – e.g. between 500 and 563. There are a couple of ways to do that (it’s not exactly rocket science), but let’s consider the following two approaches:

Using the BETWEEN operator:

SELECT 
	Id 
FROM 
	#T
WHERE 
	Id BETWEEN 500 AND 563

Using the IN operator:

SELECT 
	Id 
FROM 
	#T
WHERE 
	Id in
	(
		500,501,502,503,504,505,506,507,508,509,510,511,512,
		513,514,515,516,517,518,519,520,521,522,523,524,525,
		526,527,528,529,530,531,532,533,534,535,536,537,538,
		539,540,541,542,543,544,545,546,547,548,549,550,551,
		552,553,554,555,556,557,558,559,560,561,562,563
	)

Uh, truly fascinating.

But, behind these boring and functionally equivalent queries, there is a dirty seekret – Wow, I’m on fire today. Both result in a Clustered Index Seek, but one is sus. Discuss!

That’s quite a difference. The second query results in 64 times more logical reads. But why?
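If you want to reproduce the comparison yourself, a minimal way to capture those numbers is to enable I/O statistics before running both queries (the exact counts may vary slightly in your environment):

SET STATISTICS IO ON;

-- Query 1: a single range seek
SELECT id FROM #T WHERE id BETWEEN 500 AND 563;

-- Query 2: the IN list (elided here; all 64 values from 500 to 563)
SELECT id FROM #T WHERE id IN (500, 501, 502 /* ... */, 563);

-- Compare the "logical reads" reported for each statement in the Messages tab.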

One Seek To Rule Them All

In the execution plan of the query that uses the BETWEEN operator, it’s possible to notice that there’s only one seek operation, whose range starts at 500 and then traverses the leaf level of the index until it finds a row beyond the range limit – 563 in this case.

The Meeseeks Box

Seek #2, in the execution plan of the query that uses the IN operator, is rather different:

The list continues down to a total of 64 individual seeks.

The underlying reason for the 128 logical reads is related to the size of our table and, consequently, the size of the index. Since the index is large enough to need a separate root page, each one of the 64 seeks implies 2 logical reads: one on the root page and one on the leaf page.
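You can confirm the index depth that explains those two reads per seek – a quick sketch, assuming the temporary table still exists:

SELECT index_depth, page_count
FROM sys.dm_db_index_physical_stats(
    DB_ID('tempdb'), OBJECT_ID('tempdb..#T'), NULL, NULL, 'DETAILED');
-- An index_depth of 2 means each seek touches the root page plus one leaf page.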

But I digress, the bottom line here is that Seek #2 is the impostor. It is not actually a seek, but a whole bunch of them.

Query Performance Baselining With Query Store

When we’re fine-tuning queries individually, it’s easy to observe and even accurately measure the improvements that specific changes have in specific executions. Nevertheless, when changes are deployed to production, they are put to the test against a much more complex environment, where all sorts of crazy things happen concurrently and users on a personal quest to destroy your work will try to make unexpected things happen. Still, even if things go according to plan, it’s important to translate those improvements into numbers, for a couple of reasons.

Query Store is not the only tool in the toolbox, but if you choose to establish a baseline using, let’s say, Extended Events, you need to set up the respective session and, obviously, the baseline will be restricted to the extent of that session. If data is collected during a short period of time, there’s a risk of catching an abnormal pattern, hence distorting the results. On the other hand, long-running sessions must be configured even more carefully, so they don’t introduce undesired overhead. Extended Event sessions are only as lightweight as the data we’re collecting. I won’t even mention Trace/Profiler, but I heard that there is a special place in hell for anyone still using it.

I Know What You Queried Last Summer

Don’t get me wrong, Extended Events are awesome, but specifically for baselining, I personally find Query Store to be more practical. If the queries executed in the database are compatible with the healthy lifestyle of Query Store, just go for it (e.g. ad-hoc queries are like trans fats that can quickly force your Query Store to appear on a “My 600 lb. Life” episode). After the initial setup, where a couple of configurations regarding what’s captured and for how long are defined, you will never have to worry about setting up sessions or managing the underlying data again. You’ll always have relevant historical data at your disposal, going as far back as the chosen settings allow.
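That initial setup boils down to a few ALTER DATABASE options. A minimal sketch – the database name and the values shown here are placeholders, not recommendations:

ALTER DATABASE [YourDatabase] SET QUERY_STORE = ON;
ALTER DATABASE [YourDatabase] SET QUERY_STORE
(
    OPERATION_MODE = READ_WRITE,                    -- collect and persist data
    QUERY_CAPTURE_MODE = AUTO,                      -- skip insignificant ad-hoc noise
    MAX_STORAGE_SIZE_MB = 1024,                     -- cap the space used inside the database
    CLEANUP_POLICY = (STALE_QUERY_THRESHOLD_DAYS = 30), -- how far back history is kept
    INTERVAL_LENGTH_MINUTES = 60                    -- aggregation interval for runtime stats
);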

Sauce

The query below will show relevant execution stats for a particular procedure, during the chosen time interval:

DECLARE @StartDate	DATETIME = ''
DECLARE @EndDate	DATETIME = ''
DECLARE @ObjName	VARCHAR(100) = ''

SELECT 
    p.query_id query_id,
    q.object_id object_id,
    ISNULL(OBJECT_NAME(q.object_id), '') object_name,
    qt.query_sql_text query_sql_text,
    ROUND(CONVERT(float, SUM(rs.avg_duration*rs.count_executions))/NULLIF(SUM(rs.count_executions), 0)*0.001, 2) avg_duration,
    ROUND(CONVERT(float, SUM(rs.avg_cpu_time*rs.count_executions))/NULLIF(SUM(rs.count_executions), 0)*0.001, 2) avg_cpu_time,
    ROUND(CONVERT(float, SUM(rs.avg_logical_io_reads*rs.count_executions))/NULLIF(SUM(rs.count_executions), 0)*8, 2) avg_logical_io_reads,
    ROUND(CONVERT(float, SUM(rs.avg_logical_io_writes*rs.count_executions))/NULLIF(SUM(rs.count_executions), 0)*8, 2) avg_logical_io_writes,
    ROUND(CONVERT(float, SUM(rs.avg_physical_io_reads*rs.count_executions))/NULLIF(SUM(rs.count_executions), 0)*8, 2) avg_physical_io_reads,
    SUM(rs.count_executions) count_executions,
    ROUND(CONVERT(float, SUM(rs.avg_clr_time*rs.count_executions))/NULLIF(SUM(rs.count_executions), 0)*0.001, 2) avg_clr_time,
    ROUND(CONVERT(float, SUM(rs.avg_dop*rs.count_executions))/NULLIF(SUM(rs.count_executions), 0)*1, 0) avg_dop,
    ROUND(CONVERT(float, SUM(rs.avg_query_max_used_memory*rs.count_executions))/NULLIF(SUM(rs.count_executions), 0)*8, 2) avg_query_max_used_memory,
    ROUND(CONVERT(float, SUM(rs.avg_rowcount*rs.count_executions))/NULLIF(SUM(rs.count_executions), 0)*1, 0) avg_rowcount,
    COUNT(DISTINCT p.plan_id) num_plans
FROM 
	sys.query_store_runtime_stats rs WITH (NOLOCK)
JOIN 
	sys.query_store_plan p ON p.plan_id = rs.plan_id 
JOIN 
	sys.query_store_query q ON q.query_id = p.query_id
JOIN 
	sys.query_store_query_text qt ON q.query_text_id = qt.query_text_id 
JOIN 
	sys.query_store_runtime_stats_interval i ON rs.runtime_stats_interval_id = i.runtime_stats_interval_id
WHERE 
	i.start_time BETWEEN @StartDate AND @EndDate AND
	OBJECT_NAME(q.object_id) = @ObjName
GROUP BY 
	p.query_id, qt.query_sql_text, q.object_id
HAVING 
	COUNT(DISTINCT p.plan_id) >= 1
ORDER BY 
	avg_cpu_time DESC


Simply run it for both desired time intervals and feel the joy of baselining query performance without breaking a sweat.

Isolation Levels Overview

What better time to talk about isolation than now…

SQL Server implements six distinct isolation levels that are fundamentally distinguished by the way they approach potential problems caused by concurrency. Each isolation level maintains a specific balance between integrity and performance, which is materialized by the occurrence, or not, of certain concurrency phenomena.

  • Dirty Read: Occurs when a transaction reads data that was modified, but not committed, by another concurrent transaction.
  • Non-Repeatable Read: Occurs when, during the course of a transaction, a row is retrieved twice and the values within the row differ between reads.
  • Phantom Read: Occurs when, in the course of a transaction, new rows are added to or removed from the set of records being read by another transaction.

The ANSI/ISO SQL-92 standard defines the rules for each isolation level, specifying which concurrency phenomena may, or may not, be observed in each.

Isolation Level                      Dirty Reads   Non-Repeatable Reads   Phantoms
Serializable                         No            No                     No
Snapshot                             No            No                     No
Repeatable Read                      No            No                     Yes
Read Committed (Snapshot Isolation)  No            Yes                    Yes
Read Committed (Locking Read)        No            Yes                    Yes
Read Uncommitted                     Yes           Yes                    Yes

Serializable

Serializable is the most isolated of the available isolation levels. A serializable execution is defined to be an execution of the operations of concurrently executing SQL-transactions that produces the same effect as some serial execution of those same SQL-transactions [1]. This implies that increasing concurrency is allowed, as long as the effects of those transactions still correspond to some possible order of serial execution. Conceptually, the serializability requirement is sufficient to exclude dirty reads, non-repeatable reads and phantom concurrency phenomena.

One funny fact regarding this isolation level is that, if you have a transaction T1 that started counting a table with 600 rows and, before it finished, another transaction T2 added 66 rows, both 600 and 666 would be logically correct results for T1. In SQL Server’s implementation of the Serializable isolation level, transactions do not necessarily get a point-in-time view as of the start of the transaction, but rather see the latest committed data, as of the moment the data was first locked for access.
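To make that concrete, here’s a minimal sketch of T1 – the table name dbo.Souls is made up for the example, and it’s the key-range locks taken by the SELECT that keep T2’s inserts out of the counted range until T1 finishes:

SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
BEGIN TRANSACTION; -- T1

-- Takes key-range locks on the scanned range; a concurrent T2 trying to
-- INSERT into that range will block until T1 commits or rolls back.
SELECT COUNT(1) FROM dbo.Souls;

COMMIT;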

Snapshot

A transaction under the Snapshot isolation level behaves as if it were executed against its own private copy of the committed state of the database, taken at the moment the transaction started [2].

By providing a point-in-time view of committed data, snapshot isolation protects against all of the above-mentioned concurrency phenomena. Dirty reads are avoided by reading only committed data, and the static nature of snapshots prevents both non-repeatable reads and phantoms. However, snapshots may not reflect the current state of the database, as these transactions ignore any committed changes made by other concurrent transactions after the snapshot transaction begins.

Regarding data changes, conflicts are automatically handled. A snapshot write conflict occurs when a snapshot transaction T1 attempts to modify a row that has been modified by another transaction T2 after T1 began. When this happens, the transaction attempting the later modification is rolled back – the first committer wins.

Snapshot isolation is vulnerable to write skews, which occur when two concurrent transactions each read data that the other modifies. No write conflict occurs because each modifies a distinct row, and neither one sees the changes made by the other, because both are reading from a point in time prior to the one where the changes were made.
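A hedged sketch of a write skew – the dbo.OnCall table and its contents are invented for the example, and the database is assumed to have ALLOW_SNAPSHOT_ISOLATION turned on:

-- Hypothetical setup: two doctors, and the business rule says at least one must stay on call.
CREATE TABLE dbo.OnCall (Doctor VARCHAR(10) PRIMARY KEY, IsOnCall BIT);
INSERT INTO dbo.OnCall VALUES ('Alice', 1), ('Bob', 1);

-- Session 1:
SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
BEGIN TRAN;
SELECT COUNT(1) FROM dbo.OnCall WHERE IsOnCall = 1;        -- sees 2
UPDATE dbo.OnCall SET IsOnCall = 0 WHERE Doctor = 'Alice';

-- Session 2 (concurrently, before Session 1 commits):
-- SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
-- BEGIN TRAN;
-- SELECT COUNT(1) FROM dbo.OnCall WHERE IsOnCall = 1;     -- also sees 2
-- UPDATE dbo.OnCall SET IsOnCall = 0 WHERE Doctor = 'Bob';
-- COMMIT;

COMMIT;
-- No write conflict is raised (distinct rows were modified), yet the
-- invariant is broken: both doctors are now off call.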

Repeatable Read

The repeatable read isolation level guarantees that data will not change for the life of the transaction once it has been read for the first time [3] but, ironically, it does not actually guarantee that reads are repeatable.

The potential issues with Repeatable Read transactions are fundamentally associated with the possibility for the occurrence of phantom reads. For example, while a transaction T1 is counting the rows of a table, another transaction T2 may insert new rows in the range already processed by T1. The shared locks taken by transaction T1 on previously-read rows will prevent those rows from being changed, but don’t prevent new rows from being inserted into the range of values already processed. A similar scenario may happen if T2 updates a row not yet processed by T1 in a way that the new value fits somewhere in the range already processed by T1.

Read Committed (Locking Read)

A transaction running under this isolation level reads only committed data, which by definition excludes dirty reads.

Regarding the locking mechanism, at row level, the shared locks are released when a shared lock is taken on the next row. At page level, the shared locks are released when the first row on the next page is read, and so forth. This differs from Repeatable Read isolation, which also takes shared locks when reading data, but will hold locks to the end of the enclosing transaction.

Although only committed data is read under the Read Committed isolation level, this doesn’t mean it will be the most recently committed data; in fact, it may be out-of-date. Additionally, the standard itself states that non-repeatable reads and phantoms may occur.

Since only a single row is share-locked at any given moment in time, it is clearly possible for concurrent transactions to modify the unlocked rows in the index our query is traversing. If these concurrent modifications change index key values, they will cause rows to move around within the index structure[4]. This means that there is no point-in-time view, and rows can be missed or counted twice.

Read Committed (Snapshot Isolation)

Read Committed Snapshot Isolation (RCSI) provides a point-in-time view of committed data, as of the time the statement began – contrary to Locking Read Committed, where that view refers to the moment each row is briefly locked and physically read. A point-in-time view cannot suffer from the problems of missing rows or encountering the same row multiple times, which are both possible under locking read committed isolation [5].
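RCSI is a database-level setting rather than something you choose per session. A sketch of how it’s typically switched on – the database name is a placeholder, and WITH ROLLBACK IMMEDIATE kicks out sessions that would otherwise block the change:

ALTER DATABASE [YourDatabase]
SET READ_COMMITTED_SNAPSHOT ON
WITH ROLLBACK IMMEDIATE;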

In RCSI transactions, data is read from the row version store rather than being accessed directly, so no shared locks are acquired. This may improve concurrency by eliminating conflicts between incompatible locks, but also requires more resources, in terms of memory/disk space and tempdb.

Due to the static nature of a snapshot, while a statement runs it will continue to see the committed state of the database as it was when the statement began, potentially missing all the committed changes that occurred since then.

In RCSI transactions, each statement sees a static committed data set, but that set can change between statements inside the same transaction, because there is nothing to prevent the real data from being changed while the RCSI statement is executing. There are no shared locks, and that’s why RCSI does not provide protection from non-repeatable reads and phantoms: the SQL standard phenomena are defined at the transaction level, and multiple statements within a transaction running under RCSI may see different data, because each statement sees a point-in-time view as of the moment that particular statement started.

Read Uncommitted

Read Uncommitted offers the fewest isolation guarantees. The concurrency effects previously seen in Locking Read Committed are all due to the locking implementation only taking very short-term shared locks when reading data. The read uncommitted isolation level goes one step further, by not taking shared locks at all, resulting in the additional possibility of dirty reads [6].

Read Uncommitted isolation is often used to achieve better performance through the reduction of blocking and deadlocking. By avoiding the need to acquire and release shared locks, incompatible lock problems are minimized.

But, besides the obvious drawback of reading uncommitted data that may never be committed, using this isolation level tells SQL Server that we accept any inconsistencies that might arise as the result of allocation-ordered scans, which are typically only used when there’s a guarantee that the underlying data won’t change during the scan. This may lead to data being missed, counted more than once, or even read in a completely arbitrary state. Paul White gives an interesting example:

“(…) consider a single varchar(max) column that initially contains 10,000 ‘x’ characters. A concurrent transaction updates this value to 10,000 ‘y’ characters. A read uncommitted transaction can read ‘x’ characters from one page of the LOB, and ‘y’ characters from another, resulting in a final read value containing a mixture of ‘x’ and ‘y’ characters. It is hard to argue that this does not represent reading “corrupt” data.”

That’s My Story And I’m Sticking To It

From what we briefly saw about SQL Server’s isolation levels, the following can also be derived:

Isolation Level      Miss Committed Rows   Count Same Row Multiple Times   Point-In-Time View
Serializable         No                    No                              No
Snapshot             Yes                   No                              Yes
Repeatable Read      Yes                   No                              No
Read Committed (SI)  Yes                   No                              Yes
Read Committed       Yes                   Yes                             No
Read Uncommitted     Yes                   Yes                             No

The most interesting aspect for me is that in every isolation level except Serializable, committed rows can be missed, and that exception only exists because of implementation-specific details. As we previously saw, in the case where the logically correct result could be either 600 or 666 rows, the first one would imply that subsequently committed rows were missed, and still, no Serializable requirement would be violated.

With that, if you are okay with the fact that actions may be taken based on obsolete data, good for you. If not, go Serializable or go buy good sleeping pills. You’re welcome.  

Performance Woes & Wows #3: Full-Text Search

In the previous post of Performance Woes & Wows, we discussed the reason why using a LIKE predicate along with a leading wildcard can really hurt the performance of our queries. Accordingly, we saw how using just a trailing % wildcard produces much more efficient searches.

Nevertheless, it’s likely that we can’t restrict every LIKE search to just a trailing wildcard, particularly in cases where searches require a greater level of flexibility. Under those circumstances, we have a couple of alternative approaches that offer more flexibility than predicates with LIKE and just a trailing wildcard, and that perform significantly better than LIKE with both leading and trailing wildcards. In this post, we’re going to talk about one of those possible approaches: Full-Text Search (FTS).

Full-text queries perform linguistic searches over text data in full-text indexes, based on the rules of a particular language. These queries can search for simple or prefix terms, generation terms (inflectional forms of a word), proximity terms, weighted terms and even synonyms for a specific word. In order to accomplish this, a set of predicates (CONTAINS / FREETEXT) and functions (CONTAINSTABLE / FREETEXTTABLE) are used.

Full-Text Querying Process

Inside SQL Server, Full-Text has a separate engine to which the Query Processor sends the full-text portions of a query for processing. The Full-Text Engine can perform the following actions:

  • Word Breaking: The word breaker identifies individual words by determining word boundaries, based on the lexical rules of the language. Each word (also known as token) is inserted into the full-text index using a compressed representation to reduce its size.
  • Stemming: The stemmer generates inflectional forms of a particular word, e.g. “drinking”, “drank”, and “drunk” are various forms of the word “drink”.
  • Stopword Processing: Stopwords contained in a stoplist can be words with meaning in a specific language or tokens without linguistic meaning. For example, in the English language, words such as “a,” “and,” “is,” or “the” are left out of the full-text index since they are known to be irrelevant in a search context.
  • Thesaurus Expansions: FTS queries can search for synonyms of user-specified terms through the use of a Full-Text Search thesaurus. Each thesaurus defines a set of synonyms for a specific language.

The previously mentioned full-text portions are represented in the form of SQL operators that during execution will access the Full-Text Index to retrieve the results.

Original image from docs.microsoft.com

Creating Required Objects

Before executing full-text queries, we must create a Full-Text Catalog and Full-Text Indexes to support our searches.

Full-Text Catalog

The first object we’re going to create is the Full-Text Catalog, which is simply a logical container for Full-Text Indexes:

CREATE FULLTEXT CATALOG DefaultCatalog AS DEFAULT;  

Full-Text Index

The information in a Full-Text Index is used by the Full-Text Engine to compile the respective queries and quickly search a table for particular words, or combinations of words. The process of building a Full-Text Index differs from building other types of indexes: instead of constructing a B-tree structure based on a value stored in a particular row, the Full-Text Engine builds an inverted, stacked, compressed index structure, based on individual tokens from the text being indexed.

Enough chit-chat, let’s create our index.

CREATE FULLTEXT INDEX ON dbo.Entities
(
     Name      LANGUAGE 1033,
     VATNumber LANGUAGE 1033
)
KEY INDEX PK_Entities ON DefaultCatalog
WITH STOPLIST = SYSTEM, CHANGE_TRACKING AUTO;
GO

The index was created on the table Entities, over the columns Name and VATNumber, and LANGUAGE 1033 refers to the English language. On KEY INDEX, PK_Entities is specified, which is the clustered index (an integer identity column) of the table. Note that a FTS key must be a unique, non-nullable, single-column index that is not offline, is not defined on a non-deterministic or imprecise non-persisted computed column, does not have a filter, and has a maximum size of 900 bytes. The index is contained in the previously created catalog and uses the system stoplist. Finally, change tracking is set to automatic, which means that after the initial full population is completed, changes are tracked and propagated automatically.

It’s important to note that only one full-text index is allowed per table.

Clash of The Predicates

In contrast to FTS, the LIKE predicate works on character patterns only, so it will be interesting to compare its results against a FTS predicate, in this case CONTAINS. The table Entities, which will be queried by both predicates, has a total of 550 000 rows.

LIKE

First, we create a stored procedure that receives a varchar parameter, which will be used to query the columns Name and VATNumber of the table Entities.

CREATE PROCEDURE GetEntityByNameOrVatNumber
(@SearchText as VARCHAR(300))
AS
BEGIN
	SELECT
		[Name],
		[VATNumber]
	FROM
		[dbo].[Entities]
	WHERE
		[Name] LIKE @SearchText OR
		[VATNumber] LIKE @SearchText
END

The string ‘%EASY-R%’ with leading and trailing wildcards is passed and one result is retrieved.

In order to perform the search, SQL Server did 5 755 logical reads and consumed more than 2 seconds of CPU.

CONTAINS

Now, let’s create an equivalent stored procedure that uses CONTAINS instead of LIKE:

CREATE PROCEDURE GetEntityByNameOrVatNumber_FTS
(@SearchText as VARCHAR(300))
AS
BEGIN
	SELECT
		[Name],
		[VATNumber]
	FROM
		[dbo].[Entities]
	WHERE
		CONTAINS(([Name], [VATNumber]), @SearchText)
END

This time the input string will be “*EASY-R*”, which more closely mimics the behavior of the previous search. This execution produces the exact same result.
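A side note on syntax: in CONTAINS, a prefix term has to be wrapped in double quotes with a trailing asterisk, and a leading asterisk is not interpreted the way LIKE’s leading % is – which is part of why the two predicates aren’t equivalent. A sketch of what such a call could look like:

-- Prefix term syntax; note the double quotes inside the string
EXEC GetEntityByNameOrVatNumber_FTS @SearchText = '"EASY-R*"';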

But this time, SQL Server only does 4 logical reads and consumes just 188 ms of CPU!

All Hail the King?

It’s obvious that FTS is the winner of this duel, improving logical reads by 99.93% and CPU consumption by 90.74%. Lowering CPU from more than 2 seconds to less than 0.2 seconds is really impressive, as this will greatly increase the capacity of our server to handle concurrent searches.

“This is perfect, let’s replace all LIKE predicates in the world”

Weeeeeell, hold on, dear reader. When we used the CONTAINS predicate and changed the string pattern, I said that it was the one that most closely mimics the LIKE search with both wildcards. This means they are not the same, hence not always producing the same results, which can often generate confusion.

The key here lies in understanding these differences, so we can accurately identify where we can take advantage of FTS. In a future post we will further discuss and uncover these differences.

Is The Statistics Sampling Process Deterministic?

In a previous post we discussed the importance of statistics, their underlying structure, and explored how SQL Server uses them to come up with execution plans. We saw that the query optimizer computes a histogram on the column values in the first key column of the statistics object, which measures the frequency of each distinct value in a data set.

But the information that these histograms hold is intrinsically related to the chosen sample universe. In practice, SQL Server will either do a full scan over the entire column or estimate the value distribution by extrapolating from a statistical sample. In this post, we’re going to very briefly determine whether the process of statistics sampling is deterministic or not, i.e. whether, under the same premises, the chosen sample varies. Place your bets now.

Let’s start by creating a table T, with one clustered index and one non-clustered index. Then, we’ll populate the State column with random values between 1 and 5.

CREATE TABLE T (Id INT IDENTITY, [State] TINYINT)
CREATE CLUSTERED INDEX CIX_T ON T (ID)
CREATE NONCLUSTERED INDEX IX_T_State ON T ([State])

INSERT INTO T SELECT FLOOR(RAND()*(5-1+1)+1); -- random value between 1 and 5
GO 28595716 -- GO <n> executes the preceding batch n times

Now, if we update the statistics using the FULLSCAN option, the histogram will reflect exactly the current value distribution of the State column, and we can easily confirm it by running DBCC SHOW_STATISTICS:

UPDATE STATISTICS T IX_T_State WITH FULLSCAN
DBCC SHOW_STATISTICS ('T','IX_T_State') WITH HISTOGRAM

And then comparing the results with the actual values in the table:

SELECT [State],COUNT(1) Occurrences 
FROM T 
GROUP BY [State]

This level of accuracy allows SQL Server to make better choices regarding execution plans, operators and specific algorithms. However, when our tables have hundreds of millions or even billions of rows, we may be forced to loosen up a little bit, in order to fit these operations into the available maintenance window. Additionally, even if we’re able to do a full scan to populate the histogram with 100% accuracy, if the modification threshold is subsequently reached, a much lower sample will typically be used to rebuild it. When the Query Optimizer determines that statistics might be out-of-date, it updates them (when they are needed for a query plan). But it’s usually possible to improve query performance by updating statistics more frequently and with a greater sample than what auto update statistics would use. Updating statistics ensures that queries compile with up-to-date statistics, but be aware that updating statistics causes queries to recompile.

Going back to our T table, if we update the statistics on the IX_T_State index, which covers 28 595 716 rows, letting SQL Server determine the sample size, we’ll see that a sample of 2.68% was picked and the statistics will hold the following data:
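For reference, a sketch of the commands behind that default-sampled update and the histogram check (same DBCC call as before):

UPDATE STATISTICS T IX_T_State; -- no FULLSCAN/SAMPLE option: SQL Server picks the sample size
DBCC SHOW_STATISTICS ('T','IX_T_State') WITH HISTOGRAM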

In this case, using such a small sample percentage, SQL Server did a pretty awesome job extrapolating the results, but that’s beside the point now. What we want to confirm is whether a new update, over the same data universe, will lead to a distinct histogram, or not:

The results show that both histograms are equal. Changing the sample percentage and the table size won’t alter the behavior: SQL Server uses a deterministic sampling process.

The Deceiving N

In this post, we’ll briefly look into a common misconception about nchar and nvarchar data types.

We begin by setting the breakfast table:

CREATE TABLE Breakfast
(
    Id SMALLINT IDENTITY(1,1),
    Product NVARCHAR(10)
)
INSERT INTO Breakfast (Product) VALUES ('Caviar'),('Blinis')

Later, someone spots our mistake creating the breakfast menu and tries to apply the only logical correction to it:

UPDATE Breakfast SET Product = 'Beluga Gold Vodka' WHERE Product = 'Blinis' 

But, despite the good thinking and good taste, the attempt fails.

Furthermore, looking into the Breakfast table definition can make things even more confusing, because the Length value is slightly greater than the input string.

It’s important to understand that in NVARCHAR(n) and NCHAR(n), the n doesn’t correspond to the number of characters that can be stored, but rather the string length in byte-pairs (from 1 to 4000). This misconception is probably explained by the fact that in characters defined within the Unicode range of 0 to 65 535, one character is stored per byte-pair. However, in higher Unicode ranges (65 536 to 1 114 111) one character may use two byte-pairs.

In our case, one character is being stored per byte-pair, but the discrepancy that we saw in the column information, more specifically in its length, can lead to some confusion. Since we initially set n to 10 in the Product column, those 10 byte-pairs correspond to, and justify, the value of 20 presented in the object definition for the column length. However, because our NVARCHAR column uses 2 bytes per character, the maximum number of characters that can be stored is 10.

If we run the following query:

SELECT
	column_name,
	character_maximum_length,
	character_octet_length
FROM 
	information_schema.columns
WHERE 
	table_name = 'Breakfast' AND
	Column_name ='Product'

The character_maximum_length and character_octet_length for the Product column will confirm exactly what we’ve just seen.

Now the modifications to allow a proper breakfast can be made.
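A minimal sketch of that correction – widening the column so the 17-character product fits (20 is an arbitrary choice):

ALTER TABLE Breakfast ALTER COLUMN Product NVARCHAR(20);

UPDATE Breakfast SET Product = 'Beluga Gold Vodka' WHERE Product = 'Blinis';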

Za zdarovye!

Wait Stats #1: SOS_SCHEDULER_YIELD

In order for you to have a happy life as someone responsible for a database, you need to ensure that it follows a strict diet consisting of well-designed queries and indexes, plenty of CPU, memory and disk capacity. Neglecting any of these aspects will compromise the entire system and consequently your happiness.

Regarding improving CPU usage, we can roll up our sleeves and start tuning our server, our queries and indexes, so they perform better and use less CPU. This is the preferable approach, because the alternative of adding processing power to SQL Server can be quite expensive. Nonetheless, it’s possible to reach a point where your CPUs are simply not enough to support your workload and tuning queries will feel like rearranging chairs on a sinking Titanic.

In theory there are two options when increasing CPU capacity:

  • Adding more CPUs, hence increasing the thread pool and improving parallel queries.
  • Getting faster CPUs, which will improve serial query speed and reduce thread consumption.

In practice, the second option is becoming gradually less common because most people are in the cloud and using virtual machines, which typically imply less flexibility.

But, in order to confirm that the CPU is really the bottleneck, we need to have a closer look. Wait stats are a great starting point and, in this specific matter, wait types like SOS_SCHEDULER_YIELD, CXPACKET, or THREADPOOL can give us pretty useful hints about what’s happening. It’s important to stress that waits are not intrinsically bad; they are an indivisible part of SQL Server’s functioning, and we must be able to separate evil waits from benign waits.

In this post, we’re going to overview the SOS_SCHEDULER_YIELD wait type. However, to have a clearer understanding of the underlying information, we must grasp some basic concepts, namely schedulers and the query lifecycle.

Schedulers and Query Lifecycle

A scheduler is the logical equivalent of a CPU, where worker threads are scheduled to actively carry out the tasks assigned to them. We can view scheduler information using the DMV sys.dm_os_schedulers (a sample query follows the list below). Internally, schedulers have three components:

  • Processor: Physical or logical CPU core that is responsible for processing the thread. Each core will process only one thread at a time.
  • Waiter List: List that contains all threads that are waiting for a resource to become available. The threads in this list are in the Suspended state (we’ll talk more about these states shortly). We can view information about these resource waits in the DMV sys.dm_os_waiting_tasks.
  • Runnable Queue: Encompasses all threads that are waiting for processor time (Runnable state). When priorities are not assigned to workload groups, the processing order in this queue is First-In-First-Out (more info in sys.dm_exec_requests).
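A minimal sketch of how to peek at the schedulers and their runnable queues (the column choice is just a suggestion):

SELECT
    scheduler_id,
    current_tasks_count,     -- tasks currently assigned to the scheduler
    runnable_tasks_count,    -- threads waiting in the runnable queue for CPU time
    active_workers_count,
    work_queue_count
FROM sys.dm_os_schedulers
WHERE status = 'VISIBLE ONLINE'; -- schedulers that actually run user requests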

Accordingly, these components are directly related to the states encompassed by the query lifecycle:

  • Running: Query is being processed on a CPU.
  • Runnable: Query is ready to execute, but it’s waiting for CPU time.
  • Suspended: Query is waiting on a particular resource to become available.

During one execution, a query can go through these states multiple times. This behavior is enforced by the cooperative (or non-preemptive) scheduling used by SQL OS. Basically, only one thread will be running on the scheduler at any given moment, and each one has a quantum of 4 milliseconds, which represents the maximum consecutive CPU time for *any* of them. After these 4 milliseconds, the thread voluntarily yields the CPU and moves back into the Runnable state. Even if there are no other threads waiting, nor any necessity for the thread to wait for a resource (and consequently transition into the Suspended state), the quantum is not exceeded.

It’s important to mention that preemptive function calls work differently, they are selfish and don’t like to cooperate. They are nothing more than calls to functions outside the realm of SQL OS, like Win32 API functions, where the waiting logic rules no longer apply. Under these circumstances, and before the external function is invoked, a sibling worker is signaled and given ownership of the scheduler, while the current thread continues to run. Then, when the thread goes preemptive, its quantum “ends” and no additional accounting is done for CPU usage until it reverts to non-preemptive scheduling. When the preemptive work is done and the thread transitions into cooperative scheduling again, it declares the end of its preemptive wait and goes into the runnable queue. If you want to know more about preemptive scheduling check this awesome blog post from Ewald Cress.

SOS_SCHEDULER_YIELD

This wait type occurs when a thread voluntarily yields the scheduler for other tasks to execute, waiting for its quantum to be renewed.

Since the thread doesn’t wait for any resource and just voluntarily goes into the runnable queue, SOS_SCHEDULER_YIELD won’t imply resource wait time. Instead, it creates signal wait time, which shows how long the thread has waited in the runnable queue to get back into the running state again.
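A quick way to check that split for SOS_SCHEDULER_YIELD is the wait stats DMV, where the resource portion is just the total wait time minus the signal time:

SELECT
    wait_type,
    waiting_tasks_count,
    wait_time_ms,
    signal_wait_time_ms,                                   -- time spent in the runnable queue
    wait_time_ms - signal_wait_time_ms AS resource_wait_ms -- should be near zero for this wait type
FROM sys.dm_os_wait_stats
WHERE wait_type = 'SOS_SCHEDULER_YIELD';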

Problem?

Probably not. As we saw, SOS_SCHEDULER_YIELD just indicates that threads are exhausting their quantum, hence it’s generally a benign wait type. Even if this wait type is prevalent, if signal wait time is low, that just means that there are multiple threads running on the scheduler, and you should look somewhere else to find the root cause of your problems.

Nonetheless, when signal wait time is high things can get interesting. Frequent SOS_SCHEDULER_YIELD waits with high signal wait time may suggest that a substantial number of CPU intensive queries are fighting for CPU and each ends up waiting a long time to be signaled. In this case it’s probably a good idea to look for the top CPU consuming queries and analyze their execution plans.

Contrarily, less frequent waits with high signal wait time indicate that there are not many threads fighting for CPU time. In such cases, an external source may be causing CPU pressure, either preemptive function calls or something completely external to SQL Server.

The Execution Plan Saga #2: Scans

Scan operations are processed by data reading operators that differ according to the underlying data structure:

  • Clustered Index Scan: Scan is done on the leaf-level pages of a Clustered Index
  • Index Scan (NonClustered): Scan is done on the leaf-level pages of a Non-Clustered Index
  • Table Scan: Scan is done on a Heap (table without a Clustered Index)

A Scan often implies reading all pages of a specific index or heap, and there are a couple of reasons why SQL Server may choose to do a Scan instead of a Seek, e.g.:

  • Queries where the estimated number of rows to be read exceeds the threshold above which SQL Server thinks it’s more efficient to do a Scan instead of a Seek.
  • There is no index to support a Seek operation
  • A non-Sargable predicate forces SQL Server to perform a Scan. This behavior was discussed in a previous post.

Scans are not always evil, nor are Seeks always good. In the next sections we’ll briefly overview different types of scans, as well as which properties can help us have a better understanding of them. We’ll be focusing on the rowstore format, and it’s important to emphasize that Columnstore and Memory-Optimized Indexes will not behave entirely the same way. If you want to know more about them, check it here.

Index Scan

As already stated, this operator reads data from a non-clustered index. The general algorithm is quite simple:

  • First row (or batch if we’re on Batch Mode) is read.
  • If there’s a predicate, it is evaluated against the row(s) and matches are returned.
  • Next row / batch gets evaluated and the cycle repeats itself until there are no rows left.

These rows can be read in three distinct ways and this access method is chosen at execution time:

  • Allocation Order Scan: Reads data in an unspecified order. This method is faster for large tables but has a higher startup cost.
  • Ordered Forward Scan: Reads data according to the order specified on the index columns.
  • Ordered Backward Scan: Reads data in the inverse order specified on the index columns.

The predicate mentioned above is one of the properties that is useful to analyze when reading execution plans, as it represents the filter that was applied to the rows. If no predicate is specified, every row is returned. You can also have predicates that don’t filter any rows. I don’t know why, but you can.

The following image shows a query execution that led to a non-Clustered Index Scan. Below the execution plan some of the operator’s properties are visible.

The property that shows which of the three access methods presented above was chosen is Ordered. In this case, this property holds the value “False”, which means that an Allocation Order Scan was done, because no specific order was required. When this value is “True”, another property named Scan Direction will specify the direction of the ordered scan.

The last highlighted property, TableCardinality, shows the total number of rows in the index when the plan was compiled.

Estimates Vs Actuals

Although these aren’t specific to Index Scans, they are important to inspect when troubleshooting execution plans.

The above image shows some of these properties and their respective values. Knowing what each means is the key to have a clear picture of what’s going on.

Estimated Operator Cost: Total operator cost estimation; usually, and as in this case, it’s equal to Estimated CPU Cost + Estimated I/O Cost. Another interesting fact is that, in this case, it’s also equal to Estimated Subtree Cost, simply because there are no other operators in the subtree.

Estimated Number of Executions: Estimated number of executions for the operator, in the current plan. Operators in the inner input of Nested Loops may cause values greater than 1, which is not the case.

Estimated Number of Rows per Execution (formerly known as “Estimated Number of Rows”): Number of rows returned per execution. Multiplying this number by Estimated Number of Executions gives the estimated total number of rows.

Estimated Number of Rows to be Read:  Estimated number of rows to be read by the operator.

Number of Rows Read: Number of rows read by the operator during the index scan, which typically equals the total number of rows in the index. The difference between this property and the Actual Number of Rows property represents the number of rows that were read but not returned, due to the Predicate property.

Actual I/O Statistics: I/O stats at the operator level, translating rows into more specific and measurable values. For example, the 357 Logical Reads presented indicate that 357 8KB pages were read in this execution.

Actual Number of Rows for All Executions (formerly known as “Actual Number of Rows”): Total number of rows returned, across all executions.

Actual Time Statistics: Actual time spent by this operator, divided into “Actual Elapsed CPU Time” and “Actual Elapsed Time”.

But There’s Life Beyond This

In addition to the list presented above, there are other properties that also might be helpful:

  • Output List: Shows the list of all columns regarding the rows returned by the operator.
  • Forced Index: Its value will be “True” if an index is explicitly hinted in the query text, otherwise it will be “False”.
  • ForceScan: Its value will be “True” if FORCESCAN is explicitly hinted in the query text, otherwise it will be “False”.
  • ForceSeek: Its value will be “True” if FORCESEEK is explicitly hinted in the query text, even if the optimizer ended up choosing to perform a scan. Otherwise, it will be “False”.

Be aware that forcing Indexes, Scans or Seeks can lead to unwanted outcomes. Sure, they can be lifesavers and are great tools when applying quick fixes, but we’re fundamentally forcing SQL Server to choose a route that, over time, may not be the best one.

Clustered Index Scan

Not surprisingly, in a Clustered Index Scan data is read from a clustered index. Its behavior is basically the same as in a non-Clustered Index Scan and all previous properties can be seen and used under the same premises.

Table Scan

A Table Scan occurs when a scan is done over a Heap – a table without a clustered index. Its behavior is also similar to the previous scans, but due to the different underlying structure, there are a couple of aspects to consider. Instead of having a B-Tree structure, with root, intermediate and leaf nodes, a heap just has unordered data pages that are not linked, hence sequential access needs to go through the Index Allocation Map (IAM) pages.

Heap structure (left) vs Index structure (right)

If records in a Heap are updated in a way that they no longer fit on the current page, the row is moved to a new page, where it is marked as “forwarded”, and a “forwarding pointer” to this new location is stored in the “old” page. Table Scans read forwarded rows when the scan encounters the original location, skipping them at their current location.

Heaps are suitable under specific conditions, because the absence of a clustered index means no overhead on index maintenance and less overhead on storage. Nonetheless, searching for specific data can be a pain in the buttocks, unless the Heap is small or non-clustered indexes are created. The bottom line here is that you need to be careful, even though Heaps Don’t Lie. If you didn’t get this Shakira reference you are a lucky person. If you did, I’m sorry.

More Threads! Call the Proletariat

When the plan, and specifically the Scan operator, goes parallel, things get more interesting. Pages are handed out to threads on demand: each thread requests more rows and receives data as soon as it has processed all the data from the last page it received. We can also see more information about the data distribution across threads in the operator properties.

Analyzing these numbers will help us troubleshoot potentially skewed distributions.

Parallelism in Table Scans is a bit different due to the specificities already discussed and, in theory, they are more prone to race conditions. See an awesome explanation from Hugo Kornelis here.

And that’s it! Thank you for your time and stay safe🖖


The Execution Plan Saga #1: Getting Started

When we need to troubleshoot a query that is having performance issues, there are two possible routes: one is to empirically modify the query, pushing buttons and knobs until, eventually, things get better. The other is to analyze the execution plan and take specific, focused actions based on the identified problematic areas. Sometimes both approaches can lead to similar outcomes, and the first can even be faster, especially when we’re not used to analyzing execution plans. Nevertheless, the latter is a much more consistent approach, simply because there will be times when blindly applying modifications won’t fix the problem and you’ll need to figure out the root cause in order to solve it.

An execution plan is a blueprint, a set of algorithms and properties that SQL Server processes in order to retrieve the requested data. If we are able to understand these algorithms and their inherent properties, putting them into context, we will be incredibly more efficient solving problems.

Elements of an Execution Plan

As already mentioned, an execution plan comprises operators, with their respective properties, and data flows. Operators represent specific algorithms chosen by SQL Server to perform the logical operations that materialize the specified query. Hence, they serve multiple purposes:

  • Read Data
  • Combine Data
  • Group / Sort Data
  • Manipulate Data
  • Modify Data
  • Performance-Related

On the other hand, arrows represent the data flow between operators. But these operators and arrows alone tell us little; we must analyze properties to have a proper understanding of what’s happening. Each operator has its own specificities, so, inevitably, they will hold distinct properties that are relevant in the context of that specific operation.

Despite that, we can draw some conclusions just by looking at the execution plan, e.g. from the width of data flow arrows, the cost of each operator, or the time spent on each operation.

It’s important to understand that we can look to the plan in distinct moments:

  • Before Execution: Estimated Execution Plan
  • After / During Execution: Actual Execution Plan

The names may suggest that two separate plans are created, but that’s not the case. It’s actually the same plan; the only thing that differs is the amount of information available.

In the Estimated Execution Plan, we can see, for example, how many rows SQL Server estimates that a particular operator will return, but as this is prior to execution, we obviously won’t be able to see the actual number of rows returned.

To compare estimated vs actual row counts, we’ll need to look into the Actual Execution Plan. Since it comprehends runtime statistics, it’s evidently preferable to use it rather than the estimated plan to troubleshoot problems. However, in those dark moments when for some reason we’re unable to get the Actual Execution Plan, having the Estimated Execution Plan is much better than nothing.

Troubleshooting Guidelines

Examining an execution plan can be overwhelming if we don’t know what to look for. The following guidelines are based on a set of sequential steps proposed by Grant Fritchey in his book “SQL Server Execution Plans”.

Warnings

When a yellow warning sign or a red exclamation mark appears attached to an operator, SQL Server is telling us that there is something we should check. Although sometimes the behavior related to the warning message is not the root cause of the problem we’re trying to solve, often it is and it’s just there in plain sight – easy buck.

To view the warning message, you can hover over the operator’s icon or open its properties.

First Operator

The first operator (typically SELECT, UPDATE, INSERT or DELETE) holds very relevant information about the entire plan. The following list includes some properties that will frequently be helpful to us:

  • QueryTimeStats: Values for CPU time and elapsed time (in milliseconds).
  • MemoryGrantInfo: Information about query memory requirements. More details about each property can be viewed in a previous post.
  • WaitStats: Top wait stats on the query execution, including the number of waits (WaitCount) and the total wait time (WaitTimeMs).
  • Parameter List: All parameters, including data type, value at compile time (if the parameter was sniffed), and value at runtime (in an actual execution plan). These values will help us to understand how parameter sniffing impacted the query.
  • QueryHash: Hash value computed from the query text. It is used by SQL Server to find plans for similar or equal queries.
  • QueryPlanHash: Hash value computed from the execution plan. It is used by SQL Server to find similar or equal execution plans.
  • Set Options: Values of specific SET options at compilation time. It’s important to look at these options because different value combinations generate different plans. Sometimes these variations are innocuous and almost invisible, but other times they might shape a completely different plan. For example, ARITHABORT is one of those options that can trigger very distinct behaviors on the execution plan, depending on its value.
  • Type of optimization: Can either be “TRIVIAL” or “FULL”. A “TRIVIAL” optimization means that there were no meaningful execution alternatives for the optimizer to evaluate. It is important to stress that this is not related to workload volume. If we execute SELECT * FROM HugeTable, the query can run for minutes, and still the optimization level will show “TRIVIAL”, just because SQL Server didn’t have any choice but to do a full table/index scan. In a “FULL” optimization, SQL Server will evaluate the maximum possible alternatives, given the time and resource limitations. This optimization level also unlocks other properties, e.g.:
    • Reason For Early Termination Of Statement Optimization: Specifies why the optimizer gave up on optimizing the plan. The possible values are:
      • Good Enough Plan Found: The Optimizer found a plan with a cost below the threshold.
      • Time Out: Compilation time exceeded the maximum optimization time, defined at the start of the optimization phase.
      • Memory Limit Exceeded: Compilation stopped due to memory shortage.

Operator Cost

Commonly, we’ll see expensive operators standing out from the rest. When the cost is concentrated in one or a couple of operators, managing to improve those specific areas will have a substantial impact on the whole execution. Just remember that these costs provide guidelines and aren’t always accurate; some operators even have a fixed cost, which can be very misleading. Another visual hint that is quite useful in the Actual Execution Plan is the time displayed below each operator. This value can be cumulative (including time spent on child operators) or individual when dealing with Batch Mode operators.

Data Flow

Still regarding these visual hints, the thickness of data flow arrows is one of the coolest. Their width is proportional to the row count (based on actual row count when available, or on estimated row count otherwise).

Additionally, we must analyze abrupt transitions, i.e. where wide arrows become narrow, because that may indicate that filtering is occurring too late. We want to “push” filters as far to the right side of the plan as possible.

Estimates Vs Actuals

As already covered in a previous post, statistics are absolutely vital in the process of creating an execution plan. Estimates underpin some algorithm choices and, consequently, bad estimates can lead to bad choices. A particular algorithm might be efficient when dealing with few rows, but extremely inefficient when dealing with a huge number of rows. Nested Loops is a great example of that. That’s why it is so important to keep an eye on estimated vs actual row counts. If a large gap between estimated and actual row counts led to a catastrophic algorithm choice, we’ll need to figure out how to improve those estimates or how to hint SQL Server to choose differently.

Read Operators

Analyzing read operations is an important part of troubleshooting execution plans. Although there is no such thing as an evil operator and we can’t state that a scan is a bad thing or a seek is always good, patterns like a scan that returns few rows or a seek that returns a huge number of rows might be worth checking.

Later on, we’ll analyze read data operators in greater detail.🖖


Performance Woes & Wows #2: LIKE ‘%Make This Query Go Slower Please%’

Sargability is a funny word used to characterize the ability of the engine to take advantage of an index, in order to speed up the execution of a query. Legend says that it is derived from a contraction of Search ARGument ABLE.

Usually, when someone writes a query, the goal is to retrieve the desired information as fast as possible. What’s interesting is that “as fast as possible” can either refer to “execution time” or “time I will spend writing this query”, and we all know which one it is, most of the time. As a rule of thumb, queries should be written in the simplest way possible, the underlying reason being that if it’s easy for you to read, it will also be easier for the Optimizer to read and process.

But that’s not always true. For example, we use functions because they save us effort and improve query readability, but typically they don’t make the Optimizer’s life easier. Au contraire, because they tend to make predicates non-sargable. On other occasions, bighearted developers decide to allow the customer to search on every damn existing column, using LIKE with leading and trailing % wildcards, and that’s also a capital sin of sargability, as we’re going to see further on.

Non-Sargable Predicates

So, let’s talk a bit more about bad ideas. Not those that start with “Hold my beer and watch this”, but instead “It runs fast on my machine, let’s take it to Production”.

Non-sargable predicates have a negative impact on query performance, e.g. by causing poor cardinality estimates or inability to perform index seeks, which consequently will lead to inappropriate plan choices and increased CPU consumption.

The following list includes some of the most common patterns that kill predicate sargability. We’ll very briefly overview them and then focus on one specific case.

ColumnX = Function(ColumnX): As already pointed out, functions can be naughty. One might be tempted to use functions because they encapsulate logic in a way that is very practical for us humans, but this often implies performance loss. The query presented below, where we extract the year from a date column and compare it against a specific value, represents one of those cases. SQL Server won’t be able to use any index efficiently, even if we have an index on that field. We’ll shortly discuss why this happens.

SELECT COUNT(1) FROM Matches WHERE YEAR(Date) = 1999
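For this particular pattern, the usual fix is to rewrite the predicate as a range over the bare column, so an index on Date can actually be used for a seek – a sketch:

-- Sargable version: same logic, no function wrapped around the column
SELECT COUNT(1)
FROM Matches
WHERE [Date] >= '19990101' AND [Date] < '20000101'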

ColumnX = ColumnY + ColumnZ: A similar behavior is observed when we use the result of arithmetic operations between columns to filter on a particular column. It’s important to stress that a predicate containing an arithmetic operation between a column and a constant literal value would also be non-sargable.

SELECT * 
FROM Matches 
WHERE FullTimeGoals = HalfTimeHomeGoals + HalfTimeAwayGoals

@Parameter1 IS NULL OR ColumnX = @Parameter1: The heart of that single query where search dreams come true and someone got really proud of it. It’s possible to search for everything. Slowly. At runtime, SQL Server won’t be able to figure this out, hence the non-sargability. There are a couple of ways to get around this, one of them being to add OPTION (RECOMPILE). This hint enables the Parameter Embedding Optimization, which gives the Parser the ability to replace query parameters with literal constant values during query parsing and enables further complex simplifications, like removing contradictions. Nevertheless, as this hint forces the recompilation of a statement, it introduces some overhead that needs to be taken into account.

CREATE OR ALTER PROCEDURE SearchNonFriendlyMatch 
(
    @HomeTeam INT = NULL,
    @AwayTeam INT = NULL,
    @Date DATE    = NULL,
    @FTG TINYINT  = NULL
)
AS
BEGIN
	SELECT 
		HomeTeam, AwayTeam, [Date], FullTimeGoals
	FROM 
		Matches
	WHERE
		League IS NOT NULL AND
		(@HomeTeam IS NULL OR HomeTeam = @HomeTeam) AND 
		(@AwayTeam IS NULL OR AwayTeam = @AwayTeam) AND 
		(@Date IS NULL OR [Date] = @Date) AND 
		(@FTG IS NULL OR FullTimeGoals >= @FTG)
END
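For reference, a sketch of how the OPTION (RECOMPILE) workaround mentioned above would look – only the SELECT inside the procedure changes:

	SELECT 
		HomeTeam, AwayTeam, [Date], FullTimeGoals
	FROM 
		Matches
	WHERE
		League IS NOT NULL AND
		(@HomeTeam IS NULL OR HomeTeam = @HomeTeam) AND 
		(@AwayTeam IS NULL OR AwayTeam = @AwayTeam) AND 
		(@Date IS NULL OR [Date] = @Date) AND 
		(@FTG IS NULL OR FullTimeGoals >= @FTG)
	OPTION (RECOMPILE) -- parameters are embedded as literals at each compilation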

ColumnX LIKE ‘%Make This Query Go Slower Please%’

When we have this wildcard prefixing and suffixing a string, we’re telling SQL Server to retrieve all records that contain the specified text, regardless of what comes before or after it.

SELECT  TOP 5 * FROM Teams WHERE Name LIKE '%ar%'

The execution of the previous query returns these results:

Id   Country    Name
7    Portugal   SC Beira-Mar
8    England    Cardiff City FC
23   England    Arsenal FC
28   Spain      FC Barcelona
68   Italy      Parma

But why is this bad for performance?

Let’s put ourselves in SQL Server’s shoes and pretend that we’ve received a list containing names of football teams, ordered alphabetically. We need to count how many of these names contain the letters “ar”. We roll up our sleeves and start analyzing all characters from the first record, then the second, then the third, and so on, until we reach the last item in the list. We had no alternative but to scan through all values, simply because the order of this list wasn’t helpful at all. Recalling the table above, it’s easy to understand why: the substring can be anywhere, there is no shortcut, no possibility to discard any subset of these names. SQL Server will have the exact same struggle scanning through the whole index represented by our list.

Now, let’s execute a similar query:

SELECT  COUNT(1) FROM Teams t WHERE t.Name LIKE '%Benf%'

And look into its execution plan:

This is a pretty small table, but it’s enough to prove the point. This query ran in about 600 milliseconds, and IO statistics show 1 Scan Count and 1226 logical reads. The cost is almost entirely in the Index Scan, which was chosen even though SQL Server estimated just 9 rows to be retrieved. Well, that wasn’t really a choice; in this case an Index Scan was done because the non-sargable search prevented SQL Server from using an Index Seek, as we already know.

With that being said, let’s remove the leading wildcard and execute the query:

SELECT  COUNT(1) FROM Teams t WHERE t.Name LIKE 'Benf%'

And this time, the execution plan confirms that the query was much more efficient using the Index Seek.

It has executed in virtually no time and IO statistics show 1 Scan Count and 3 logical reads.

If we return to our ordered list, it’s very easy to understand why the query executed so much faster: now we can benefit from that alphabetical order, going directly to the letter B, looking through a few lines and quickly returning the rows that match the search.

What Should I Do, Then?

We saw that the sargable version of our query ran in virtually no time, while the non-sargable version took about 600 ms, and even though proportionally this gap is huge, users don’t complain about half-second queries. Nevertheless, the real problem with this type of query is that it won’t scale. When you try to run this query against a table with millions of rows instead of a couple of thousand, it will be much more painful. You’ve been warned.

But what can we do to avoid it?

The first and obvious option would be to analyze whether that leading wildcard is really needed.
If you can simply get rid of it, problem solved! If not, there’s still hope…

One way to improve these searches is by using Full-Text Search (FTS). FTS is able to quickly find all matches for a term search in a table because it doesn’t have to scan rows. Instead, it takes advantage of text indexes that store positional information for all terms found in the columns on which we create the text index. Instead of using the LIKE operator, FTS enables the use of CONTAINS/CONTAINSTABLE or FREETEXT/FREETEXTTABLE, as exemplified below:

SELECT COUNT(1) FROM Teams t WHERE CONTAINS((t.Name), @Name) 

FTS has a few limitations and requires some setup effort, but I often work with it and it can be pretty awesome.

Additionally, there are newer, hotter technologies designed to solve these issues. For example ElasticSearch, which allows fast text searches in a very scalable way. It also allows pretty complex searches but, to me, the nicest advantage compared to Full-Text Search is the possibility of taking the heavy processing completely outside SQL Server. CPU cores in SQL Server can be very expensive and should be treated with respect, my boss says.

Now you know why you should avoid queries LIKE ‘%these%’ and you have some ideas to improve them. Hope this was helpful to you 🖖