Should I have multiple TempDB Data Files?

This is a question I have often asked myself after reading an article about TempDB configuration. Most of the time I see only one TempDB data file on a SQL Server instance (that’s the default when SQL Server is installed), but I see lots of suggestions/best practices on the internet for multiple files; one data file per processor core is the most common suggestion. My advice is to leave it alone unless you have a performance problem with TempDB. This post applies to SQL Server 2008 and up (2014 for me currently) and tries to answer the question by summarizing what I have read.
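Before going any further, it’s worth checking how many data files your TempDB has right now. A quick catalog query (this is a sketch; tempdb.sys.database_files is the standard catalog view, and rows with type_desc = 'ROWS' are the data files):

-- list TempDB's files; size is stored in 8 KB pages
SELECT name, physical_name, type_desc,
	size * 8 / 1024 AS size_mb,
	growth
FROM tempdb.sys.database_files;

If this returns a single 'ROWS' file, you have the default configuration.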

This post is a summary of several sources that I want to give credit to up-front; please read these if you want to know more about this subject. Google “Add multiple files to TempDB” if you want to read lots more…

OK, let’s get down to business. First, do I even have a problem with TempDB to worry about? Run this script to check on your largest waits on your instance (since the last time your instance was re-started):

-- from Inside Microsoft SQL Server 2008: T-SQL Querying
-- chapter 4
-- lists largest wait events on the server
-- use this to look for I/O or CPU problems...
WITH Waits AS
(SELECT wait_type, wait_time_ms / 1000. AS wait_time_s,
	100 * wait_time_ms / SUM(wait_time_ms) OVER() AS pct,
	ROW_NUMBER() OVER(ORDER BY wait_time_ms DESC) AS rn,
	100 * signal_wait_time_ms / wait_time_ms as signal_pct
FROM sys.dm_os_wait_stats
WHERE wait_time_ms > 0
AND wait_type NOT LIKE N'%SLEEP%'
AND wait_type NOT LIKE N'%IDLE%'
AND wait_type NOT LIKE N'%QUEUE%'
	/* filter out additional irrelevant waits */ )

SELECT W1.wait_type,
	CAST(W1.wait_time_s AS NUMERIC(12, 2)) AS wait_time_s,
	CAST(W1.pct AS NUMERIC(5, 2)) AS pct,
	CAST(SUM(W2.pct) AS NUMERIC(5, 2)) AS running_pct,
	CAST(W1.signal_pct AS NUMERIC(5, 2)) AS signal_pct

FROM Waits AS W1
JOIN Waits AS W2 ON W2.rn <= W1.rn
GROUP BY W1.rn, W1.wait_type, W1.wait_time_s, W1.pct, W1.signal_pct
HAVING SUM(W2.pct) - W1.pct < 80 -- percentage threshold
	OR W1.rn <= 5


If you have a potential TempDB problem then this query will return PAGELATCH-type waits and/or PAGEIOLATCH waits near the top. If you don’t see page latch waits in your results then STOP HERE.

Second, are these waits coming from TempDB? I use this script (from Robert Davis) to check. This query returns currently waiting tasks, so you may have to run it many times at different points in the day to capture the TempDB waits:

-- from Robert Davis’ blog post
-- Breaking down TempDB Contention
-- notes: session_id < 50 are system tasks
-- resource_description holds different values for different wait types;
--	This query uses functions to parse it
--	that are specific to the page latch wait types.

SELECT session_id,
	ResourceType = CASE WHEN (CAST(RIGHT(resource_description,
	   LEN(resource_description) - CHARINDEX(':', resource_description, 3))
	   AS INT) - 1) % 8088 = 0 THEN 'Is PFS Page'
	WHEN (CAST(RIGHT(resource_description,
	   LEN(resource_description) - CHARINDEX(':', resource_description, 3))
	   AS INT) - 2) % 511232 = 0 THEN 'Is GAM Page'
	WHEN (CAST(RIGHT(resource_description,
	   LEN(resource_description) - CHARINDEX(':', resource_description, 3))
	   AS INT) - 3) % 511232 = 0 THEN 'Is SGAM Page'
	ELSE 'Is Not PFS, GAM, or SGAM page' END

FROM sys.dm_os_waiting_tasks
WHERE wait_type LIKE 'PAGE%LATCH_%'
	AND resource_description LIKE '2:%';

If there are PAGELATCH-type waits and they are on TempDB (database ID 2) then this query will show that. If you have this kind of blocking, then multiple data files for TempDB may be the answer (disclaimer: your results may vary here! See Paul Randal’s myth #12 for a deeper discussion on this).

How many data files? One per CPU core is the common suggestion, but I think that’s way too many. Paul Randal suggests ¼ to ½ the number of cores, and I think even that may be too many for today’s servers. If you have many cores (like 64 or more!) then I’d start with a small number, maybe 8 files; try it out and check your waits again after a period of time. As Paul notes, too many TempDB data files can actually cause performance issues if your workload produces a lot of sorting in TempDB. Writing out the data using the round-robin allocation that SQL Server uses may just cause you more problems, so I’d keep the number of files small. Here’s one of the latest suggestions that I could find from Microsoft on this topic:

“As a general rule, if the number of logical processors is less than or equal to 8, use the same number of data files as logical processors. If the number of logical processors is greater than 8, use 8 data files and then if contention continues, increase the number of data files by multiples of 4 (up to the number of logical processors) until the contention is reduced to acceptable levels or make changes to the workload/code.”

If you need to add data files, should they be placed on different drives? That’s a good question. There is no magic answer to this. I would add them on the same drive first; I’d only split to different drives if I still have issues.
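If you do decide to add a file, the T-SQL is straightforward. This is a sketch only: the logical name, path, and sizes below are placeholders, and the size and growth settings should match your existing TempDB data files:

-- add one TempDB data file (name/path/sizes are examples - match your existing files)
ALTER DATABASE tempdb
ADD FILE
(
	NAME = tempdev2,
	FILENAME = 'T:\TempDB\tempdev2.ndf',
	SIZE = 4096MB,
	FILEGROWTH = 512MB
);

Repeat for each additional file, keeping all data files the same size.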

Other semi-random notes that apply:

  • Don’t shrink TempDB; TempDB is re-built (to its original size) each time SQL Server is re-started.
  • Set the initial size of TempDB to the largest size that you think you will need (no, there is no magic formula for the size; I have used 20% of the largest database or the size of the largest table; heavy use of the version store is also a big factor).
  • Add files of equal size (for better round-robin allocation).
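Setting the initial size (and keeping the files equal) is done with MODIFY FILE. Again a sketch; tempdev is the default logical name of the first TempDB data file, and the sizes are examples:

-- set the initial size and growth of the primary TempDB data file
ALTER DATABASE tempdb
MODIFY FILE (NAME = tempdev, SIZE = 4096MB, FILEGROWTH = 512MB);

The new size takes effect immediately if you are growing the file; remember that TempDB is re-created at this configured size on every restart.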
