SQL Server Best Practices
What are they and why are they important?

Source Of This List

This list is taken from the report generated by SQL CoPilot. If you have any suggestions for additional best practices please let us know and we'll add them here and to SQL CoPilot itself.

This is a list of 50 SQL Server best practices along with their importance and a brief explanation of each.

Security Best Practices

HIGH
Rename and/or disable the sa login

The sa login is installed by default on all versions of SQL Server and has sysadmin privileges.

This login is well known and often the first target for attackers. It is best practice to rename or disable sa.

To rename sa:

ALTER LOGIN sa NAME = 'new name';

To disable sa:

ALTER LOGIN sa DISABLE;
HIGH
Do not run the SQL service account under Local System

Local System, named NT AUTHORITY\System, used to be the default service account up to SQL Server 2005 and is still available as an option for backwards compatibility.

Do not use this account. It is a member of Local Administrators on the computer and would allow an attacker to do a lot of damage if they were able to run a Windows command in the context of the service account (for example via xp_cmdshell).

A number of other options are now available, including virtual service accounts, managed service accounts, local users that are NOT members of Local Administrators, and domain users with minimal privileges.

HIGH
Check that BUILTIN\Administrators has been dropped

Earlier versions of SQL Server automatically created the BUILTIN\Administrators group as a login and added it to the sysadmin server role.

This is clearly not a sensible thing to do as this group includes everyone who is a local administrator on the server.

To drop the BUILTIN\Administrators login:

DROP LOGIN [BUILTIN\Administrators];
MEDIUM
Audit failed logins

If you don't know that attempts have been made to log in to your instances, you are operating in the dark. At the very least, failed logins should be audited. It is also good practice to audit successful logins, although some applications may generate thousands of entries in the error log, so SQL CoPilot checks for failed login auditing only.

To enable auditing, use the Server Properties dialog in SSMS.
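If you prefer T-SQL, the same setting can be changed via xp_instance_regwrite, an undocumented but widely used procedure - a sketch only, assuming the default registry location. A value of 2 audits failed logins only, 3 audits both. The change takes effect after the service is restarted.

EXEC master.dbo.xp_instance_regwrite
    N'HKEY_LOCAL_MACHINE',
    N'Software\Microsoft\MSSQLServer\MSSQLServer',
    N'AuditLevel',
    REG_DWORD,
    2;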

MEDIUM
Use Windows Authentication mode

SQL Server allows two methods of logging in. Windows authentication uses the Windows account of the user or service and is the most secure, as it can use Kerberos.

SQL authentication relies on the user supplying a login name and password. Unless the connection is encrypted, the password is exposed on the network, and if used by an application it also has to be stored somewhere.

SQL authentication can be disabled by setting SQL Server to accept Windows authentication only, as opposed to Mixed Mode authentication where both are allowed.

To change this setting, use the Server Properties dialog in SSMS.
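The authentication mode can also be changed with the same undocumented registry procedure - a sketch only; 1 is Windows authentication only, 2 is Mixed Mode, and a service restart is required for the change to take effect.

EXEC master.dbo.xp_instance_regwrite
    N'HKEY_LOCAL_MACHINE',
    N'Software\Microsoft\MSSQLServer\MSSQLServer',
    N'LoginMode',
    REG_DWORD,
    1;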

MEDIUM
SQL logins must enforce password policy

By ensuring that the password for a SQL login adheres to Windows complexity requirements you are significantly enhancing security.

Unfortunately, the fact that the setting is enabled does not guarantee that complex passwords are being used, as it may have been enabled after the password was set and existing passwords are not re-checked.

If you are at all unsure that complex passwords have been used, set the "Check Password Policy" option against all SQL logins and then force a password change on next login.
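As a sketch, you can find SQL logins that do not enforce the policy and then enable it and force a change - the login name and password below are placeholders only:

SELECT name FROM sys.sql_logins WHERE is_policy_checked = 0;

ALTER LOGIN [app_login] WITH PASSWORD = 'N3w-Str0ng-P@ssword!' MUST_CHANGE, CHECK_POLICY = ON, CHECK_EXPIRATION = ON;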

MEDIUM
Limit permissions for the public server role

By default all logins are members of the public server role. This role can connect to a few endpoints and has the VIEW ANY DATABASE permission.

These permissions are necessary (revoking VIEW ANY DATABASE causes issues with SSMS), but you should not grant the role any additional permissions.

If you want a large group of logins to have the same permissions consider using Windows security groups instead.
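A quick way to review what public can currently do at the server level - a sketch:

SELECT pe.class_desc, pe.permission_name, pe.state_desc
FROM sys.server_permissions AS pe
JOIN sys.server_principals AS pr ON pe.grantee_principal_id = pr.principal_id
WHERE pr.name = 'public';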

MEDIUM
Don't enable cross database ownership chaining at the server level

Cross database ownership chaining is an extension of ownership chaining that allows it to cross database boundaries. A cross-database ownership chain is valid provided that either a user in both databases maps to the same login, or the guest user has not been disabled in the second database in the chain.

It is safer to disable cross database ownership chaining as a server-wide setting (this is the default) and to enable it only on the databases that need it.

To disable the server-wide setting:

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'cross db ownership chaining', 0;
RECONFIGURE;

To enable cross database ownership chaining on a database (both databases in a chain need it enabled):

ALTER DATABASE databasename SET DB_CHAINING ON;

MEDIUM
Disable the guest user in all non-system databases

The guest user exists in all databases and allows any login without a specific user in the database to connect to it.

SQL Server best practice is to disable the guest user in all databases unless specifically required. It cannot be disabled in master and tempdb, and must not be disabled in msdb or you get errors using Management Studio.

To disable the guest user in a database:

REVOKE CONNECT FROM guest;
MEDIUM
Create asymmetric keys with 1024-bit RSA encryption or higher

When creating an asymmetric key, you specify the algorithm used by the encryption. The current options are RSA_4096, RSA_3072, RSA_2048, RSA_1024 and RSA_512.

512 and 1024 have been deprecated in the latest versions of SQL Server and can only be used if the database compatibility level is 120 or lower. RSA_1024 is no longer considered strong, so prefer RSA_2048 or above where possible.

SQL best practice is to use the highest level available.
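For example, to create a key with a 2048-bit algorithm (the key name is illustrative only):

CREATE ASYMMETRIC KEY ExampleAsymKey WITH ALGORITHM = RSA_2048;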

MEDIUM
Do not create symmetric keys with RC4 or RC4_128 algorithms

Use of the RC4 and RC4_128 algorithms is deprecated; they can only be used if the database compatibility level is 90 or 100. Use one of the AES algorithms instead.
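For example (the key name and password are placeholders; a key can equally be protected by a certificate):

CREATE SYMMETRIC KEY ExampleSymKey
    WITH ALGORITHM = AES_256
    ENCRYPTION BY PASSWORD = 'UseAStr0ngP@ssword!';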

LOW
Be aware of any stored procedures set to run automatically on startup

There are plenty of legitimate reasons for wanting to run a stored procedure when the SQL service starts.

It may be an application requirement. It may be something you have set up as part of your monitoring. But equally it could be something more sinister put there by person or persons unknown!

Be aware of any startup procs and make sure you understand why they are there.
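To list them (startup procedures live in master) - a sketch:

SELECT name
FROM master.sys.procedures
WHERE is_auto_executed = 1;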

To disable a startup procedure:

EXEC sp_procoption @ProcName = 'procedurename', @OptionName = 'startup', @OptionValue = 'off';
LOW
Be aware of any SQL Agent jobs scheduled to run automatically on startup

There are plenty of legitimate reasons for wanting to run a SQL Agent job when the SQL service starts, but it is important that the DBA is fully aware that this occurs and understands the reasons behind it.
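A sketch of a query to find such jobs (schedule freq_type 64 means "start automatically when SQL Server Agent starts"):

SELECT j.name
FROM msdb.dbo.sysjobs AS j
JOIN msdb.dbo.sysjobschedules AS js ON j.job_id = js.job_id
JOIN msdb.dbo.sysschedules AS s ON js.schedule_id = s.schedule_id
WHERE s.freq_type = 64;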

LOW
Manage privileged logins via Windows security groups

It is very easy when, for example, a new DBA joins your company to create a new login and grant them the permissions they need for the job. But imagine if that is repeated across hundreds or thousands of servers. Imagine the problem of revoking that access when they leave, or move elsewhere within the company.

This becomes even more important if the job requires one of the privileged levels of access, such as CONTROL SERVER permission or membership of the sysadmin or securityadmin server roles.

Much easier is to create one or more Windows security groups and grant these groups the access they need. Now, as employees come and go, their permissions are managed via Active Directory Users and Computers - a single place where they can be added to and removed from the security groups.

LOW
Endpoints should be owned by sa, not by a user

When creating a new endpoint, for example for AlwaysOn, the owner by default is the login who created it. Best practice is to explicitly specify sa at creation time.

Endpoint owners cannot be changed after the endpoint is created. The endpoint must be dropped and recreated.
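To review endpoint ownership - a sketch:

SELECT e.name AS endpoint_name, sp.name AS owner_name
FROM sys.endpoints AS e
JOIN sys.server_principals AS sp ON e.principal_id = sp.principal_id;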

LOW
Do not enable xp_cmdshell unless specifically required

xp_cmdshell is an extended stored procedure in the master database that allows operating system commands to be run under the context of the service account.

The existence of this command is one of the reasons why the service account should have minimal privileges.

The reason this is listed here as having low importance is that any sysadmin could in theory enable it, run xp_cmdshell and then disable it again, so having it disabled is no guarantee it will not be used.

To disable xp_cmdshell:

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'xp_cmdshell', 0;
RECONFIGURE;
LOW
Only set database TRUSTWORTHY option when required

Setting the TRUSTWORTHY option allows a database to access resources beyond the scope of the database and to access assemblies with EXTERNAL_ACCESS and UNSAFE permissions.

For most databases this is unnecessary, and there is no need for the property to be set. It is always set on the msdb database and should not be changed.

To set TRUSTWORTHY off:

ALTER DATABASE databasename SET TRUSTWORTHY OFF;
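To find databases with the option set - a sketch (msdb is excluded as it is set by design):

SELECT name
FROM sys.databases
WHERE is_trustworthy_on = 1
  AND name <> 'msdb';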

Performance Best Practices

HIGH
Enable instant initialization for the service account

When a file is created by SQL Server, such as when restoring or creating a database, or when a file autogrows or is grown manually, it has to be zero-filled (initialized). This can take a while for large files. Fortunately, Windows Server 2003 onwards has a feature to speed this up.

By granting the Perform Volume Maintenance Tasks right to the service account, instant initialization is enabled. Windows returns immediately and "pretends" that the file has been initialized.

This works for data files only - log files do not benefit - but it is nevertheless a big performance boost and should be enabled at all but the most security-conscious companies. Security is the very reason it is not enabled by default when SQL Server is installed: because the files are not actually initialized, the original data on the disk is still present - at least until SQL Server has filled the file with real data. For the vast majority of companies this is not an issue, and the performance benefits are too good to ignore.

For SQL Server 2016 SP1 and later, SQL CoPilot is able to query sys.dm_server_services to find out if instant initialization is enabled.
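You can run the same check yourself on those versions - a sketch:

SELECT servicename, instant_file_initialization_enabled
FROM sys.dm_server_services;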

HIGH
Set "max server memory" to a sensible value. Do not leave it at the default

You have lots of lovely memory in your servers and SQL Server LOVES memory. The trouble is, so does the operating system and so do any applications running on there.

OK, so that's another best practice - try to keep your SQL Servers dedicated and don't allow other applications on there - but that's not what this recommendation is about.

To allow SQL Server to use as much memory as possible without starving the OS (which may otherwise perform a hard memory trim and/or send a low memory notification to SQL Server, causing a never-ending grow/shrink/grow cycle), you need to set max server memory to a sensible value. A good value is somewhere around 80% - 90% of server memory, but this will depend on how much server memory there is and what else is running.

The SQL Server Best Practices Report highlights any instance where max server memory is outside the range 78% - 92% of server memory, although that doesn't take into account multiple instances being installed.
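To set the value - a sketch, where 51200 MB is just an illustrative figure for a dedicated server with 64 GB of RAM:

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)', 51200;
RECONFIGURE;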

Note that there is no entry in this list for "lock pages in memory". This is because Microsoft stopped recommending it due to improvements in memory management in Windows 2008+. However there is debate about whether that setting might still be useful.

HIGH
All schedulers should be online on dedicated SQL Servers

If some are offline, this may be intentional due to multiple instances being installed or other applications running on the server.

Ideally you will be using a dedicated server for each instance, in which case offline schedulers may be throttling performance.

Another reason could be licensing - for example, a Server+CAL license on a server with more than 20 cores (SQL Server 2012 onwards limits Enterprise Server+CAL licensing to 20 cores). That is a different problem entirely and you may not be sufficiently licensed for the hardware. Check the SQL Server Licensing Guides for more information.
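A sketch of a query to see how many schedulers are online and offline:

SELECT status, COUNT(*) AS scheduler_count
FROM sys.dm_os_schedulers
WHERE status IN ('VISIBLE ONLINE', 'VISIBLE OFFLINE')
GROUP BY status;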

HIGH
Do not use the AUTO_SHRINK database option

The AUTO_SHRINK database option may be useful in a development environment but it should never be used in production, especially if database files are set to autogrow as well.

The constant grow/shrink/grow cycle that results can cause significant physical fragmentation that will affect performance.

The increased I/O during the shrink will also affect performance. Leave this setting switched off.

To turn off AUTO_SHRINK:

ALTER DATABASE databasename SET AUTO_SHRINK OFF;
HIGH
Do not use the AUTO_CLOSE database option

This is probably the only option with a bigger impact than AUTO_SHRINK.

Setting AUTO_CLOSE on causes the database to be shut down whenever there are no connections. When a new connection is made, it has to start up again. This is not a free operation in terms of performance, and it also adds unnecessary entries to the error log (although that can be a useful warning sign that the option is enabled).

Worse, when the database is shut down, all of its data is cleared from the buffer pool and has to be read from disk again by new connections before it can be accessed. There is no doubt that turning this option on can cripple the performance of any application using the database.

To turn off AUTO_CLOSE:

ALTER DATABASE databasename SET AUTO_CLOSE OFF;
MEDIUM
The "min server memory" setting should NOT be set to the same as "max server memory"

SQL Server needs the flexibility to respond to low-memory notifications from the OS, and setting "min server memory" equal to "max server memory" removes that flexibility.

MEDIUM
Try to keep to Microsoft recommended values for "max degree of parallelism"

Control over parallelism may be necessary depending on the workload, and limiting the max degree of parallelism can help on some servers. The current recommendations from Microsoft are below:

NUMA Architecture      Logical Processors   MAXDOP setting
Single NUMA node       Less than 8          At or below # of logical processors
Single NUMA node       8 or more            8
Multiple NUMA nodes    Less than 8          At or below # of logical processors per NUMA node
Multiple NUMA nodes    8 or more            8

This report highlights "max degree of parallelism" if greater than 8, or equal to 1 (which is rarely the correct value, especially as it can affect performance of maintenance tasks).
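To change the setting - a sketch, with 8 as an example value for a large single NUMA node:

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max degree of parallelism', 8;
RECONFIGURE;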

MEDIUM
"cost threshold for parallelism" defaults to 5, which is far too low. Consider increasing it

"Cost threshold for parallelism" is a setting that helps the engine decide whether or not to use parallelism. It defaults to 5 and has done for years. The value was based on the performance of the ancient hardware used at the time. But servers are now so much more powerful that a significantly larger value is more appropriate. This can improve performance by eliminating some unnecessary parallelism. Jonathan Kehayias has blogged a script that will help you to find the optimal value.

To change the setting:

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'cost threshold for parallelism', 20;
RECONFIGURE;
MEDIUM
"optimize for ad hoc workloads" should nearly always be enabled, with one small proviso

See optimize for ad hoc workloads for more information.
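In short, the setting caches a small plan stub for the first execution of an ad hoc query and only caches the full plan on the second execution, which reduces plan cache bloat from single-use queries. To enable it:

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'optimize for ad hoc workloads', 1;
RECONFIGURE;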

MEDIUM
The "max worker threads" setting should be set to 0

Keeping the default of 0 allows SQL Server to automatically configure a value on startup. This value depends on the number of processors.

There may be situations where performance can be improved by setting this to a specific value, but this is rare and should only be done by an expert.

If you set the value too low you can cause thread starvation, and setting it too high wastes available memory address space.

MEDIUM
Be aware of the number of tempdb data files and the reason for the choice

The default of 1 will probably be fine for smaller workloads or those with low tempdb usage.

High usage can cause contention on the SGAM and PFS allocation pages, in which case try increasing the number of files until the contention has gone.

Too many tempdb files can also affect performance, due to the additional overhead of allocating pages in multiple files. Ignore advice on the internet to create one file per core!

The SQL CoPilot Best Practices Report highlights instances with just 1 tempdb data file (when there are 4 or more schedulers) and instances where the number of files is more than half the number of schedulers. That doesn't mean anything is wrong, but it highlights the need to take a closer look. As with all configuration decisions it is important that you understand why they have been made.

MEDIUM
If you have multiple tempdb data files they should be equally sized and have equal file growth

The reason for multiple tempdb files, as explained above, is to reduce allocation contention. Having a mismatch in file size reduces the effectiveness of this as the largest file will get more than its fair share of allocations.

Ensure that the initial size of each file is the same and that the file growth is also the same.
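A sketch, assuming two files with the default logical names tempdev and temp2 and illustrative sizes:

ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, SIZE = 4096MB, FILEGROWTH = 512MB);
ALTER DATABASE tempdb MODIFY FILE (NAME = temp2, SIZE = 4096MB, FILEGROWTH = 512MB);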

MEDIUM
Blocked process threshold should be 0 or 5+

The blocked process threshold controls the generation of a blocked process event, which can trigger an alert and be viewed in Profiler or in a server-side trace.

A value of 0 is the default, and this turns off the event generation. Values greater than 0 are the number of seconds of blocking before an event is generated.

Do not set this to 1, 2, 3 or 4. This can cause the deadlock monitor background thread to run continuously, which impacts server performance.
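To set a 5-second threshold - a sketch (on older versions the option may be named 'blocked process threshold' without the '(s)'):

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'blocked process threshold (s)', 5;
RECONFIGURE;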

MEDIUM
Clear backup history from msdb with a daily or weekly scheduled job

On a busy server with a lot of databases and regular log backups, the backup history in msdb can quickly take up a lot of space.

Disk is cheap of course but by default system databases, including msdb, are stored on the C drive, and this rarely has much free space. It is also more likely to fill up over time.

Querying and updating msdb can be slow when it gets big, which may affect run times of the backup jobs.

The report highlights any server where msdb has more than 90 days of backup history.
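A sketch of the statement such a job could run, keeping 90 days of history:

DECLARE @cutoff datetime = DATEADD(DAY, -90, GETDATE());
EXEC msdb.dbo.sp_delete_backuphistory @oldest_date = @cutoff;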

LOW
Remember to run RECONFIGURE after changing a configuration setting via sp_configure

If you have changed a configuration setting via sp_configure and not run the RECONFIGURE command, the change will not be active.

Worse, the next time the service is restarted it will become active and may have an unexpected impact on performance or availability.

Make sure you know which changes have been made, and why. Either run RECONFIGURE or revert the configuration change.

LOW
Use server triggers sparingly and make sure you understand what they do and why they have been created

Server triggers may have been created for a number of reasons: auditing logins to a table, or preventing logins from certain hosts or at certain times.

Make sure you are aware of the existence of server triggers and check the code to ensure they execute efficiently.

LOW
Enable default backup compression

Writing compressed backups uses significantly less I/O at the expense of a small increase in CPU. This can make a noticeable difference in the time taken to perform a backup. It also means restores and copies of the backup files across the network are faster.

It is not necessary for the configuration option to be set - it is also an option in the BACKUP command - but setting it is best practice as it ensures ad hoc backups are compressed, and guards against the option being forgotten in the scheduled backups.
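To enable the default:

EXEC sp_configure 'backup compression default', 1;
RECONFIGURE;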

LOW
Use extended events instead of SQL Profiler or server-side traces on SQL Server 2012+

SQL Profiler has been deprecated and may not be available in future versions of SQL Server. It is not unknown for it to have a performance impact on the systems it is monitoring. An alternative is to use server side traces but these can also affect performance.

The replacement is Extended Events. These were introduced in SQL Server 2008 and have been developed enough for most purposes since SQL Server 2012. Extended events can capture much more information, especially at a lower level, and in the main are more lightweight than traces. It is still possible to affect performance, especially if there are several extended event sessions running simultaneously, so be careful.
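As a sketch of how lightweight a session can be, the following (session and file names are illustrative only) captures higher-severity errors to a file target:

CREATE EVENT SESSION [capture_errors] ON SERVER
ADD EVENT sqlserver.error_reported (
    ACTION (sqlserver.sql_text, sqlserver.client_hostname)
    WHERE severity >= 16
)
ADD TARGET package0.event_file (SET filename = N'capture_errors.xel');

ALTER EVENT SESSION [capture_errors] ON SERVER STATE = START;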

Availability Best Practices

HIGH
Regularly backup production databases, with full and (if necessary) differential backups

Smaller databases can have a full backup daily. Bigger databases may take too long to back up daily and can be backed up weekly or monthly, with daily differential backups.

The SQL Server Best Practices Report warns if there has been no full or differential backup for more than 48 hours.
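For example (the database name and paths are placeholders):

BACKUP DATABASE databasename TO DISK = N'X:\Backups\databasename_full.bak';
BACKUP DATABASE databasename TO DISK = N'X:\Backups\databasename_diff.bak' WITH DIFFERENTIAL;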

HIGH
Use FULL recovery for production databases and perform regular log backups

Some production data is fairly static - a configuration database, for example, may almost never be updated - but most databases are regularly updated, and if you ask the users how much data loss is acceptable you will normally get answers ranging from "None" to "A few hours".

Using FULL recovery and performing regular log backups allows you to carry out a point-in-time restore of a database. Being able to ask a user exactly when she deleted all rows from the OrderLines table and then confidently restore to just after that time? Priceless!
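A sketch (the database name and path are placeholders):

ALTER DATABASE databasename SET RECOVERY FULL;
BACKUP LOG databasename TO DISK = N'X:\Backups\databasename.trn';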

HIGH
Regularly perform a DBCC CHECKDB against all production databases

One of the responsibilities of a DBA is to guarantee the integrity of the databases. There are a number of best practices that help with this. One of them is to perform a regular DBCC CHECKDB. Having database corruption detected by a user instead of by your regular CHECKDB job may be job threatening!

The best practices report lists all databases that haven't had a successful DBCC CHECKDB for more than 7 days.
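To run a check against a database:

DBCC CHECKDB (N'databasename') WITH NO_INFOMSGS;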

HIGH
Do not store database files on the C: drive

By default the SQL Server setup program will try to put system databases on the C: drive. Make sure you place them on a different physical disk (or on a SAN if you use one). The same goes for non-system databases.

By placing files on the C: drive you make it a single point of failure. If corruption occurs you have lost both the operating system AND the SQL Server data files, and this is even more likely if the files are for the system databases. Windows itself tends to use more disk space over time, as updates and patches are installed, so running out of disk space becomes increasingly likely.

In addition to the availability issues, there is also performance to consider. The C: drive often has reduced throughput and higher latency than other dedicated disks and as it is shared with the operating system, there is other activity contending for I/O.
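A sketch of a query to find any database files currently sitting on the C: drive:

SELECT DB_NAME(database_id) AS database_name, name AS logical_name, physical_name
FROM sys.master_files
WHERE physical_name LIKE N'C:\%';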

MEDIUM
Enable Remote Dedicated Admin Connection

The Dedicated Admin Connection (DAC) allows administrators to connect to diagnose problems even when the server is not responding to other connections.

This can be done by using Remote Desktop to access the server and then using SSMS (if installed) or sqlcmd to connect to the instance.

However, there is also the option of connecting to the DAC remotely, without having to access the server first. This requires the instance-level configuration option "remote admin connections" to be set.

To set it:

EXEC sp_configure 'remote admin connections', 1;
RECONFIGURE;

MEDIUM
SQL Agent jobs must not be owned by users

It is very easy when creating a job to forget that the job owner defaults to the person creating it.

This can go unnoticed for years until the person leaves the company and their Windows account is disabled. Then the job suddenly stops working!

Try to get into the habit of setting the job owner to sa each time. Note that this report will highlight all instances with jobs not owned by sa. This will include those where the job has been created by a service account, and you can normally safely ignore these. For example, SSRS subscriptions are scheduled by SQL Agent jobs and these will be owned by the SSRS service account.
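A sketch to find jobs not owned by sa and to fix one (the job name is a placeholder):

SELECT j.name, SUSER_SNAME(j.owner_sid) AS owner_name
FROM msdb.dbo.sysjobs AS j
WHERE SUSER_SNAME(j.owner_sid) <> 'sa';

EXEC msdb.dbo.sp_update_job @job_name = N'jobname', @owner_login_name = N'sa';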

MEDIUM
If Transparent Data Encryption is being used, check that the certificates and master keys are backed up

If you need to restore an encrypted database to another server you must either create a certificate from the backup, or restore the service master key from the original server. If you cannot do this the data cannot be retrieved and you are in big trouble!

Make sure that you fully understand the concepts behind Transparent Data Encryption, that you have backed up the relevant certificates and master keys, and that you have rehearsed the procedure for recovering the database on a new server.
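A sketch of the backups involved, with placeholder certificate name, paths and passwords:

USE master;
BACKUP SERVICE MASTER KEY TO FILE = N'X:\KeyBackups\service_master_key.bak'
    ENCRYPTION BY PASSWORD = 'UseAStr0ngP@ssword!';
BACKUP CERTIFICATE TDECert TO FILE = N'X:\KeyBackups\TDECert.cer'
    WITH PRIVATE KEY (FILE = N'X:\KeyBackups\TDECert.pvk',
                      ENCRYPTION BY PASSWORD = 'UseAStr0ngP@ssword!');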

MEDIUM
Page verify option should be set to CHECKSUM for all databases

CHECKSUM has been the default for new databases (and hence the model database) for the last few versions of SQL Server but if you have databases that have been upgraded from older versions you may find them using TORN_PAGE_DETECTION or NONE.

CHECKSUM is still no guarantee of avoiding corruption but it significantly reduces the likelihood.
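To change it for a database:

ALTER DATABASE databasename SET PAGE_VERIFY CHECKSUM;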

MEDIUM
SQL Agent alerts should be created for high severity errors

You may have monitoring in place for errors and think that creating additional alerts is unnecessary, but there is no harm in a belt-and-braces approach.

Some people recommend alerts for severity 19+, but you may want to focus on 21+, as some errors - for example Kerberos authentication errors - are severity 20, and alerting on these might result in too many emails.

For this reason, this report checks for alerts for severity 21+. If you don't have these, you probably don't have the lower severity ones anyway.
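A sketch of creating one alert and attaching an email notification (the alert and operator names are placeholders; repeat for severities 22 to 25):

EXEC msdb.dbo.sp_add_alert
    @name = N'Severity 21 error',
    @severity = 21,
    @delay_between_responses = 60;

EXEC msdb.dbo.sp_add_notification
    @alert_name = N'Severity 21 error',
    @operator_name = N'DBA Team',
    @notification_method = 1;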

LOW
Be aware where a database collation differs from the instance collation

For many applications, it is normal for database collation to be different from the instance collation - SharePoint, for example - but for others it can cause errors. Everything could be fine for a long time and then you start getting collation conflict errors.

This is usually because an execution plan has changed and is now performing operations that join data from a user table with a temp table in tempdb. tempdb gets its collation from the model database, which is set when SQL Server is installed.

If this is a problem for you, you can rebuild the instance with a matching collation (using the installation setup.exe) or, if the instance is shared with other databases, move the affected database to an instance with the same collation.
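A sketch of a query to spot databases whose collation differs from the instance collation:

SELECT name, collation_name
FROM sys.databases
WHERE collation_name <> CONVERT(nvarchar(128), SERVERPROPERTY('Collation'));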

LOW
Regularly cycle the error log

The SQL Server error log is regularly queried. By you when investigating issues. By monitoring tools. And by SQL CoPilot in several of its screens.

If the error log gets too big it significantly slows down these queries. Get into the habit of cycling the error log regularly - daily is common.

Create a job that runs at midnight, that calls the sp_cycle_errorlog stored procedure.

LOW
Do not disable the default trace

The default trace captures events relating to server configuration, DDL changes, security etc, and can be very helpful when troubleshooting issues. SQL CoPilot has a number of screens for viewing the default trace.

The default trace is on by default, but can be turned off via the "default trace enabled" configuration option.

To enable the default trace:

EXEC sp_configure 'default trace enabled', 1;
RECONFIGURE;

Have we missed anything?

Do you have any SQL Server best practices that are not listed here? Let us know - if it is something everyone would benefit from we will add it to this list and update SQL CoPilot for the next release.

How To Generate a SQL Server Best Practices Report

In the "All Instances" tab click "Best Practices Report" then click "Generate Report"

A list of all instances is displayed. You can choose to scan all of these for best practices, or by using the filters at the top you can narrow down the instances to check by name, version, edition or tag. You can even restrict the report to just one instance if you wish by typing its name into the box above the instance name column.

Now click the Generate button and sit back while the instances are checked and the report is generated. You can stop it at any time by clicking the Cancel button.

The finished report will be available every time you visit this screen, even if you restart the application, until it is generated again.

Click on a row in the report to be taken to the relevant screen for the affected instance.

If you don't have SQL CoPilot, download the free trial and then return here to generate your report.