Archive for the ‘SQL’ Category

SQL Load Test Tool V 1.1

Version 1.1 is available and has been migrated to GitHub! Some improvements were made and Continuous Execution was added…

You can access it on the new site: https://github.com/alepelc/EyeOnSql

If you can donate, please do so using PayPal. It will be greatly appreciated!

Read Full Post »

SQL Load Test Tool V 1.0

After a while, version 1.0 is finally available! Improvements were made to the performance checks, and an execution duration setting was added so tests can run in a loop for a set period of time.
Documentation is on its way…

Read Full Post »

The following is a link to a post I created on SQLServerCentral. The essence of the article is extracting deadlock information from the SQL Server logs in a readable way when trace flag 1222 is enabled…

http://www.sqlservercentral.com/articles/deadlocks/74829/

Read Full Post »

Today I published the first version of a SQL load test tool on CodePlex. This is still an alpha version, so any feedback will be appreciated!

The link is https://github.com/alepelc/EyeOnSql

Read Full Post »

In the previous post I covered some suggestions regarding performance of the disk subsystem. The second part of the performance analysis covers indexes and their utilization, as that is also one of the big performance improvements that can easily be made.

Analyzing the Existing Indexes

One of the things you should always look at inside SQL Server is how your indexes are being used (or not used). The best way to do this is with the DMVs that SQL Server 2005 and above provide. Using those DMVs, you can see which indexes are missing, how the existing ones are used, and which ones are unused.
To get meaningful results, it's recommended that you let the server run for a reasonable amount of time first (restarting the SQL engine clears the content of the DMVs), as these are statistics gathered by SQL Server itself. It's hard to put a number on "reasonable time" because it depends on the SQL usage, but as a rule of thumb you can wait 3~4 weeks before acting on the information the DMVs provide.
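
If you're not sure how long the instance has been running, a quick way to check (a minimal sketch, relying on the fact that tempdb is recreated on every engine start) is the following:

-- tempdb is recreated every time SQL Server starts, so its create_date
-- tells you how much history the DMVs can possibly contain
SELECT create_date AS LastEngineStart,
       DATEDIFF(DAY, create_date, GETDATE()) AS DaysOfStatistics
FROM sys.databases
WHERE name = 'tempdb'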

The Missing Indexes

Perhaps the script I use the most, the following code gives an idea of the missing indexes, together with the estimated improvement each index would bring if created. Of course, you should be careful how you use this: creating an index may introduce a performance issue elsewhere, so it is strongly suggested that you create the indexes on a test DB first and run a smoke test against it, to confirm that you are not only improving things but also not adding future nightmares.
So, the script I use is the following:

SELECT
    db_name(d.database_id) as 'DBName'
    ,gs.unique_compiles
    ,gs.user_seeks
    ,gs.last_user_seek
    ,gs.avg_total_user_cost
    ,gs.avg_user_impact
    ,d.[object_id]
    ,d.equality_columns
    ,d.inequality_columns
    ,d.included_columns
    ,d.statement as 'Table'
FROM sys.dm_db_missing_index_groups g
JOIN sys.dm_db_missing_index_group_stats gs
    ON gs.group_handle = g.index_group_handle
JOIN sys.dm_db_missing_index_details d
    ON g.index_handle = d.index_handle
ORDER BY DBName asc, (gs.unique_compiles + gs.user_seeks) desc

The result of the above query can be, for instance, the following:

If you take a look at those results, you can see the engine thinks only three indexes are missing (based on its own statistics), and it shows the columns each index should contain together with the compiles, user seeks and impact (improvement). Next, you can create those indexes and analyze your database performance.
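
As a hedged illustration only (the table, column and index names below are hypothetical, not taken from the results above), a missing-index row reporting equality columns [CustomerId], inequality columns [OrderDate] and included columns [Total] on a table dbo.Orders would translate into something like this, which you should create on a test DB first:

-- Hypothetical index built from one missing-index row:
-- equality_columns go first, then inequality_columns, then INCLUDE the included_columns
CREATE NONCLUSTERED INDEX IX_Orders_CustomerId_OrderDate
ON dbo.Orders (CustomerId, OrderDate)
INCLUDE (Total)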

Index Usage

Another interesting script I use is the following. It shows how your indexes are used in the database, ordering the results with the least used on top.

select
    object_name(s.object_id) as 'Table Name',
    s.object_id,
    i.name as 'Index Name',
    i.index_id as 'Index ID',
    user_seeks,
    user_scans,
    user_lookups,
    user_updates
from sys.dm_db_index_usage_stats s
join sys.indexes i
    on i.object_id = s.object_id
    and i.index_id = s.index_id
where objectproperty(s.object_id, 'IsUserTable') = 1
-- the usage stats DMV is server-wide, so keep only rows for the current database
and s.database_id = db_id()
order by (user_seeks + user_scans + user_lookups + user_updates) asc

If you run this on your database, you can see how many times your indexes were used, which gives you an idea of the least used ones. If you analyze that and weigh it against the cost of keeping each index in place, you may end up removing indexes or, for instance, moving the most heavily used ones to their own disk system.
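
For instance, if you decide to move a heavily used index to its own disk system, you can rebuild it onto a dedicated filegroup. This is only a sketch: the table, index and filegroup names are hypothetical, and the filegroup is assumed to already exist on the separate array.

-- Hypothetical: rebuild an existing index onto a filegroup placed on its own array
CREATE NONCLUSTERED INDEX IX_Orders_CustomerId
ON dbo.Orders (CustomerId)
WITH (DROP_EXISTING = ON)
ON [FG_Indexes]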

Unused Indexes

Finally, I run the following script to get the list of indexes that were not used at all since the server started:

SELECT
    OBJECT_NAME(B.OBJECT_ID) AS TableName,
    B.NAME AS IndexName
FROM SYS.OBJECTS A
INNER JOIN SYS.INDEXES B
    ON A.OBJECT_ID = B.OBJECT_ID
WHERE NOT EXISTS (SELECT *
    FROM SYS.DM_DB_INDEX_USAGE_STATS C
    WHERE B.OBJECT_ID = C.OBJECT_ID
    AND B.INDEX_ID = C.INDEX_ID)
AND A.TYPE <> 'S'
ORDER BY 1, 2

You should take extra care when making decisions based on these results, as acting on them can harm your DB. Here it's important that you have a lot of statistical information and also some deep knowledge of the application and the processes using that DB.
For instance, suppose you have a process that creates a quarterly report, which of course runs quarterly and uses one index of that DB, and you analyze only one month of statistical information. That index will show up as never used, and if you drop it, the next time the process runs to generate the report it won't find the index, so it may take longer, cost the engine a lot more and/or even fail.
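
If, after gathering enough history, you still decide to act on one of these indexes, a more cautious option than dropping it right away is to disable it first: a disabled index keeps its definition and can be brought back with a rebuild, while a dropped one has to be recreated from scratch. The names below are hypothetical.

-- Hypothetical: disable a suspected unused index instead of dropping it
ALTER INDEX IX_Orders_LegacyReport ON dbo.Orders DISABLE

-- If a process (like that quarterly report) turns out to need it, bring it back with a rebuild
ALTER INDEX IX_Orders_LegacyReport ON dbo.Orders REBUILD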

Conclusion

Indexes are very complex and require a lot of analysis by trained people. These DMVs will give you a lot of information, but someone has to make a decision based on that information, and if it isn't interpreted correctly, the result won't be the expected one. Remember to always run the query that will benefit from the new index both before and after creating it, noting the execution time on both runs so you can compare whether the index really improves your performance. Also use the Database Engine Tuning Advisor (DTA), as it will give you extra information about indexes, such as whether they should be clustered or non-clustered, the column order, etc. These DMVs are not a replacement for the DTA but a complement.
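
One simple way to compare those execution times is to wrap the query with the time and IO statistics options and read the elapsed time and logical reads from the Messages tab, once before creating the index and once after. The query below is just a hypothetical placeholder:

SET STATISTICS TIME ON
SET STATISTICS IO ON

-- Hypothetical query that the new index is expected to help
SELECT CustomerId, SUM(Total)
FROM dbo.Orders
WHERE OrderDate >= '20100101'
GROUP BY CustomerId

SET STATISTICS TIME OFF
SET STATISTICS IO OFF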

Read Full Post »

Over the last few years I was involved in different projects, both as a system admin and as a DBA. When you participate from the beginning, it's easy to do things right (or to tend to), but the real problem appears when you join a project that is already deployed and running. The nightmare there is analyzing what was deployed initially, what was intended to be deployed and what the way forward is. You can't imagine how many servers out there are misconfigured. Well, perhaps you can…

Sometimes inexperienced IT guys are told to install and configure a server with no proper guide to help them, or the person who did the capacity planning was relying on previous experience installing systems other than SQL, which leads to servers with poor SQL performance. Another thing you must deal with is the developers and the way they tend to work. They usually have a good understanding of performance, but focused on their code. Not all of them think about databases the way a DBA does, so errors around concurrency, volume of data, filegroups, partitions, etc. are repeated again and again. Those errors are easy to avoid or mitigate with a timely intervention from the DBA. I'll try to highlight some of them in this article and will cover others in the following ones.

So, let's finish the introduction and focus on the really interesting stuff: performance.

Disk System

When it comes to the server's initial configuration, you should take a deep look at the disk configuration. One common mistake is the RAID options IT admins choose. Often the servers that will run SQL are also configured as IIS or file servers, so the standard RAID 1 for the OS and one big RAID 5 for the database, IIS and whatever else the server hosts is a temptation. Of course your server will run with this configuration, but it is far from the best option. I have also seen servers with the disks configured as a single RAID 5 array and then two or more partitions created on it, simulating different disks. Again, this is a big mistake.

Of course you must configure your server based on the money you have to purchase it, but you should always consider the disk bottlenecks and try to reduce them as much as you can by spending the money appropriately. It's better to lose some GBs but make your server run faster, so when configuring the server's disks you should try to set up at least one RAID 1 array for the OS, one RAID 1 array for your data files and one RAID 1 array for your log files. If money is not an issue, then I recommend using a RAID 10 for your SQL data files and another RAID 10 for the log files. Additionally, you should have a separate subsystem for the OS pagefile, another for the binaries and a separate disk for tempdb. If you plan to host databases with heavy queries, used transactionally and/or for a data warehouse, then you should also consider configuring an extra array for your indexes and perhaps one or more arrays for some of the biggest or most used tables.

Another interesting aspect is disk partition alignment on Windows 2003 or earlier. I won't go into deep detail on this, as there's a great article by Jimmy May (http://blogs.msdn.com/jimmymay/archive/2008/10/14/disk-partition-alignment-for-sql-server-slide-deck.aspx) about the improvements in SQL (about 20%~30%) obtained simply by correctly aligning the partitions. If you're on Windows 2008 you don't need to change this, but it's still worth taking a look at that article.

Another thing to consider when you're dimensioning your SQL server is tempdb. Getting the right size for it is tough but not impossible. First of all, you should have one data file per processor and make all data files the same size. Also, disable autogrowth on the files. This reduces contention on the tempdb database, but removing autogrowth can lead to running out of space, so you should definitely talk to the solution's architect (if available) and discuss how the DBs use tempdb. If you can't get that information, then rely on the development server (which you should already have) and see what the usage is there. With that information, you can estimate the size of tempdb, always leaving enough empty room in it, and you should watch how it is impacted during the first weeks after the server's go-live date in order to correct the size (if needed) before you run into a space issue.
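
As a sketch of that layout (the sizes, file names and paths below are hypothetical and should come from your own estimate), a four-core server would end up with four equally sized tempdb data files with autogrowth disabled:

-- Hypothetical tempdb configuration for a 4-core server: equal sizes, no autogrowth
ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, SIZE = 2048MB, FILEGROWTH = 0)
ALTER DATABASE tempdb ADD FILE (NAME = tempdev2, FILENAME = 'T:\tempdb\tempdev2.ndf', SIZE = 2048MB, FILEGROWTH = 0)
ALTER DATABASE tempdb ADD FILE (NAME = tempdev3, FILENAME = 'T:\tempdb\tempdev3.ndf', SIZE = 2048MB, FILEGROWTH = 0)
ALTER DATABASE tempdb ADD FILE (NAME = tempdev4, FILENAME = 'T:\tempdb\tempdev4.ndf', SIZE = 2048MB, FILEGROWTH = 0)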

Regarding the databases you'll host, you should analyze the table structures and make an estimate based on the number of rows expected per week or month. Then project that out, say for a year or two, and you'll have a DB big enough to host all the data without any growth of the data files. Again, this is hard to achieve but not impossible. You can estimate the row size of a table, and then it's as simple as multiplying that by the expected rows per day, week or month. That will give you the basic idea for properly sizing the DB. Also, look at your development or testing servers, as they can reveal incredible information.
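
A back-of-the-envelope example, assuming a hypothetical table that averages 200 bytes per row and receives 50,000 new rows per day:

-- Hypothetical sizing: 200 bytes/row, 50,000 rows/day, projected two years ahead
SELECT
    200.0 * 50000 / 1048576              AS MBPerDay,      -- roughly 9.5 MB per day
    200.0 * 50000 * 365 / 1073741824     AS GBPerYear,     -- roughly 3.4 GB per year
    200.0 * 50000 * 365 * 2 / 1073741824 AS GBForTwoYears  -- roughly 6.8 GB over two years

-- For an existing table you can check the current size and row count instead:
EXEC sp_spaceused 'dbo.Orders'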

In all cases, the load you expect on your servers and the desired response time will determine the type of disk system you should implement. If you can afford solid state disks, go for it, but for the many of us who can't get those yet, a proper capacity plan will save you a lot of headaches. Choosing the right disk configuration will depend on factors like money, but it will also depend on how well you know the applications that will connect to your SQL Server, the databases you'll host and, of course, the expected volume of data to be handled. As a rule of thumb, you should always try to separate the different database and OS components onto different arrays in order to reduce IO bottlenecks.

Read Full Post »

SQL Unit Testing

Introduction

Some time ago, I was asked to configure automatic builds in Team System for a solution, and that build should include as many unit tests as possible. The solution contained a mix of web sites, a wizard installer, OLAP cubes, Reporting Services reports and a set of databases. Because it was in the middle of a refactor, the development team would include tests for the web sites together with the refactor, and the wizard installer already had a set of unit tests for the installation modules. So I started to look for a way to test the deployed databases.

After searching for a while, I found a good tool on CodePlex called SQL Load Test. This tool can translate a SQL Profiler trace file into a Visual Studio 2005 C# file, generating unit tests for each entry in the trace file. The output file is intended to be added to a Test Project.
So I downloaded the tool and read the usage notes: really simple. There are just a few considerations for setting up the SQL Profiler, which are:

Columns

The following columns must be present:

– EventClass
– TextData
– DatabaseName – if this is omitted then the trace is assumed to be all for the same database
– SPID

Events

The following events must be present:

– SQL:BatchStarting
– RPC:Starting
– Audit Login
– Audit Logout

Additionally, the tool will ignore the following entries:

– All entries related to the master, model and msdb databases.
– All entries with no database name at all.
– All calls to sp_reset_connection.

So, after setting up the profiler with the above requirements, I asked a QA co-worker to navigate through a set of different Reporting Services reports. After a few minutes (about 5 or 6 reports, no more than 10 minutes spent), I stopped the profiler and saved the trace to a file.

Using the Unit Test

The next step was using the tool to transform that trace file into a C# unit test. Well, nothing could be easier. If you read the documentation, you'll see that the tool is a command-line application and that only three parameters are needed. The first one is a name for the scenario (a C#-style name). Then, the path and name of the trace file. Finally (and optionally), the path and name of the configuration file. This configuration file is provided with the tool, and the settings are really simple, like creating a test method for each operation or just one with all the operations, etc. If you don't set a custom configuration file, the application loads the default parameters, which are fine for the initial tests.

I ran the tool and in a few seconds I had my C# file containing the tests. Then I added the file to my project, configured the SQL connection string, and voilà: I had added more than 500 unit tests in a few minutes. I repeated this with my co-worker for different reports and modules of the application, and after 2 hours I had more than 4000 unit tests, separated into modules (reports, OLAP queries, transactional queries, etc).

To demonstrate the use of the tool, I have a solution called SQLUnitTesting with two projects: one database project (SQLUnitTesting) and one test project (SQLUnitTesting_TestProject). The database project has 1 table and 1 stored procedure; the stored procedure inserts one record into the table. I configured the profiler to monitor the SQLUnitTesting database, executed two queries in Management Studio (one insert command and the stored procedure) and saved the trace file as SQLUnitTesting.trc.
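
For reference, the database project content can be as simple as the following sketch (these definitions are illustrative; the actual table and procedure names in the demo project may differ):

-- Illustrative content of the SQLUnitTesting database project:
-- one table and one stored procedure that inserts a record into it
CREATE TABLE dbo.TestTable
(
    Id INT IDENTITY(1,1) PRIMARY KEY,
    Description VARCHAR(100) NOT NULL
)
GO

CREATE PROCEDURE dbo.InsertTestRecord
    @Description VARCHAR(100)
AS
BEGIN
    INSERT INTO dbo.TestTable (Description) VALUES (@Description)
END
GO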

After this, I ran the following command from the command line to convert the trace file into the C# unit test:

As you can see, the first parameter is the scenario name (I used the same as the test project namespace), followed by the name of the trace file and the name of the config file. This config file is the default one that comes with the tool.

Next, two files called SQLUnitTesting_TestProject.cs and SQLUnitTesting_TestProject.stubs are created in the application folder. The first one contains the unit tests and the second one contains the connection details. You can discard the .stubs file and use only the .cs file. Then I renamed the .cs file to SQLUnitTesting_TestProject01.cs and added it to my test project:

Finally, you must set the connection string in order to connect to the SQL Server. You can do this any way you want. For this test, I added the following lines in the TestMethod that calls the unit tests:

_connection = new SqlConnection("Initial Catalog=SQLUnitTesting;Integrated Security=SSPI;Data Source=(local)");
_connection.Open();

After all this is done, you can run the test from Visual Studio and see the results:


As you can see, the report indicates I ran only one test. This is because the tool created all queries in a single TestMethod:

[TestMethod()]
public void SQLUnitTesting_TestProject()
{
    Statement();
    Statement2();
}

If you want to see one test per statement, you can separate them into different TestMethods. Attached you'll find the .cs file for my project, for a better understanding of how the statements are executed.

Conclusion

Although there are lots of tools out there for unit testing, and even though Visual Studio provides unit testing for SQL, I found great value in this tool: the user variable. Because these tests are based on real database usage, you get real-life queries against your databases. Suppose you launch the profiler on the production server and leave it running for one or two days: you'll end up with tests that reflect what users actually do. In addition, you can configure the Visual Studio Test Load Agent and monitor the performance of your SQL Server during the execution of the tests, simulating concurrent connections and so on.

I know this tool won't replace the unit tests a developer should write for the database solution, and that there are lots of tools that can achieve similar results, but this application is a great starting point, allowing you to configure unit tests easily and without knowing the code behind your databases.

In my case, this tool provided a great way of managing integrity checks on the solution's databases. Usually, when a change is made to a complex database, a break can occur in the most unexpected procedure or query. At that time, the dev team was switching from one contractor to another, so until the new devs really knew the application, this approach helped avoid releasing a build with errors in the databases.

Read Full Post »
