
Archive for November, 2010

Over the last few years I've been involved in different projects, both as a system admin and as a DBA. When you participate from the beginning, it's easy to do things right (or at least tend to), but the real problems appear when you join a project that is already deployed and running. The nightmare there lies in analyzing what was actually deployed, what was intended to be deployed, and what path to follow from there. You can't imagine how many misconfigured servers are out there. Well, perhaps you can…

Sometimes inexperienced IT guys are told to install and configure a server with no proper guide to help them, or the person who did the capacity planning was relying on previous experience installing systems other than SQL Server, which leads to servers with poor SQL performance. Another thing you must deal with is developers and the way they tend to work. They usually have a good understanding of performance, but focused on their code. Not all of them think about databases the way a DBA does, so the same errors are repeated again and again: concurrency, volume of data, filegroups, partitions, etc. Those errors are easy to avoid or mitigate with a timely intervention by the DBA. I'll try to highlight some of them in this article and cover others in the following ones.

So, let's finish the introduction and focus on the really interesting stuff: performance.

Disk System

When it comes to the server's initial configuration, you should take a deep look at the disk configuration. One common mistake is the RAID option IT admins choose. Often, servers meant to run SQL Server also double as IIS or file servers, so the temptation is a standard RAID 1 for the OS and one big RAID 5 for the databases, IIS, and whatever else the server hosts. Of course your server will run with this configuration, but it is far from the best option. I've also seen servers with the disks configured as a single RAID 5 array, with two or more partitions created on top of it to simulate different disks. Again, this is a big mistake.

Of course you must configure your server based on the money you have to purchase it, but you should always consider the disk bottlenecks and try to reduce them as much as you can by spending the money appropriately. It's better to lose some GBs and make your server run faster, so when it comes to configuring the server's disks, you should have at least one RAID 1 array for the OS, one RAID 1 array for the data files, and one RAID 1 array for the log files. If money is not an issue, then I recommend a RAID 10 for the SQL data files and another RAID 10 for the log files. Additionally, you should have a separate subsystem for the OS pagefile, another for the binaries, and a separate disk for tempdb. If you plan to host databases with heavy queries, used as transactional systems and/or for data warehousing, then you should also consider an extra array for your indexes and perhaps one or more arrays for the biggest or most heavily used tables.

Another interesting aspect is disk partition alignment on Windows 2003 or earlier. I won't go into really deep detail on this, as there's a great article by Jimmy May (http://blogs.msdn.com/jimmymay/archive/2008/10/14/disk-partition-alignment-for-sql-server-slide-deck.aspx) on the improvements in SQL Server (about 20%~30%) gained just by correctly aligning the partitions. If you're on Windows 2008 you don't need to change this, but it's still worth taking a look at that article.
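The alignment check itself is simple arithmetic: a partition is aligned when its starting offset is an even multiple of the RAID stripe unit size. A minimal sketch (the 64 KB stripe unit is just a common example value, not universal):

```python
def is_aligned(partition_offset_bytes: int, stripe_unit_bytes: int) -> bool:
    """A partition is aligned when its starting offset divides evenly
    by the RAID stripe unit size."""
    return partition_offset_bytes % stripe_unit_bytes == 0

# The old Windows 2003 default offset was 32,256 bytes (63 sectors x 512 bytes):
print(is_aligned(32256, 65536))    # misaligned against a 64 KB stripe unit
# The Windows 2008 default offset is 1,048,576 bytes (1 MB):
print(is_aligned(1048576, 65536))  # aligned
```

This is why Windows 2008 partitions need no fixing: the 1 MB default offset divides evenly by every common stripe unit size, while the legacy 63-sector offset divides by none of them.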

Another thing to consider when dimensioning your SQL Server is tempdb. Getting its size right is tough but not impossible. First of all, you should have one data file per processor and make all the data files the same size. Also, disable autogrowth on those files. This reduces contention on the tempdb database, but removing autogrowth can lead to running out of space in the filegroups, so you should definitely talk to the solution's architect (if available) and discuss how the databases use tempdb. If you can't get that information, then rely on the development server (which you should already have) and see what the usage is there. With that information you can estimate the size of tempdb, always leaving enough empty room in it, and you should monitor how it's impacted during the first weeks after the server's go-live date so you can correct the size (if needed) before running into a space issue.
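The sizing rule above can be sketched as a back-of-the-envelope calculation: one equally sized data file per processor, with the total derived from observed usage plus headroom, since autogrowth is off. The headroom factor and the sample numbers are placeholders you'd replace with what you measure on your own development server:

```python
def tempdb_file_layout(cpu_count: int, observed_usage_mb: float,
                       headroom_factor: float = 1.5):
    """Return (file_count, size_per_file_mb): one data file per processor,
    all the same size, with extra room because autogrowth is disabled.
    headroom_factor is an assumed safety margin, not a fixed rule."""
    total_mb = observed_usage_mb * headroom_factor
    size_per_file = -(-total_mb // cpu_count)  # ceiling division
    return cpu_count, int(size_per_file)

# e.g. an 8-CPU server whose dev environment shows ~4,000 MB of tempdb usage:
files, size_mb = tempdb_file_layout(cpu_count=8, observed_usage_mb=4000)
print(files, size_mb)  # 8 files of 750 MB each
```

The point of equal sizes is that SQL Server's proportional-fill algorithm spreads allocations evenly only when the files have the same amount of free space.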

As for the databases you'll host, you should analyze the tables' structure and make an estimate based on the number of rows expected to be added per week or month. Then project that out, say for a year or two, and you'll have a database big enough to host all the data without any growth of the data files. Again, this is hard to achieve but not impossible. You can estimate the row size of a table, and then it's as simple as multiplying that by the expected rows per day, week, or month. That will give you the basic idea for properly sizing the database. Also, look at your development or testing servers, as they can reveal valuable information.
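That multiplication can be sketched as follows. All the inputs here are invented example figures, and the estimate deliberately ignores page overhead and indexes, so treat it as a floor, not an exact number:

```python
def projected_db_size_mb(row_size_bytes: int, rows_per_month: int,
                         months: int, free_space_factor: float = 1.2) -> float:
    """Rough data-file size projection: estimated row size times expected
    rows, projected over a horizon, plus free space so the files never
    need to autogrow. free_space_factor is an assumed margin."""
    data_bytes = row_size_bytes * rows_per_month * months
    return data_bytes * free_space_factor / (1024 * 1024)

# e.g. a 200-byte row, 500,000 rows per month, projected over two years:
print(round(projected_db_size_mb(200, 500_000, 24)))  # ~2,747 MB
```

Run this per large table and sum the results; small lookup tables rarely move the total enough to matter.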

In all cases, the load you expect on your servers and the desired response time will determine the type of disk system you should implement. If you can afford solid state disks, go for them, but for the many of us who can't get those yet, a proper capacity plan will save a lot of headaches. Choosing the right disk configuration depends on money, but also on how well you know the applications that will connect to your SQL Server, the databases you'll host and, of course, the expected volume of data to be handled. As a rule of thumb, always try to separate the different database and OS components into different arrays to reduce I/O bottlenecks.
