In modern times, with most organizations focused on saving costs through virtualization and consolidation, it becomes increasingly important for DBAs to help the organization achieve this objective without compromising on performance, scalability, availability and application compatibility.

In this blog post, I would like to give some pointers on the considerations to ensure predictable behavior from a SQL Server instance post consolidation.

Application Compatibility

This is one of the most important and high-impact considerations for the databases you are planning to consolidate. First, we need to take into account the current compatibility level of the databases. Each SQL Server instance supports only the two previous compatibility levels. For example, SQL Server 2012 supports only databases at compatibility level 90 and higher (90, 100 & 110).

So if any of the databases are running under compatibility level 80, they need to be upgraded and tested against the latest database compatibility level first, before being considered for consolidation.
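As a quick first check, the current compatibility level of every database on the instance can be pulled from sys.databases; a minimal sketch:

-- List each database with its current compatibility level
-- (80 = SQL 2000, 90 = SQL 2005, 100 = SQL 2008/R2, 110 = SQL 2012)
SELECT name, compatibility_level
FROM sys.databases
ORDER BY compatibility_level;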

In order to upgrade, we need to first perform a static T-SQL code review to identify incompatible, breaking or deprecated code. In addition, we need to use the Profiler Deprecation event class or Xevents to identify deprecated T-SQL code at run time. Further, run Upgrade Advisor to surface any issues to be fixed before the upgrade.
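On a running instance, the Deprecated Features performance counters also give a quick cumulative view of which deprecated features have actually been used since the last restart; a small sketch:

-- Deprecated features used since instance startup (cumulative counts)
SELECT instance_name AS deprecated_feature,
       cntr_value AS usage_count
FROM sys.dm_os_performance_counters
WHERE object_name LIKE '%Deprecated Features%'
  AND cntr_value > 0;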

This is where the majority of the developers' time will be spent and can delay the consolidation project.

In addition to this, it is important to identify any application-specific dependencies, e.g. jobs, logins, collation requirements, linked servers, access to the file system, files etc. In short, identify any application dependency which lies outside the database, as sketched in the inventory queries below.
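Most of these instance-level dependencies can be inventoried with a few catalog queries; a rough starting point (not exhaustive, and SQL Agent jobs are not mapped to their target databases here):

-- Logins defined at the instance level
SELECT name, type_desc FROM sys.server_principals
WHERE type IN ('S', 'U', 'G');

-- Linked servers
SELECT name, product, provider FROM sys.servers
WHERE is_linked = 1;

-- SQL Agent jobs
SELECT name, enabled FROM msdb.dbo.sysjobs;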

Configuration Requirements

Each application might have certain specific configuration requirements which might conflict with the requirements of other databases. For example, BizTalk databases mandate MAXDOP 1, otherwise we start seeing deadlocks in the application. However, MAXDOP 1 leads to serial execution of queries and hence might impact the performance of the other databases which were performing well with parallelism.
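MAXDOP is an instance-wide setting, which is exactly why it becomes a conflict point during consolidation; for reference, it is changed via sp_configure:

-- Instance-wide setting: affects every database on the instance
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max degree of parallelism', 1;
RECONFIGURE;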

If there are conflicting requirements, you might consider splitting the databases across multiple instances of SQL Server on the same server. Further, it is important to understand if there are specific configuration requirements at the OS level or the SQL Server level.

Storage Requirements

It is important to understand the storage requirement for each database and the amount of data added per day for capacity planning. You can use the MDW Disk Usage reports to identify the data (in MB) added per day to estimate the storage required.
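If MDW is not deployed, a point-in-time baseline of the current allocated size per database can be taken from sys.master_files; a minimal sketch (this shows allocated file size, not daily growth):

-- Current allocated size per database, in MB (data + log files)
SELECT DB_NAME(database_id) AS database_name,
       SUM(CAST(size AS bigint)) * 8 / 1024 AS allocated_mb
FROM sys.master_files
GROUP BY database_id
ORDER BY allocated_mb DESC;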

IO Requirements

To gather the IO requirements, you can use the following perfmon counters on all the instances:

Process (sqlservr)
IO Data Bytes/sec
IO Data Operations/sec
IO Read Bytes/sec
IO Read Operations/sec
IO Write Bytes/sec
IO Write Operations/sec

Physical Disk
Disk Bytes/sec
Disk Read Bytes/sec
Disk Write Bytes/sec

Once you gather the collective IO workload, you can present the data to the storage vendor, who can estimate the number of disks required to support the IOPS from all the databases collectively and ensure the disk latency is no more than 10-15 ms.
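Perfmon gives instance- and disk-level numbers; to break the IO down per database, sys.dm_io_virtual_file_stats can complement it. The figures are cumulative since the last instance restart, so sample twice and diff to derive rates; a sketch:

-- Cumulative IO per database since instance startup
SELECT DB_NAME(vfs.database_id) AS database_name,
       SUM(vfs.num_of_reads + vfs.num_of_writes) AS total_io_operations,
       SUM(vfs.num_of_bytes_read + vfs.num_of_bytes_written) / 1048576 AS total_io_mb,
       SUM(vfs.io_stall) AS total_io_stall_ms
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
GROUP BY vfs.database_id
ORDER BY total_io_mb DESC;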

Memory Utilization

If you are looking to consolidate a few databases, you can use sys.dm_os_buffer_descriptors and sys.dm_exec_cached_plans to identify the memory footprint consumed by each database in the current instance.


You can use the following queries to identify the same:

SELECT DB_NAME(database_id) AS [DatabaseName],
       COUNT(page_id) * 8 / 1024 AS [DataPagesInMB]
FROM sys.dm_os_buffer_descriptors
GROUP BY database_id;

SELECT DB_NAME(st.dbid) AS [DatabaseName],
       SUM(CAST(cp.size_in_bytes AS bigint)) / 1024 / 1024 AS [ProcCacheInMB]
FROM sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
GROUP BY st.dbid;

The sum of this memory will only give the data/index page and plan cache requirements for the databases. There are other memory components, like connections and locks, which are not accounted for here. Those are difficult to estimate and have to be predicted.

However, if you are planning to consolidate an entire SQL Server instance, you can use the Buffer Manager: Total Pages, Working Set and Private Bytes perfmon counters to estimate the memory requirements for the individual SQL instance.
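The instance-level memory picture is also exposed through sys.dm_os_performance_counters, which avoids setting up a perfmon collection for a quick look; a sketch:

-- Current and target memory for this instance, in KB
SELECT counter_name, cntr_value AS value_kb
FROM sys.dm_os_performance_counters
WHERE object_name LIKE '%Memory Manager%'
  AND counter_name IN ('Total Server Memory (KB)', 'Target Server Memory (KB)');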

Processor Utilization

The processor utilization data can be gathered using the Processor -> % Processor Time and Process (sqlservr) -> % Processor Time counters, which give the average CPU requirements for the given SQL instance.

If you want to identify the maximum CPU utilization per database, you can use Resource Governor to map all the user connections for those databases to a specific workload group and resource pool, and then monitor the maximum CPU usage for that particular workload group.
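A minimal sketch of that setup follows; the pool, group and classifier names and the DB1 database are hypothetical, and the classifier function must be created in master:

USE master;
GO
CREATE RESOURCE POOL PoolDB1;
CREATE WORKLOAD GROUP GroupDB1 USING PoolDB1;
GO
-- Classifier: route connections that log in to DB1 into GroupDB1
CREATE FUNCTION dbo.fnConsolidationClassifier()
RETURNS sysname WITH SCHEMABINDING
AS
BEGIN
    IF ORIGINAL_DB_NAME() = N'DB1'
        RETURN N'GroupDB1';
    RETURN N'default';
END;
GO
ALTER RESOURCE GOVERNOR WITH (CLASSIFIER_FUNCTION = dbo.fnConsolidationClassifier);
ALTER RESOURCE GOVERNOR RECONFIGURE;
GO
-- Then track the per-group CPU statistics
SELECT name, total_request_count, max_request_cpu_time_ms
FROM sys.dm_resource_governor_workload_groups;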

High Availability and Disaster Recovery (HADR) Requirements

Another consideration would be to understand the RPO and RTO for the databases which need to be consolidated. Based on the availability & DR requirements, you can consider Failover Clustering, Database Mirroring or Log Shipping for the individual databases.

With SQL Server 2012, you can also consider enabling indirect checkpoints based on the RPO and RTO requirements.
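Indirect checkpoints are enabled per database by setting a target recovery time; for example (the database name and the 60-second target here are just illustrative):

-- Enable indirect checkpoints with a 60 second recovery target
ALTER DATABASE [YourDatabase] SET TARGET_RECOVERY_TIME = 60 SECONDS;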

A SQL MAP Toolkit assessment would serve as a good starting point for SQL consolidation projects, since using the MAP Toolkit one can get an inventory of the SQL instances in a particular environment, along with their current SQL version, current OS version, and average CPU, memory and IO utilization.

In addition, you can use MDW, Resource Governor, DMVs, Upgrade Advisor, Xevents and Profiler for more detailed data capture and estimation.

The following Excel sheet is a quick summary of the steps one needs to perform for SQL consolidation projects:

Consolidation


Hope this helps!!!

Parikshit Savjani
Premier Field Engineer
