SAML Authentication support with SQL Reporting Services

A couple of days back I received the following set of questions from a customer, and in this blog post I would like to discuss the answers for the benefit of others.

How to get SSRS working with SAML Claims Authentication?

If SSRS in native mode doesn’t support SAML authentication, how does Microsoft Dynamics CRM support SAML token authentication while using SSRS in native mode?

Can we leverage Microsoft Dynamics CRM to allow SSRS to support SAML Authentication?

Let us first start with the basics before we answer the customer’s questions.

SSRS in Native Mode supports

  • Windows Authentication
  • Basic Authentication
  • Custom Authentication

SSRS in SharePoint Integrated Mode supports

  • Classic Windows Authentication
  • Claims Authentication (starting with SharePoint 2010 & SQL Server 2012 SSRS, since SSRS is available as a shared service in SharePoint, and the Claims to Windows Token Service (C2WTS) can convert an STS token into a Windows token that external data sources can authenticate)

So we are all clear that SharePoint natively supports claims as well as SSRS, and hence SSRS in SharePoint Integrated mode is a favorite, tried-and-tested solution for getting SSRS working with SAML claims. However, the customer is looking for options outside SharePoint to save some cost.

Although SSRS in native mode doesn’t support claims authentication out of the box, it does support custom authentication. Custom authentication gives developers the flexibility to build their own security extension to authenticate the user and then authorize the user’s permissions. So if we have good programmers available, any custom web application which supports SAML tokens can be integrated with SSRS in native mode by developing a security extension for custom authentication.

References

http://msdn.microsoft.com/en-us/library/ms155029.aspx

http://msftrsprodsamples.codeplex.com/wikipage?title=SS2008!Security%20Extension%20Sample

The Microsoft Dynamics CRM team has leveraged the custom authentication support of SSRS in native mode by developing its own SSRS security extension (which should explain why you need to install the SSRS extensions for Dynamics CRM) to authenticate & authorize users in SSRS. Further, the CRM team leverages the APIs exposed by the Reporting Services Web Service to deploy, deliver and subscribe to reports.

While using SSRS with Dynamics CRM, security is completely controlled from within CRM, since the users & security roles are defined in CRM.

While using SSRS in CRM mode, some SSRS functionality & features may not be available. For example, you cannot have custom code within your SSRS reports, since CRM uses RDL sandboxing, which doesn’t support custom code.

When it comes to building SSRS reports for Microsoft Dynamics CRM 2011 using Visual Studio (Business Intelligence Development Studio, a feature that can be installed as part of SQL Server), there are two options available that let you query and format your CRM data into flexible, dynamic reports: SQL reports querying the CRM database filtered views, or Fetch, a proprietary query language commonly referred to as FetchXML, which requires the CRM Report Authoring Extension to be installed alongside Visual Studio’s Business Intelligence Development Studio.
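As a quick illustration of the first option, a SQL-based report dataset typically queries the filtered views that CRM generates per entity, which also enforce the CRM security roles of the executing user. A minimal sketch (the FilteredContact view and its columns are the standard ones for the Contact entity, but treat the names as placeholders for your own organization):

-- Contacts created in the last month, as seen through CRM's security model
SELECT fullname,
       emailaddress1,
       createdon
FROM dbo.FilteredContact
WHERE createdon >= DATEADD(MONTH, -1, GETDATE())
ORDER BY createdon DESC;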

Although you can develop SSRS reports in Dynamics CRM, you get very limited functionality & features as compared to SSRS in native mode or SharePoint Integrated mode, which greatly reduces the usefulness of your SSRS deployment.

You can read the following blog from a fellow PFE on the challenges of custom report development in Dynamics CRM.

http://blogs.msdn.com/b/crminthefield/archive/2012/11/27/custom-reporting-in-microsoft-dynamics-crm-fetch-vs-filtered-views.aspx

To answer the customer’s questions:

  • Approach 1: Technically, we can use SSRS with Microsoft Dynamics CRM to support SAML, but it comes with restricted functionality which the customer should be ready to accept.
  • Approach 2: Develop a custom security extension for SSRS to support SAML; however, this requires skilled resources and involves a lot of effort in developing & testing the security extension.
  • Preferred approach: SharePoint 2010/2013 with SSRS 2012, which seamlessly supports SAML with all the SSRS functionality. Further, with SSRS 2012 you can set the execution context while using stored credentials, which can eliminate the pains of Kerberos authentication and make life easier. In addition, SSRS in SharePoint Integrated mode is supported on the Standard edition of SharePoint.

Hope this helps !!!

Parikshit Savjani
Premier Field Engineer

Analyze SQL Database Table Space Usage using TreeMap Visualization in Excel 2013

One of the fascinating features of Excel 2013 is the “Apps for Office” option, which exposes some great apps from the Office Store that you can download and install. One such useful app which I came across is the TreeMap data visualization developed by Microsoft Research, which allows you to represent hierarchical dimension data in the form of nested rectangles.

If you have used WinDirStat to monitor the space on your drive, you know exactly what a TreeMap looks like and how useful it is for hierarchical data.

TreeMap allows you to plot two measures simultaneously via its Size and Color controls. Such a visualization can be useful in the BI world when you want to compare hierarchical dimension members across two measures at the same time.

In my example below, I am using TreeMap to monitor table space usage for a database, viz. AdventureWorks2012, so that by looking at the TreeMap I can find the tables with a large number of rows and the highest unused free space that can be released or reclaimed.

As we know, sp_spaceused in SQL Server reports the number of rows and the unused space within a table, so I use the following stored procedure, which runs sp_spaceused via a cursor to collect the stats from each table and load them into a separate table called TableStats.

USE [AdventureWorks2012]
GO
/****** Object:  Table [dbo].[TableStats]    Script Date: 9/1/2013 5:53:58 PM ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
SET ANSI_PADDING ON
GO
CREATE TABLE [dbo].[TableStats](
	[tableName] [varchar](100) NULL,
	[numberofRows] [varchar](100) NULL,
	[reservedSize] [varchar](50) NULL,
	[dataSize] [varchar](50) NULL,
	[indexSize] [varchar](50) NULL,
	[unusedSize] [varchar](50) NULL
) ON [PRIMARY]
GO

CREATE PROCEDURE [dbo].[GetTableSpaceUsage]
AS
BEGIN

      DECLARE @TableName VARCHAR(100)
      DECLARE @SchemaName VARCHAR(100)
      DECLARE @FullName VARCHAR(200)

      -- Start from a clean slate on every run
      TRUNCATE TABLE TableStats

      -- A single cursor returns the schema and table name together,
      -- so both values always belong to the same object
      DECLARE tableCursor CURSOR FAST_FORWARD
      FOR SELECT SCHEMA_NAME(schema_id), [name]
          FROM sys.objects
          WHERE OBJECTPROPERTY(object_id, N'IsUserTable') = 1

      OPEN tableCursor
      FETCH NEXT FROM tableCursor INTO @SchemaName, @TableName

      WHILE (@@FETCH_STATUS = 0)
      BEGIN
           SET @FullName = @SchemaName + '.' + @TableName
           -- sp_spaceused returns: name, rows, reserved, data, index_size, unused
           INSERT TableStats
           EXEC sp_spaceused @FullName
           -- Get the next table
           FETCH NEXT FROM tableCursor INTO @SchemaName, @TableName
      END

      CLOSE tableCursor
      DEALLOCATE tableCursor

END
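To collect the stats, execute the procedure and then query the results (a quick usage example, assuming the objects above were created in AdventureWorks2012 as in the script):

EXEC [dbo].[GetTableSpaceUsage]
GO
-- Review the collected space statistics
SELECT * FROM [dbo].[TableStats]
GO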

Running the stored procedure populates the TableStats table. Next, let’s go to our reporting tool, viz. Excel 2013. In Excel 2013, click on the Data tab, and if you don’t already have an existing connection to the SQL instance and database, create a new connection as shown below.

ExcelData

You can use the following query to fetch the number of rows and the unused space from TableStats. The unused space returned by the above stored procedure is in string format, as it has “KB” appended at the end of the number, so we strip off the “KB” while importing.

SELECT tableName, numberofRows, LEFT(unusedSize, LEN(unusedSize)-2) AS unused
FROM TableStats

Once the data is loaded in Excel, you can go to the INSERT tab in Excel 2013, click Apps for Office and add TreeMap to your apps. Once you do that, you can see the TreeMap view as shown below.

AppsForOffice

In the Name list, select the tableName column from the imported data; for Size, select the numberofRows column; and for Color, select the unused column.

You can add a title to the TreeMap and hide the table and gridlines, after which you will see a great TreeMap as shown below.

TreeMap

 

From the above TreeMap, it becomes very easy to see that SalesOrderDetail has the largest number of rows (121,317), while the Person table has the highest unused space (2,128 KB), which makes it a good candidate for reclaiming space.

There is another great post on TreeMap by Chris Webb, who has shown the use of TreeMap with Power Query here.

If you would like to download the Excel Workbook, you can download it from here.

Hope this helps !!!

Parikshit Savjani
Premier Field Engineer

 

Ebook Review: Windows Azure SQL Reporting Succinctly by Stacia Misner

I recently came across a very useful set of free ebooks and great developer resources from Syncfusion, which you can see for yourself here. Being an MS BI developer, Windows Azure SQL Reporting Succinctly by Stacia Misner grabbed my attention, and I immediately downloaded the ebook and started reading through it.

I started learning SQL Reporting Services using Stacia Misner’s SQL Server 2008 Reporting Services book, and since then I have been a big fan of her books, with their simple, easy-to-understand language coupled with good examples to explain the concepts.

The best part of this book is that it is a small book of around 100 pages; I tend to refrain from reading long ebooks since we live in a busy world and it is difficult to take time out of our daily routine. Although it is a short book, it is precise and covers most functional aspects of SQL Reporting on Windows Azure.

The ebook starts well by comparing SQL Reporting Services (on-premises) with SQL Reporting on Windows Azure, and further covers a little bit of SQL Azure Database and database migration to SQL Azure, which is required to set up an environment for Windows Azure SQL Reporting.

The book is pretty comprehensive, covering topics from report design, development, management and security, and covers all aspects of the report development life cycle. It is useful for getting Level 200-300 knowledge of Windows Azure SQL Reporting if you are planning to use it as a platform to host your reports in the cloud.

There are some other good developer resources available for free download, which you can find at the following link.

Download Developer Resource (Ebooks) from SyncFusion

 

Happy Learning !!!

Hope this Helps !!!

Parikshit Savjani
Premier Field Engineer

How to provision Power BI App to your Office 365 Enterprise portal

With an excellent, passionate & energetic presentation on Power BI by Amir Netz at WPC, there has been a lot of excitement and buzz for Power BI within the BI community (in case you haven’t watched the video yet, you can find it here). The community is now eagerly waiting to lay their hands on the Power BI preview.

With the release of the Power BI app in the Windows 8 app store, a mobile app for Windows 8 devices to view the Power BI reports hosted on a Power BI site in O365, there was a lot of confusion in the community, which was clarified by AJ Mee in his blog and a video.

After installing the Power BI app on my Windows 8 device, I was even more eager to upload some of the Power View reports I had designed in Excel 2013 to the Power BI site, which is currently available only with the O365 Enterprise edition; you can opt for a trial version, as mentioned by AJ Mee in his video.

So I decided to try out the Enterprise edition of O365 and provision a Power BI site in my O365 portal. However, I found that it was not easy to configure the Power BI site by yourself, and I encountered the same issue that a couple of community members were hitting, as reported in the MSDN forum here.

Luckily, I was able to find official documentation hidden somewhere on office.microsoft.com and not so easily accessible; this Word document is a great help in configuring and provisioning the Power BI app for your Office 365 SharePoint portal.

After following the steps mentioned in this document, Power BI – Provisioning Guide, I was able to configure the Power BI site for my O365 SharePoint portal and upload my Power View Excel files, which could then be viewed with the Power BI app on my Windows 8 device.

If you are keen on learning more about Power BI, you can find a great collection of resources put together by Vivek on TechNet Wiki here.

Hope this helps other community members !!!

Parikshit Savjani
Premier Field Engineer

SSIS Data Flow Task with Export to Excel 2007 xlsx Destination doesn’t retain the datatype from Source

In this blog post, I would like to address a typical limitation of the SSIS Data Flow task which many of us might have experienced when exporting data to an Excel 2007 (xlsx) destination, and which has also been reported in the following Connect item.

https://connect.microsoft.com/SQLServer/feedback/details/477970/excel-data-types-from-ssis

It is actually a limitation of the ACE provider in the way it detects data types for the Excel destination, thereby not preserving the data types from the source.

In order to explain the issue, let me walk through a simple repro as shown below.

  • Consider a simple Data Flow task as shown below               

ssisdataflow

  • For this repro, I have used an OLE DB source pointing to the SQL Server master database with the following query:

select name,crdate
from sysdatabases

  • I have used a Multicast transformation since I wanted to broadcast the output to two Excel destinations, viz. an Excel 2003 destination and an Excel 2007 destination.
  • I have created the two Excel destinations to compare the output of the Excel 2003 (xls) format with that of the Excel 2007 (xlsx) format.

When I open the Excel 2003 File, the output is as shown below (which is the expected and desired outcome).

excel2003png

When I open the Excel 2007 File, the output is as shown below (which is unexpected and undesired)

excel2007

This can be annoying, since one has to manually format the file when there are a large number of columns of type date, number, currency, etc. From a logical perspective, an obvious question would be: how does it work for the Excel 2003 format, and why doesn’t it work for the Excel 2007 format, which is supposed to be the more advanced version? What has changed?

The change is that the former uses the Jet engine provider to export the data to Excel, which is able to detect and preserve the data types from the source to the Excel destination, whereas the latter uses the ACE provider, which uses different logic to detect the data types for the Excel destination, and hence the issue.

If the Excel 2007 destination file has at least one row in it, the ACE provider detects the data types of the header row and uses the same data types to populate the rest of the rows in the file. We can use this behavior of the ACE provider to work around the issue.

So what options do we have to work around the issue?

  1. If suitable, use the Excel 2003 destination to produce an xls file.
  2. Pre-create the Excel destination file with a header row containing dummy data and the correct data types, which serves as a template for the ACE provider for the rest of the rows. You can then hide the first row.

dummyrow

  You can right click on the dummy row and click Hide to hide the row from the end user

If hiding the row doesn’t appeal to you, you can use an Execute SQL Task following the Data Flow Task that loads the Excel destination, to update the header row with null values after the data has been populated into the Excel destination, as shown below.

updatenull
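For reference, the SQLStatement of that Execute SQL Task could look like the sketch below. It runs against the Excel connection manager; the worksheet name (Sheet1$), the column names (name, crdate, taken from the repro query) and the dummy marker value are assumptions to adjust for your own file:

-- Blank out the dummy header row that seeded the column data types
UPDATE [Sheet1$]
SET name = NULL, crdate = NULL
WHERE name = 'dummy'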

One obvious thought would be: rather than updating it to null values, shouldn’t we delete the header row using a DELETE statement in the Execute SQL Task? However, the DELETE SQL statement is not supported for the Excel connection type, as mentioned in the following KB.

http://support.microsoft.com/kb/257819

Still, if updating the header row to null values doesn’t convince you, you can use the following Visual C# code in an SSIS Script Task to delete the header row.

Note that the following code references the Excel Interop DLLs, which are available by installing the Primary Interop Assemblies for Excel; this in turn requires you to install Office on the server (which may or may not be feasible for everyone).

using System;
using System.Data;
using Microsoft.SqlServer.Dts.Runtime;
using System.Windows.Forms;
using Excel = Microsoft.Office.Interop.Excel;

public void Main()
{
    Excel.Application xlApp = new Excel.Application();
    Excel.Workbook xlWorkBook;
    Excel.Worksheet xlWorkSheet;

    // Open the workbook produced by the Data Flow Task
    xlWorkBook = xlApp.Workbooks.Open("C:/Users/pariks/Documents/ExcelExportIssue.xlsx", 0, true, 5, "", "", true, Microsoft.Office.Interop.Excel.XlPlatform.xlWindows, "\t", false, false, 0, true, 1, 0);
    xlWorkSheet = (Excel.Worksheet)xlWorkBook.Worksheets[1];

    // Delete the dummy header row (row 1) and shift the remaining rows up
    ((Excel.Range)xlWorkSheet.Rows[1]).EntireRow.Delete(Excel.XlDeleteShiftDirection.xlShiftUp);

    // Save the cleaned-up copy and close Excel
    xlWorkBook.SaveAs("C:/Powerpivot/ExportExcelIssueResolved.xlsx", Type.Missing, Type.Missing, Type.Missing, Type.Missing, Type.Missing,
        Microsoft.Office.Interop.Excel.XlSaveAsAccessMode.xlExclusive,
        Type.Missing, Type.Missing, Type.Missing, Type.Missing, Type.Missing);
    xlWorkBook.Close(false, Type.Missing, Type.Missing);
    xlApp.Quit();

    Dts.TaskResult = (int)ScriptResults.Success;
}

Hope this helps !!!

Parikshit Savjani
Premier Field Engineer

Web Application Analytics using IIS Logs & Power View

I have been a big fan of Power View for its ad hoc reporting capability and visualizations since its release. With Excel 2013, Power View is integrated natively with Excel, making Excel even more powerful by giving it ad hoc reporting and data exploration capability.

Technically, we all need BI in some form or the other, and hence we all perform analytics on data to deduce useful information. While my colleague & friend Amit Banerjee has shown the ability of Power View to perform Report Server execution log analytics and SQL performance analysis in his blog, I would like to show the use of Power View to analyze IIS logs and use them for web analytics.

I developed this solution while doing performance tuning to improve the response time of the PPS & SSRS reports hosted on a SharePoint portal accessed by intranet users, and it was very well appreciated by my customers as well.

We captured IIS logs with additional information in the IISW3C format, which were then imported into a SQL database with the help of Log Parser. The data captured from the IIS logs came to more than 1 million records for a week’s time, which was then briskly imported into the native Data Model (PowerPivot) in Excel 2013.
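Once the logs land in SQL, you can pre-aggregate them before pulling them into the Data Model. A minimal sketch, assuming Log Parser loaded the rows into a table named dbo.IISLogs and kept the standard IISW3C field names (adjust the names to match your import; time-taken is logged in milliseconds):

-- Average response time (ms) and hit count per page per hour of day
SELECT [cs-uri-stem]            AS Page,
       DATEPART(HOUR, [time])   AS HourOfDay,
       COUNT(*)                 AS Hits,
       AVG([time-taken])        AS AvgResponseMs
FROM dbo.IISLogs
GROUP BY [cs-uri-stem], DATEPART(HOUR, [time])
ORDER BY AvgResponseMs DESC;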

Following are some of the Power View reports which we designed using the IIS logs to analyze the response time of the portal, the number of users connecting to the portal, the most accessed sites, the browsers used, etc.

In this blog post, I will show a couple of the Power View reports I developed, and will share more reports in future blog posts.

We observed that the default landing page of PPS was sporadically taking a long time to load at 6 AM every day, which was later found to be the app pool recycle time; hence the first load after the app recycle had a higher response time.

DefaultPage

 

The following report gives information on the OS and browser statistics of the client users accessing the portal. The stats below inform us that the majority of users are on Windows 7 using IE7 to browse the SharePoint portal.

 

Browser

 

Hope this helps !!!

 

Parikshit Savjani
Premier Field Engineer

 

 

Performance Tuning of SSRS & the use of Emptypoint with SSRS charts

I would never have realized the relevance of the EmptyPoint property of a chart series until I hit an issue while performance tuning one such SSRS report for a customer.

Here is the background,

The SSRS report we intend to tune consists of a line graph measuring page views across months for different sites. I have created a repro with dummy data in my environment. The report looks similar to the one below.

graph1

This report computes the count of page views grouped by Month, which forms the X axis, and by Site, which forms the series.

To tune the report, we should avoid aggregating, grouping or sorting the data at the reporting layer. Although SSRS offers the flexibility to group & sort the data, the report renders faster when the data arrives already grouped & aggregated from the data source.

If we bring detailed data from the data source and group & sort it at the SSRS layer, we fetch a large number of rows from the data source, which also adds network traffic if SSRS is installed on a separate box; furthermore, grouping & sorting at the SSRS layer is slower, which results in slow rendering performance of the SSRS report.

The original dataset query for the above report is as follows:

SELECT     Pageid, YEAR(REQUEST_DATE) AS YearView, DATENAME(MONTH, REQUEST_DATE) AS MonthView, MONTH(REQUEST_DATE) AS MonthSort,Site
FROM         PageViews
WHERE     (Pageid = 4128842) AND (REQUEST_DATE >= '07/01/2000') AND (REQUEST_DATE <'10/01/2009') 
ORDER BY REQUEST_DATE

query1

We rewrote the query as follows, to aggregate and group the data in the dataset itself, so we just need to plot it in the chart in the SSRS report:

SELECT count(MONTH(REQUEST_DATE)) as Pageid, 
YEAR(REQUEST_DATE) AS YearView, 
DATENAME(MONTH, REQUEST_DATE) AS MonthView, 
MONTH(REQUEST_DATE) AS MonthSort, Site 
FROM PageViews 
WHERE (PageID = 4128842 ) AND (REQUEST_DATE >= '01/01/2009') AND (REQUEST_DATE <= '01/01/2010') 
group by YEAR(REQUEST_DATE), MONTH(REQUEST_DATE), DATENAME(MONTH, REQUEST_DATE), Site 
ORDER BY YearView, MonthSort   -- sort by month number so the X axis stays in calendar order

query2

The best part of the above query is that it executes in 40 ms as compared to the 546 ms taken by the previous query; it brings back less data and hence less network traffic, and, more importantly, we get grouped and aggregated data which SSRS only needs to plot in the chart and render in the report.

So we had a tuned query which brings aggregated data, and hence we removed the grouping and aggregation from the SSRS report. Now the report renders very fast, but we observed the following:

graph2

We observe a break in the line graph for the site GOOGLE. We didn’t observe this when the data was grouped & aggregated at the SSRS layer. So let’s understand why this happens; when we look back at the aggregated data fetched from the dataset, we observe the following:

Data

As seen in the above output, we didn’t have any page views for the site GOOGLE in the month of August, and hence there are no data records for August for that site, which leads to the break in the GOOGLE line graph.
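Before touching the chart, a quick check against the repro data confirms the gap; the sketch below counts how many month buckets each site actually returns over the report’s date range (table and column names as in the dataset queries above):

-- A site returning fewer month buckets than the others has gaps in its series
SELECT Site,
       COUNT(DISTINCT MONTH(REQUEST_DATE)) AS MonthsWithData
FROM PageViews
WHERE (PageID = 4128842) AND (REQUEST_DATE >= '01/01/2009') AND (REQUEST_DATE <= '01/01/2010')
GROUP BY Site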

So, how do we deal with empty data points in the dataset? That is where I discovered the EmptyPoint property of the chart, explained in the following article:

http://msdn.microsoft.com/en-us/library/dd207051.aspx

So we changed the EmptyPoint property to Zero under CustomAttributes in the Chart Series Properties, so that if a data point is empty it is treated as zero.

Property1

Next, we set the Color property of the empty point values to match the color of the line graph series.

property2

Next, set the Label property to display the value as Zero in the graph (Visible=True & UseValueasLabel=True)

Property3

Finally set the Marker Type to Auto as shown

PROPERTY4

After setting the CustomAttributes EmptyPoint=Zero and the Color, Label & MarkerType properties, we were able to overcome the line breaks, as shown below.

graph3

Thus we were able to tune the SSRS report without impacting the end user’s viewing experience, with the help of this silent hero, the EmptyPoint property.

Hope this helps !!!

Parikshit Savjani
Premier Field Engineer

Long Blocking Chain Caused due to Compile Locks

Recently, one of my customers was not able to achieve the desired throughput (transactions per second) from their Java-based OLTP application, which is used to pump a data load into a SQL database.

To identify the performance issue, we set up a stress-testing environment to simulate a user workload with a think time of 2 seconds. In parallel, we captured SQLDIAG data while generating the user load.

Further, we used SQL Nexus reports to analyze the SQLDIAG output, and the following is a snapshot of the results and wait stats.

sqlnexus1

From the report, we can clearly see the head blocker, whose maximum blocking chain size had grown to 220,899, with a total blocking duration of 987 seconds (roughly 16 minutes).

The Blocking chain was severely impacting the throughput of the application.

As you can see, the head blocker as well as the blocked sessions are waiting on compile locks.

In Microsoft SQL Server, only one copy of a stored procedure plan is generally in cache at a time. Enforcing this requires serialization of some parts of the compilation process, and this synchronization is accomplished in part by using compile locks. If many connections are concurrently running the same stored procedure and a compile lock must be obtained for that stored procedure every time that it is run, system process IDs (SPIDs) might begin to block one another as they each try to obtain an exclusive compile lock on the object.
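A quick way to confirm that sessions are stuck on compile locks is to look at the wait resource of the waiting requests, which carries a [[COMPILE]] marker; a minimal diagnostic sketch using the DMVs:

-- Sessions currently waiting on a compile lock show a wait_resource such as
-- OBJECT: <dbid>:<objectid>:0 [[COMPILE]]
SELECT r.session_id,
       r.blocking_session_id,
       r.wait_type,
       r.wait_time,
       r.wait_resource
FROM sys.dm_exec_requests AS r
WHERE r.wait_resource LIKE '%COMPILE%';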

I checked and confirmed that we were not hitting any of the issues mentioned in the KB http://support.microsoft.com/kb/263889

We were using the two-part naming convention, and the case of the SP call matched the case of the SP definition, hence the above issues were ruled out.

So we captured a Profiler trace while executing the stored procedure, and we observed the following events in the trace:

Profiler

From the above set of events in Profiler, we understand that after the SP execution starts, we see a User Error Message event (also referred to as a user exception), after which we see an SP:CacheMiss event.

The SP:CacheMiss event explains why the SP goes for compilation on each execution, which in turn results in compile lock blocking.

Now, to understand why we receive the SP:CacheMiss event, we check the User Error Message event received before the CacheMiss event:

Error: 156, Severity: 15, State: 1
Incorrect syntax near the keyword 'SET'.
Executing SQL directly; no cursor.

The RPC Call captured in the profiler trace is

declare @p1 int
set @p1=0
declare @p2 int
set @p2=0
declare @p7 int
set @p7=0
exec sp_cursorprepexec @p1 output,@p2 output,NULL,N'EXEC ',16,8193,@p7 output
select @p1, @p2, @p7

When I execute the above query directly, I don’t see any error message as captured by Profiler above.

When we call the same SP using the Java application, which uses a server-side cursor call (sp_cursorprepexec), it gives the above User Error Message event in Profiler and hence the SP:CacheMiss. Also, the 'SET' keyword mentioned in the error message comes from the first statement of the SP definition, since the first statement in the SP is SET NOCOUNT ON; if I remove the SET option, I receive an error message for DECLARE, which then becomes the first statement of the SP definition.

So why do we receive the User Error Message in Profiler when we run the same SP using Java Application?

The issue was caused by the use of a server-side cursor, which has restrictions on the T-SQL syntax it can execute; the SP we were executing was primarily performing inserts, and we were hitting one of these restrictions. In order to resolve the issue, we had to change the selectMethod setting in the JDBC driver connection string to selectMethod=direct.

After avoiding the server-side cursor by setting selectMethod=direct, there were no user exception messages and hence no SP:CacheMiss (we saw SP:CacheHit in Profiler), which eliminated the compile lock blocking on the server, and thereby we were able to see an increase in transactions per second for the customer.

Hope this helps !!!

Parikshit Savjani
Premier Field Engineer

SQL Consolidation Considerations !!!

In modern times, with most organizations focused on saving costs by means of virtualization and consolidation, it becomes increasingly important for DBAs to help the organization achieve this objective without compromising on performance, scalability, availability & application compatibility.

In this blog post, I would like to give some pointers on the considerations that help ensure predictable behavior from a SQL instance post consolidation.

Application Compatibility

This is one of the most important and highest-impact considerations for the databases which you are planning to consolidate. For this, we first need to take into account the compatibility level the databases are currently running under. Each SQL Server version supports only the current and two previous database compatibility levels. For example, SQL Server 2012 supports only databases at compatibility level 90 & higher (90, 100 & 110).

So any database running under compatibility level 80 first needs to be upgraded and tested against the latest database compatibility level before it can be considered for consolidation.

In order to upgrade, we first need to perform a static T-SQL code review to identify incompatible, breaking or deprecated code. In addition, we need to use the Profiler deprecation events or Extended Events to identify deprecated T-SQL code at run time. Further, run Upgrade Advisor to ensure we are not hitting any issues that must be fixed before the upgrade.
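Alongside the Profiler deprecation events and Extended Events, the deprecation performance counters give a quick server-wide summary of which deprecated features have been used since the instance last started; a minimal sketch:

-- Non-zero counters indicate deprecated features the workload has exercised
SELECT instance_name AS DeprecatedFeature,
       cntr_value    AS UsageCount
FROM sys.dm_os_performance_counters
WHERE object_name LIKE '%Deprecated Features%'
  AND cntr_value > 0
ORDER BY cntr_value DESC;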

This is where the majority of the developers’ time will be spent, and it can delay the consolidation project.

In addition to this, it is important to identify any specific application dependencies, for example jobs, logins, collation requirements, linked servers, access to the file system, files, etc. In short, identify any application dependency which lies outside the database.

Configuration Requirements

Each application might have certain specific configuration requirements which may conflict with the requirements of other databases. For example, BizTalk databases mandate MAXDOP 1, else we start seeing deadlocks in the application. However, MAXDOP 1 leads to serial execution of queries and hence might impact the performance of other databases which were performing well with parallelism.

If there are conflicting requirements, you might consider splitting the databases across multiple instances of SQL Server on the same server. Further, it is important to understand whether there are specific configuration requirements at the OS level or the SQL level.

Storage Requirements

It is important to understand the storage requirement for each database and the amount of data added over time for capacity planning. You can use the MDW Disk Usage reports to identify the data MBs added per day and estimate the storage required.
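If MDW is not available, a lightweight alternative is to snapshot the allocated size per database from sys.master_files on a schedule and compare the snapshots over time; a minimal sketch of the snapshot query (size is stored in 8 KB pages):

-- Currently allocated size per database in MB
SELECT DB_NAME(database_id) AS DatabaseName,
       SUM(CAST(size AS BIGINT)) * 8 / 1024 AS AllocatedMB
FROM sys.master_files
GROUP BY database_id
ORDER BY AllocatedMB DESC;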

IO Requirements

To gather the IO requirements, you can use the following perfmon counters on all the instances, viz.:

Process (sqlservr)
  • IO Data Bytes/sec
  • IO Data Operations/sec
  • IO Read Bytes/sec
  • IO Read Operations/sec
  • IO Write Bytes/sec
  • IO Write Operations/sec

Physical Disk
  • Disk Bytes/sec
  • Disk Read Bytes/sec
  • Disk Write Bytes/sec

Once you gather the collective IO workload, you can present the data to the storage vendor, who can estimate the number of disks required to support the IOPS from all the databases collectively and ensure that the disk latency is no more than 10-15 ms.

Memory Utilization

If you are looking at consolidating a few databases, you can use sys.dm_os_buffer_descriptors and sys.dm_exec_cached_plans to identify the memory footprint consumed by each database in its current instance.

 

You can use the following queries to identify this:

SELECT DB_NAME(database_id) AS DatabaseName, COUNT(page_id)*8/1024 AS [DataPagesInMB]
FROM sys.dm_os_buffer_descriptors
GROUP BY database_id

SELECT DB_NAME(dbid) AS DatabaseName, SUM(size_in_bytes/1024/1024) AS [ProcCacheInMB]
FROM sys.dm_exec_cached_plans
CROSS APPLY sys.dm_exec_sql_text(plan_handle)
GROUP BY dbid

The sum of this memory only gives the data/index pages and plan cache requirements for the databases. There are other memory components, like connections and locks, which are not accounted for here; those are difficult to estimate and have to be predicted.

However, if you are planning to consolidate an entire SQL instance, you can use the Total Pages, Working Set and Private Bytes perfmon counters to estimate the memory requirements for that individual SQL instance.

Processor Utilization

The processor utilization data can be gathered using the % Processor Time and Process (sqlservr) -> % Processor Time counters, which give the average CPU requirements for a given SQL instance.

If you want to identify the maximum CPU utilization per database, you can use Resource Governor to classify all the user connections for that database into a specific workload group and resource pool, and then monitor the maximum CPU usage for that particular workload group.
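A rough sketch of that approach is below; the pool, group, function and database names are placeholders, and the classifier shown here routes sessions by the database name supplied in the connection string (ORIGINAL_DB_NAME()), which is one common pattern. The classifier function must live in master.

-- Pool and group used purely for measurement (no limits applied)
CREATE RESOURCE POOL PoolAppDB;
CREATE WORKLOAD GROUP GroupAppDB USING PoolAppDB;
GO

-- Classifier: route sessions that connect to AppDB into GroupAppDB
CREATE FUNCTION dbo.fnConsolidationClassifier()
RETURNS sysname
WITH SCHEMABINDING
AS
BEGIN
    IF ORIGINAL_DB_NAME() = N'AppDB'
        RETURN N'GroupAppDB';
    RETURN N'default';
END;
GO

ALTER RESOURCE GOVERNOR WITH (CLASSIFIER_FUNCTION = dbo.fnConsolidationClassifier);
ALTER RESOURCE GOVERNOR RECONFIGURE;
GO

-- Monitor cumulative CPU consumed by the group over the measurement window
SELECT name, total_cpu_usage_ms, max_request_cpu_time_ms
FROM sys.dm_resource_governor_workload_groups
WHERE name = N'GroupAppDB';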

High Availability and Disaster Recovery Requirements (HADR)

Another consideration would be to understand the RPO and RTO for the databases which need to be consolidated. Based on the availability & DR requirements, you can consider clustering, database mirroring or log shipping for individual databases.

With SQL Server 2012, you can also consider enabling indirect checkpoints based on the RPO and RTO requirements.
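For reference, indirect checkpoints are enabled per database by setting a target recovery time (SQL Server 2012 onwards); the database name and the 60-second target below are placeholders:

-- A non-zero TARGET_RECOVERY_TIME switches the database to indirect checkpoints
ALTER DATABASE [AppDB] SET TARGET_RECOVERY_TIME = 60 SECONDS;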

A SQL MAP Toolkit assessment would serve as a good starting point for SQL consolidation projects, since using the MAP Toolkit one can get an inventory of the SQL instances in a particular environment, their current version, current OS version, and average CPU, memory and IO utilization.

In addition, you can use MDW, Resource Governor, DMVs, Upgrade Advisor, Extended Events and Profiler for more detailed data capture and estimation.

The following Excel sheet is a quick summary of the steps one needs to perform for SQL consolidation projects.

Consolidation

 

Hope this helps !!!

Parikshit Savjani
Premier Field Engineer

SQL Reporting Services Performance Tuning Tips & Tricks

It has been a really long time since I blogged. However, I will try to blog as often as I can.

Very recently, I performed an SSRS performance review to tune the performance of reports for one of our customers. While analyzing some long-running reports, we identified some common issues which we may not be aware of while designing the report or working on a small dataset. But when the same report is migrated to a production DW with a large dataset & a number of concurrent users fetching the report, the response time of the report degrades.

In the following presentation, I have listed the common tips and tricks to performance tune the reports

 

 

Hope this helps !!!

 

Parikshit Savjani
Premier Field Engineer