How to provision Power BI App to your Office 365 Enterprise portal

With Amir Netz's passionate and energetic presentation on Power BI at WPC, there has been a lot of excitement and buzz around Power BI within the BI community (in case you haven’t watched the video yet, you can find it here). The community is now eagerly waiting to get its hands on the Power BI preview.

With the release of the Power BI app in the Windows 8 app Store (a mobile app for Windows 8 devices to view Power BI reports hosted on a Power BI site in O365), there was a lot of confusion in the community, which AJ Mee clarified in his blog and a video.

After installing the Power BI app on my Windows 8 device, I was eager to upload some of the Power View reports I had designed in Excel 2013 to the Power BI site, which is currently available only with the O365 Enterprise Edition; you can opt for the trial version as mentioned by AJ Mee in his video.

So I decided to try out the Enterprise Edition of O365 and provision a Power BI site in my O365 portal. However, I found that it was not easy to configure the Power BI site by yourself, and I encountered the same issue that a couple of community members were hitting, as reported in the MSDN forum here.

Luckily I was able to find an official document, hidden somewhere on office.microsoft.com and not easily accessible, and this Word document is a great help in configuring and provisioning the Power BI app for your Office 365 SharePoint portal.

After following the steps mentioned in this document, Power BI – Provisioning Guide, I was able to configure the Power BI site for my O365 SharePoint and upload my Power View Excel files, which could then be viewed with the Power BI app on my Windows 8 device.

If you are keen on learning more about Power BI, you can find a great collection of resources put together by Vivek on TechNet Wikis here.

Hope this helps other community members !!!

Parikshit Savjani
Premier Field Engineer

SSIS Data Flow Task with Export to Excel 2007 xlsx Destination doesn’t retain the datatype from Source

In this blog post, I would like to address a typical limitation of the SSIS Data Flow task which many of us might have experienced when exporting data to an Excel 2007 (xlsx) destination, and which has also been reported in the following Connect item:

https://connect.microsoft.com/SQLServer/feedback/details/477970/excel-data-types-from-ssis

It is actually a limitation of the ACE provider in the way it detects data types for the Excel destination, thereby not preserving the data type from the source.

In order to explain the issue, let me walk through a simple repro, as shown below.

  • Consider a simple Data Flow task as shown below               

ssisdataflow

  • For this repro, I have used an OLE DB source pointing to the SQL Server master database, with the following query:

select name,crdate
from sysdatabases

  • I have used a Multicast transformation since I wanted to broadcast the output to two Excel destinations: an Excel 2003 destination and an Excel 2007 destination.
  • I have created the two Excel destinations to compare the output of the Excel 2003 (xls) format against the Excel 2007 (xlsx) format.

When I open the Excel 2003 File, the output is as shown below (which is the expected and desired outcome).

excel2003png

When I open the Excel 2007 File, the output is as shown below (which is unexpected and undesired)

excel2007

This can be annoying, since one has to format the output manually when there are a large number of columns of type date, number, currency and so on. From a logical perspective, an obvious question would be: how does it work for the Excel 2003 format, and why doesn’t it work for the Excel 2007 format, which is supposed to be the more advanced version? What has changed?

The change is that the former uses the Jet engine provider to export the data to Excel, which has the ability to detect and preserve the data type from the source to the Excel destination, whereas the latter uses the ACE provider, which uses a different logic to detect the data type suitable for the Excel destination, and hence the issue.

If the Excel 2007 destination file already has at least one row in the file, the ACE provider detects the data types of that header row and uses the same data types to populate the rest of the rows in the file. We can use this behavior of the ACE provider to work around the issue.

So what options do we have to work around the issue?

  1. If suitable, use the Excel 2003 destination to produce an xls file.
  2. Pre-create the Excel destination file with a header row containing dummy data and the correct, identified data types, which serves as a template for the ACE provider for the rest of the rows. You can then hide the first row.

dummyrow

  You can right-click on the dummy row and click Hide to hide the row from the end user.

If hiding the row doesn’t appeal to you, you can use an Execute SQL Task, following the Data Flow Task that loads the Excel destination, to update the header row with null values after the data has been populated into the Excel destination, as shown below.

updatenull
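
For reference, the UPDATE run by the Execute SQL Task (over the Excel connection manager) might look like the sketch below; the worksheet name Sheet1, the column list and the 'dummy' marker value are assumptions based on this repro, and the WHERE clause relies on the dummy header row having been seeded with a known marker so that only that row is blanked out.

UPDATE [Sheet1$]
SET name = NULL, crdate = NULL
WHERE name = 'dummy'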

One obvious thought would be: rather than updating it to null values, shouldn’t we delete the header row using a DELETE statement in the Execute SQL Task? However, the DELETE SQL statement is not supported for the Excel connection type, as mentioned in the following KB:

http://support.microsoft.com/kb/257819

If updating the header row to null values still doesn’t convince you, you can use the following Visual C# code in an SSIS Script Task to delete the header row.

However, the following code references the Excel Interop DLLs, which are available by installing the Primary Interop Assemblies for Excel, but that in turn requires you to install Office on the server (which may or may not be feasible for everyone).

using System;
using System.Data;
using Microsoft.SqlServer.Dts.Runtime;
using System.Windows.Forms;
using Excel = Microsoft.Office.Interop.Excel;

public void Main()
{
    Excel.Application xlApp = new Excel.Application();
    Excel.Workbook xlWorkBook;
    Excel.Worksheet xlWorkSheet;

    // Open the workbook produced by the Data Flow Task (read-only, since we save a cleaned copy below)
    xlWorkBook = xlApp.Workbooks.Open("C:/Users/pariks/Documents/ExcelExportIssue.xlsx", 0, true, 5, "", "", true, Microsoft.Office.Interop.Excel.XlPlatform.xlWindows, "\t", false, false, 0, true, 1, 0);
    xlWorkSheet = (Excel.Worksheet)xlWorkBook.Worksheets[1];

    // Delete only the dummy header row (row 1) and shift the remaining rows up
    ((Excel.Range)xlWorkSheet.Rows[1]).EntireRow.Delete(Excel.XlDirection.xlUp);

    // Save the cleaned copy, close the workbook and quit Excel
    xlWorkBook.SaveAs("C:/Powerpivot/ExportExcelIssueResolved.xlsx", Type.Missing, Type.Missing, Type.Missing, Type.Missing, Type.Missing,
        Microsoft.Office.Interop.Excel.XlSaveAsAccessMode.xlExclusive,
        Type.Missing, Type.Missing, Type.Missing, Type.Missing, Type.Missing);
    xlWorkBook.Close(false, Type.Missing, Type.Missing);
    xlApp.Quit();

    Dts.TaskResult = (int)ScriptResults.Success;
}

Hope this helps !!!

Parikshit Savjani
Premier Field Engineer

Web Application Analytics using IIS Logs & Power View

I have been a big fan of Power View for its ad hoc reporting capability and visualizations since its release. With Excel 2013, Power View integrated natively into Excel makes Excel even more powerful, giving it ad hoc reporting and data exploration capabilities.

Technically, we all need BI in some form or the other, and hence we all perform analytics on data to deduce useful information. While my colleague & friend Amit Banerjee has shown the ability of Power View to perform Report Server execution log analytics and SQL performance analysis in his blog, I would like to show the use of Power View to analyze IIS logs and use them for web analytics.

I developed this solution while doing performance tuning to improve the response time of PPS & SSRS reports hosted on a SharePoint portal accessed by intranet users, and it was very well appreciated by my customers as well.

We captured IIS logs with additional fields in the IISW3C format, which were then imported into a SQL database with the help of Log Parser. The data captured from the IIS logs was more than 1 million records for a week’s time, which was then quickly imported into the native Data Model in PowerPivot in Excel 2013.
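
For reference, a typical Log Parser 2.2 command line to bulk import IIS W3C logs into a SQL Server table looks like the sketch below; the log path, server, database and table names are placeholders for this example.

LogParser.exe "SELECT * INTO IISLogs FROM C:\inetpub\logs\LogFiles\W3SVC1\*.log" -i:IISW3C -o:SQL -server:MYSQLSERVER -database:WebAnalytics -createTable:ON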

Following are some of the Power View reports we designed using the IIS logs to perform analytics on the response time of the portal, the number of users connecting to the portal, the most accessed sites, the browsers used, and so on.

In this blog post, I will show a couple of the Power View reports I developed, and I will share more reports in future blog posts.

We observed that the default landing page of PPS was sporadically taking a long time to load at 6 AM every day, which was later found to be the app recycle time; hence the first load after the app recycle had a higher response time.

DefaultPage

 

The following report gives information on the OS and browser statistics of the client users accessing the portal. The stats below inform us that the majority of users are on Windows 7 using IE7 to browse the SharePoint portal.

 

Browser

 

Hope this helps !!!

 

Parikshit Savjani
Premier Field Engineer

 

 

Performance Tuning of SSRS & the use of Emptypoint with SSRS charts

I would never have realized the relevance of the EmptyPoint property of a chart series until I hit an issue while performance tuning one such SSRS report for a customer.

Here is the background,

The SSRS report we intend to tune consists of a line graph measuring page views across months for different sites. I have created a repro with dummy data in my environment. The report looks similar to the one below.

graph1

This report computes the count of page views grouped by month, which forms the X axis, and by site, which forms the series.

To performance tune the report, we should avoid aggregating, grouping or sorting the data at the reporting layer; although SSRS gives us the flexibility to group and sort the data, the report renders faster when the data arrives already grouped and aggregated from the data source.

If we bring detailed data from the data source and group and sort it at the SSRS layer, we fetch a large number of rows from the data source, which also adds network traffic if SSRS is installed on a separate box, and the additional grouping and sorting at the SSRS layer is slower, which results in slow rendering performance of the SSRS report.

The original dataset query for the above report is as follows:

SELECT Pageid, YEAR(REQUEST_DATE) AS YearView, DATENAME(MONTH, REQUEST_DATE) AS MonthView, MONTH(REQUEST_DATE) AS MonthSort, Site
FROM PageViews
WHERE (Pageid = 4128842) AND (REQUEST_DATE >= '07/01/2000') AND (REQUEST_DATE < '10/01/2009')
ORDER BY REQUEST_DATE

query1

We rewrote the query as follows to aggregate and group the data in the dataset itself, so that we just need to plot it in the chart in the SSRS report.

SELECT COUNT(MONTH(REQUEST_DATE)) AS Pageid,
YEAR(REQUEST_DATE) AS YearView,
DATENAME(MONTH, REQUEST_DATE) AS MonthView,
MONTH(REQUEST_DATE) AS MonthSort, Site
FROM PageViews
WHERE (PageID = 4128842) AND (REQUEST_DATE >= '01/01/2009') AND (REQUEST_DATE <= '01/01/2010')
GROUP BY YEAR(REQUEST_DATE), MONTH(REQUEST_DATE), DATENAME(MONTH, REQUEST_DATE), Site
ORDER BY YearView, MonthSort -- sort by month number rather than month name

query2

The best part of the above query is that it executes in 40 ms as compared to the 546 ms taken by the previous query, it brings back less data and hence less network traffic, and more importantly the data is already grouped and aggregated, so SSRS only needs to plot it in the chart and render the report.

So we had a tuned query that brings back aggregated data, and hence we removed the grouping and aggregation from the SSRS report. Now when we render the report, it renders very fast, but we observed the following.

graph2

We observe a break in the line graph for the site GOOGLE. We didn’t observe this when the data was grouped and aggregated at the SSRS layer. So let's understand why this happens; when we look back at the aggregated data fetched by the dataset, we observe the following.

Data

 

 

As seen in the above output, we didn’t have any page views for the site GOOGLE in the month of August, and hence there are no data records for August for the site GOOGLE, which leads to the break in the line graph for GOOGLE.

So how do we deal with empty data points in the dataset? That is where I discovered the EmptyPoint property of the chart, explained in the following article:

http://msdn.microsoft.com/en-us/library/dd207051.aspx

So we changed the EmptyPoint property to Zero under CustomAttributes in the Chart Series Properties, so that if a data point is empty it is treated as zero.

Property1

Next, we set the Color property of the empty point values to match the color of the series of the line graph.

property2

Next, set the Label property to display the value as zero in the graph (Visible=True & UseValueAsLabel=True).

Property3

Finally, set the Marker Type to Auto as shown.

PROPERTY4

After setting the EmptyPoint=Zero custom attribute and the Color, Label & MarkerType properties, we were able to overcome the line breaks as shown below.

graph3

Thus we were able to tune the SSRS report without impacting the end user's viewing experience, with the help of this silent hero, the EmptyPoint property.

Hope this helps !!!

Parikshit Savjani
Premier Field Engineer

Long Blocking Chain Caused by Compile Locks

Recently one of my customers was not able to achieve the desired throughput (transactions per sec) from their Java-based OLTP application, which is used to pump data into a SQL database.

To identify the performance issue, we set up a stress testing environment to simulate a user workload with a think time of 2 sec. In parallel, we captured SQLDIAG while generating the user load.

Further, we used the SQL Nexus reports to analyze the SQLDIAG output, and the following is a snapshot of the results and wait stats.

sqlnexus1

From the report, we can clearly see the head blocker; the maximum blocking chain size had grown to 220899, with a total blocking duration of 987 sec, which comes to roughly 16 minutes.

The Blocking chain was severely impacting the throughput of the application.

As you can see, the head blocker as well as the blocked sessions are waiting on compile locks.

In Microsoft SQL Server, only one copy of a stored procedure plan is generally in cache at a time. Enforcing this requires serialization of some parts of the compilation process, and this synchronization is accomplished in part by using compile locks. If many connections are concurrently running the same stored procedure and a compile lock must be obtained for that stored procedure every time that it is run, system process IDs (SPIDs) might begin to block one another as they each try to obtain an exclusive compile lock on the object.
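
As a side note, while the blocking is occurring you can confirm that the waits are indeed on compile locks by querying sys.dm_tran_locks; compile locks show up as OBJECT locks with the COMPILE subresource. A minimal sketch:

SELECT request_session_id, resource_type, resource_subtype,
       request_mode, request_status, resource_description
FROM sys.dm_tran_locks
WHERE resource_subtype = 'COMPILE'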

I checked and confirmed that we were not hitting any of the issues mentioned in the KB http://support.microsoft.com/kb/263889

We were using the two-part naming convention, and the case of the SP call matched the case of the SP definition, hence the above issues were ruled out.

So we captured a Profiler trace while executing the stored procedure, and we observed the following events in the Profiler:

Profiler

From the above set of events in Profiler, we understand that after the SP execution starts, we see a User Error Message (also referred to as a user exception), after which we see an SP:CacheMiss event.

The SP:CacheMiss event explains why the SP goes for compilation on each execution, which in turn results in COMPILE lock blocking.

Now, to understand why we receive the SP:CacheMiss event, we check the User Error Message event received before the CacheMiss event:

Error: 156, Severity: 15, State: 1
Incorrect syntax near the keyword ‘SET’.
Executing SQL directly; no cursor.

The RPC call captured in the Profiler trace is:

declare @p1 int
set @p1=0
declare @p2 int
set @p2=0
declare @p7 int
set @p7=0
exec sp_cursorprepexec @p1 output,@p2 output,NULL,N'EXEC ',16,8193,@p7 output
select @p1, @p2, @p7

When I execute the above query directly, I don’t see any error message as captured by Profiler above.

When we call the same SP from the Java application, which uses a server-side cursor call (sp_cursorprepexec), we get the above User Error Message event in Profiler and hence the SP:CacheMiss. Also, the ‘SET’ keyword mentioned in the error message is the first statement of the SP definition, since the first statement in the SP is SET NOCOUNT ON; if I remove the SET option, I receive the error message for DECLARE, which then becomes the first statement of the SP definition.

So why do we receive the User Error Message in Profiler when we run the same SP using the Java application?

The issue was caused by the use of a server-side cursor, which has restrictions on the T-SQL syntax it can execute; the SP we were executing was primarily performing inserts, and we were hitting one of these restrictions. In order to resolve the issue, we had to change the selectMethod setting in the JDBC driver connection string to selectMethod=direct.
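
For reference, with the Microsoft JDBC Driver for SQL Server, selectMethod is simply another property appended to the connection URL; a sketch is shown below (the server and database names are placeholders, and the exact URL prefix depends on the JDBC driver version in use).

jdbc:sqlserver://MYSQLSERVER:1433;databaseName=MyAppDB;selectMethod=direct;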

After avoiding the server-side cursor by setting selectMethod=direct, there were no user exception messages and hence no SP:CacheMiss events (we saw SP:CacheHit in Profiler), which eliminated the COMPILE lock blocking on the server, and thereby we were able to see an increase in transactions per second for the customer.

Hope this helps !!!

Parikshit Savjani
Premier Field Engineer

SQL Consolidation Considerations !!!

In modern times, with most organizations focused on saving costs by means of virtualization and consolidation, it becomes increasingly important for DBAs to help the organization achieve this objective without compromising on performance, scalability, availability & application compatibility.

In this blog post, I would like to give some pointers on the considerations that help ensure predictable behavior from a SQL instance post consolidation.

Application Compatibility

This is one of the most important and high-impact considerations for the databases you are planning to consolidate. We first need to take into account the current compatibility level the databases are running under. Each SQL Server instance supports only its own and the two previous compatibility levels; for example, SQL 2012 supports only databases at compatibility level 90 & higher (90, 100 & 110).

So any database running under compatibility level 80 first needs to be upgraded and tested against the latest database compatibility level before being considered for consolidation.
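
A quick way to find the candidates is sketched below; the database name is a placeholder, and the target compatibility level depends on the version of SQL Server you are consolidating onto.

-- List databases still running under an older compatibility level
SELECT name, compatibility_level
FROM sys.databases
WHERE compatibility_level < 100;

-- After testing, raise the compatibility level (110 = SQL Server 2012)
ALTER DATABASE [MyAppDB] SET COMPATIBILITY_LEVEL = 110;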

In order to upgrade, we first need to perform a static T-SQL code review to identify incompatible, breaking or deprecated code. In addition, we need to use the Profiler deprecation events or XEvents to identify deprecated T-SQL code at run time. Further, run Upgrade Advisor to ensure we are not hitting any issues that need to be fixed before the upgrade.

This is where the majority of the developers' time will be spent, and it can delay the consolidation project.

In addition to this, it is important to identify any specific application dependencies, e.g. jobs, logins, collation requirements, linked servers, access to the file system, files, etc. In short, identify any application dependency which lies outside the database.

Configuration Requirements

Each application might have certain specific configuration requirements which might conflict with the requirements of other databases. For example, BizTalk databases mandate MAXDOP 1, else we start seeing deadlocks in the application. However, MAXDOP 1 leads to serial execution of queries and hence might impact the performance of the other databases, which were performing well with parallelism.

If there are conflicting requirements, you might consider splitting them across multiple instances of SQL Server on the same server. Further, it is important to understand whether there are specific configuration requirements at the OS level or the SQL level.

Storage Requirements

It is important to understand the storage requirements of each database and the amount of data added over time for capacity planning. You can use the MDW Disk Usage reports to identify the data MBs added per day to estimate the storage required.
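
If MDW is not deployed, the full backup history in msdb can give a rough size trend as well; a minimal sketch, assuming regular full backups are taken, is shown below.

-- Database size trend from the full backup history (size in MB)
SELECT database_name,
       CONVERT(date, backup_start_date) AS backup_date,
       backup_size / 1024 / 1024 AS backup_size_mb
FROM msdb.dbo.backupset
WHERE type = 'D'
ORDER BY database_name, backup_start_date;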

IO Requirements

To gather the IO requirements, you can use the following perfmon counters on all the instances:

Process (sqlservr)
  • IO Data Bytes/sec
  • IO Data Operations/sec
  • IO Read Bytes/sec
  • IO Read Operations/sec
  • IO Write Bytes/sec
  • IO Write Operations/sec

PhysicalDisk
  • Disk Bytes/sec
  • Disk Read Bytes/sec
  • Disk Write Bytes/sec

Once you have gathered the collective IO workload, you can present the data to the storage vendor, who can estimate the number of disks required to support the IOPS from all the databases collectively while keeping the disk latency to no more than 10-15 ms.

Memory Utilization

If you are looking to consolidate a few databases, you can use sys.dm_os_buffer_descriptors and sys.dm_exec_cached_plans to identify the memory footprint consumed by each database in the current instance.

 

You can use the following queries to identify the same:

SELECT DB_NAME(database_id) AS [DatabaseName], COUNT(page_id)*8/1024 AS [DataPagesInMB]
FROM sys.dm_os_buffer_descriptors
GROUP BY database_id

SELECT DB_NAME(st.dbid) AS [DatabaseName], SUM(cp.size_in_bytes)/1024/1024 AS [ProcCacheInMB]
FROM sys.dm_exec_cached_plans cp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) st
GROUP BY st.dbid

The sum of this memory only accounts for the data/index pages and plan cache requirements of the databases; other memory components such as connections and locks are not accounted for here. Those are difficult to estimate and have to be predicted.

However, if you are planning to consolidate an entire SQL instance, you can use the Total Pages, Working Set and Private Bytes perfmon counters to estimate the memory requirements of the individual SQL instance.

Processor Utilization

The processor utilization data can be gathered by using the % Processor Time and Process (sqlservr) -> % Processor Time counters, which give the average CPU requirement of the given SQL instance.

If you want to identify the maximum CPU utilization per database, you can use Resource Governor to direct all the user connections for that database to a specific workload group and resource pool, and then monitor the maximum CPU usage for that particular workload group.
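
A minimal Resource Governor sketch for this kind of measurement is shown below; the pool, group and database names (MeasurePool, MeasureGroup, SalesDB) are hypothetical, and the classifier here routes sessions by the database name specified in the connection string.

-- Hypothetical resource pool and workload group for the database being measured
CREATE RESOURCE POOL MeasurePool;
CREATE WORKLOAD GROUP MeasureGroup USING MeasurePool;
GO

-- Classifier (created in master) routes sessions connecting to SalesDB into MeasureGroup
CREATE FUNCTION dbo.fn_rg_classifier() RETURNS sysname
WITH SCHEMABINDING
AS
BEGIN
    RETURN (CASE WHEN ORIGINAL_DB_NAME() = N'SalesDB' THEN N'MeasureGroup' ELSE N'default' END);
END;
GO

ALTER RESOURCE GOVERNOR WITH (CLASSIFIER_FUNCTION = dbo.fn_rg_classifier);
ALTER RESOURCE GOVERNOR RECONFIGURE;
GO

-- Monitor the CPU consumed by the group over time
SELECT name, total_cpu_usage_ms, max_request_cpu_time_ms
FROM sys.dm_resource_governor_workload_groups;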

High Availability and Disaster Recovery Requirements (HADR)

Another consideration is to understand the RPO and RTO for the databases which need to be consolidated. Based on the availability & DR requirements, you can consider clustering, database mirroring or log shipping for the individual databases.

With SQL 2012, you can also consider enabling indirect checkpoints based on the RPO and RTO requirements.
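
For reference, indirect checkpoints are enabled per database by setting a target recovery time (SQL Server 2012 and later); the database name below is a placeholder.

-- Enable indirect checkpoints with a 60 second recovery target
ALTER DATABASE [MyAppDB] SET TARGET_RECOVERY_TIME = 60 SECONDS;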

A SQL MAP Toolkit assessment serves as a good starting point for SQL consolidation projects, since using the SQL MAP Toolkit one can get an inventory of the SQL instances in a particular environment, their current version, current OS version, and average CPU, memory and IO utilization.

In addition, you can use MDW, Resource Governor, DMVs, Upgrade Advisor, XEvents and Profiler for more detailed data capture and estimation.

The following Excel sheet is a quick summary of the steps one needs to perform for SQL consolidation projects.

Consolidation

 

Hope this helps !!!

Parikshit Savjani
Premier Field Engineer

SQL Reporting Services Performance Tuning Tips & Tricks

It has been a really long time since I have blogged. However, I will try to blog as often as I can.

Very recently, I performed an SSRS performance review to tune the performance of reports for one of our customers. While analyzing some long-running reports, we identified some common issues which we may not be aware of while designing a report or working with a small dataset. But when the same report is migrated to the production DW, with a large dataset & a number of concurrent users fetching the report, the response time of the report degrades.

In the following presentation, I have listed the common tips and tricks to performance tune your reports.

 

 

Hope this helps !!!

 

Parikshit Savjani
Premier Field Engineer

All about Kerberos in Microsoft BI

Last week, I got an opportunity to present at the BI SQLPASS Virtual Chapter on All about Kerberos in Microsoft BI. Kerberos troubleshooting is one of the highest call volume generators for Microsoft CSS, and hence I chose this topic to educate our customers on the steps to configure Kerberos for SQL & the SQL BI products, along with the caveats.

It was a 1 hr 30 min session, and I had demos on configuring Kerberos for SQL, SSAS, SSRS in native mode and SSRS 2012 in SharePoint Integrated mode, explaining why we need dummy SPNs starting with SSRS 2012 in SP Integrated mode. Unfortunately, due to the remote delivery and the lag over Live Meeting, I couldn’t cover all my demos, and hence I will soon be planning a second part of the session covering the configuration of Kerberos for SSRS Native mode and SP Integrated mode.

The presentation was recorded, and the recording will soon be available in the BI SQLPASS presentation archive.

I have uploaded the presentation deck to SlideShare and am embedding it below.

Finally, I would like to thank the SQL BI PASS Virtual Chapter team for providing me this stage to present to a global audience. It was a great experience, and I would encourage readers to present at the SQLPASS VC and contribute to the community.

I will post an update on FB & Twitter when my second session is scheduled.

Improving the Performance of Reports by Simplifying a Drilldown Report into Drill Through Reports

Recently, one of my customers in the IT industry had a P&L drilldown report aggregating P&L metrics from the company level -> client level -> account level -> project level -> employees at the row level, while aggregating the data across the date hierarchy Calendar Year -> Calendar Semester -> Calendar Quarter -> Month at the column group level, with an adjacent column group aggregating the data across the geography hierarchy Country -> State -> City.

The report has a number of expressions to derive a few of the P&L metrics, and each row group had a sort order defined as well.

The report consists of a single tablix which was fetching around 100,000-200,000 (1-2 lakh) records depending on various parameterized filters. The stored procedure responsible for fetching the data for the report runs within 50 sec, while the report was taking 30-50 minutes to render on the report server. Yes, you read that correctly, it took 30-50 minutes to render, and in this blog post I would like to explain why it was taking so long and how we improved the performance by simplifying the report.

To reproduce the scenario, I created a similar drilldown tablix report using the AdventureWorksDW2012 database with the following row grouping and column grouping.

Row Groups

Product Category -> Product Sub Category -> Color -> Size -> Product

Column Groups

Calendar Year -> Calendar Semester -> Calendar Quarter -> English Month Name

Adjacent Column Group

Country -> State -> City -> Reseller

Measure

Order Quantity

In my report, I did not add any expressions, since we can still see the performance dip without them. The report takes approximately 15 sec to load, as compared to 123 ms to execute the query used to fetch the data for the report.

Now, as we all know, whenever we want to tune the performance of a report, we first need to check the execution log to identify where the report is spending most of its time.

SELECT * FROM dbo.ExecutionLog3
WHERE ItemPath like '%Drilldown%'
ORDER By TimeStart DESC

As seen in the above execution log, the report is spending most of its time in processing, so in order to improve the report we need to tune the report processing time. The report processing time is mainly the time spent performing grouping, aggregation, sorting and evaluating expressions. Also, report processing is a memory-intensive operation, and if the report server is installed on a memory-constrained server, it is likely to hit OOM errors or slowness of reports on that server when this report is being processed.

For small sets of data, such reports don’t consume much memory and render within an acceptable time for interactive viewing. However, as the granularity of the dimensions is lowered, the dataset size increases, and thereby the processing time and memory requirements increase exponentially; when such reports are rendered in a production environment with concurrent users, the performance degrades.

The way to improve and tune the processing of such reports is to simplify the design of the report. For such reports, it is also very important to understand the requirements of the business user who will be consuming the report, since analyzing 100,000 records of data interactively from a single report view is an unrealistic expectation.

Most business users would first like to see highly summarized data at the top level of the hierarchy, and if a metric needs further investigation, they might want to drill down to the next level for a specific member of the top-level hierarchy. So in the case of the above report, the analyst is interested in viewing the data at the Product Category level, and if he wants to investigate further into the Bikes product category, he expands the Bikes category to list all the subcategories under it.

To meet this requirement of the business user as well as to simplify the design of the report, we can break the above report into three reports as mentioned below.

  • The main report returning the data aggregated at the Product Category group only, with a drill through action defined on the Product Category column passing the currently selected Category field as a parameter to the second-level drill through report.
  • The Sub Category level report, with Category as a hidden parameter which filters the dataset to that specific category and aggregates the data at the Sub Category level alone, with a drill through action defined on the Product SubCategory column passing the currently selected Sub Category as a parameter to the third-level drill through report (a sketch of such a filtered dataset query follows this list).
  • The Color -> Size -> Product level drilldown report, with Sub Category as a hidden parameter which filters the dataset to that specific subcategory and thereby reduces the dataset to 30-40K records, which the row-grouped drilldown report renders in a few seconds.
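
For illustration, the dataset query behind the second-level (sub-category) report might look like the sketch below against AdventureWorksDW2012, with @Category as the hidden parameter passed by the drill through action (the geography grouping is omitted for brevity).

SELECT ps.EnglishProductSubcategoryName,
       d.CalendarYear, d.CalendarSemester, d.CalendarQuarter, d.EnglishMonthName,
       SUM(f.OrderQuantity) AS OrderQuantity
FROM dbo.FactResellerSales f
JOIN dbo.DimProduct p ON f.ProductKey = p.ProductKey
JOIN dbo.DimProductSubcategory ps ON p.ProductSubcategoryKey = ps.ProductSubcategoryKey
JOIN dbo.DimProductCategory pc ON ps.ProductCategoryKey = pc.ProductCategoryKey
JOIN dbo.DimDate d ON f.OrderDateKey = d.DateKey
WHERE pc.EnglishProductCategoryName = @Category   -- hidden parameter from the main report
GROUP BY ps.EnglishProductSubcategoryName,
         d.CalendarYear, d.CalendarSemester, d.CalendarQuarter, d.EnglishMonthName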

 

 

With this approach, the following are the rendering times for the 3 reports. As seen below, the collective time for rendering all 3 reports is around 8-9 sec, which is a good improvement compared to the single drilldown report.

 

Thus, by simplifying the drilldown report into 3 simple reports using drill through actions, we are able to meet the requirements of the business users as well as improve the processing of our reports to an acceptable time for interactive viewing. In the case of my customer, we reduced the rendering time of the report from 30-50 min to 20 sec, which was well appreciated.

Hope this helps!!!

Parikshit Savjani
Premier Field Engineer

Unable to run DTS Package fetching data from Oracle using ODBC Driver after migrating to 64 bit

Recently, one of my customers migrated and upgraded their servers from SQL 2000 32-bit to SQL 2005 64-bit, following which their DTS packages stopped working. In this blog post I intend to explain the troubleshooting approach we adopted to resolve the issue, along with the resolution to the problem.

When you migrate DTS packages to SQL 2005 or a later version of SQL Server, there are two options to run the DTS packages, either by using DTSRun.exe or by using DTExec.exe, apart from running them through the DTS designer, which is used for development.

Here are a few things to take into consideration when migrating to a 64-bit server.

  • DTS packages designed in a 32-bit environment can only run with the 32-bit runtime and cannot be migrated to 64-bit.
  • The 32-bit runtime is not installed by default with SQL Server and should be downloaded from the separate Feature Pack.
  • On a 64-bit SQL Server instance, SQL Agent is also 64-bit, and hence it cannot execute a 32-bit package directly using the SSIS job step type. So in order to execute the 32-bit package using the 64-bit SQL Agent, you need to use an Operating System (CmdExec) job step which calls the 32-bit DTSRun.exe or the 32-bit DTExec.exe (see the sketch after this list).
  • If your 32-bit DTS package is run using the 32-bit DTSRun.exe/DTExec.exe, and if the DTS package fetches data from Oracle, you need to install the 32-bit version of the Oracle client and not the 64-bit Oracle client.
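
For illustration, a CmdExec job step invoking the 32-bit DTS runtime might look like the sketch below; the installation path, server name and package name are placeholders and will vary with your environment.

"C:\Program Files (x86)\Microsoft SQL Server\80\Tools\Binn\DTSRun.exe" /S "MYSQLSERVER" /E /N "MyDTSPackage"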

From the Oracle side, we need to check the following

  • tnsping to the Oracle service should be working fine. If tnsping fails, you will have to check and verify the tnsnames.ora, network, IP, port and firewall components and resolve that first before you proceed.
  • From the server, use the command line sqlplus to log into the source Oracle server and ensure the user & password are authenticated fine.
  • In the default installation of the Oracle client, SQLNet authentication is set to NTS, which means the authentication is performed at the Windows level, and if your SQL Agent account doesn’t have access to the Oracle server, it will fail. So in general, whenever you are running a DTS package which fetches data from a remote Oracle server, you should turn off the Windows authentication of Oracle and rely on Oracle’s native authentication.

To achieve this, you need to modify the following parameters in SQLNet.ora.

SQLNet.ora can be found in the %ORACLE_HOME%\network\admin folder.

SQLNET.AUTHENTICATION=(none)

SQLNET.AUTHENTICATION_SERVICES=(none)

Once we have tested and verified the above points, we should first run the DTS package from the command prompt using DTSRun.exe or DTExec.exe and see if the package runs successfully. Any error seen here should be resolved first, before scheduling the package to run as an Operating System (CmdExec) job step in a SQL Agent job.

Also, by default the SQL Agent job runs under the SQL Agent service account credentials unless a proxy is configured. So ensure the SQL Agent service account has all the necessary permissions to execute the DTS package, execute the Oracle binaries, load the Oracle DLLs and log in to the SQL instance.

In our case, we were able to run the DTS package from the command line, but when we scheduled the DTS package to run as a job, we received the following error in the job history:

[Microsoft] [ODBC Driver Manager] Driver’s SQLAllocHandle on SQL_HANDLE_ENV failed

The above error message is a generic ODBC provider error, and searching for it on the internet might return many irrelevant hits.

However, since we had the package running fine from the command prompt, we had an advantage in troubleshooting. So we decided to capture a Process Monitor (procmon) trace during a successful run from cmd and a failed run from SQL Agent.

Successful RUN
———————–

  • Download and copy procmon on the server
  • Capture procmon on the server, filtering “Process Name” Is “DTSRun.exe” OR “Dtexec.exe”
  • Run the package from cmd prompt
  • Save the procmon trace file.

Failed RUN
—————–

  • Capture procmon on the server, filtering “Process Name” Is “DTSRun.exe” OR “Dtexec.exe” AND  “Process Name” Is “SQLAGENT90.exe”  OR “SQLAGENT.exe”
  • Run the SQLAgent Job which fails
  • Save the procmon trace file.

Successful RUN
———————-

Open the procmon trace from the successful run, filter on “Path” contains “oracle” (this is because the Oracle home will have the keyword oracle in it) and observe the files accessed from the Oracle client.

 

 

Failed RUN

Open the procmon trace from the failed run, filter on “Path” contains “oracle” and observe the files accessed from the Oracle client.

 

 

As clearly observed in the failed run, DTSRun.exe cannot find oci.dll and hence it cannot locate the Oracle home, which causes it to fail. However, in the command prompt run, it finds oci.dll in the correct Oracle home and proceeds with a successful execution.

Our first guess was: is the ORACLE_HOME environment variable correctly set, and is it set correctly in the registry as well (HKLM\Software\Oracle or HKLM\Software\Wow6432Node\Oracle)?

However, it was set correctly in our case, so what is going wrong here? If you observe the procmon trace of the failed run, you will see that DTSRun.exe looks for oci.dll in various paths but cannot locate it in any of them, while in the successful run it locates it under the correct path. So the PATH variable might be different in the two cases.

So to verify our hypothesis, we ran the SQL Agent job with a CmdExec job step, but with the following command:

Path > “c:\path.txt”

We checked path.txt and found that the PATH variable for the SQL Agent service account did not have the path for the Oracle home (%ORACLE_HOME%\bin), due to which it was unable to locate oci.dll, which explains the failure.

To resolve the issue, we had to set the PATH environment variable for the user profile which runs the SQL Agent service to PATH=%PATH%;%ORACLE_HOME%\bin and restart the SQL Agent service.

Following the restart, the job began to execute the DTS package, and it ran happily ever after :)

If you like my posts, please consider sharing them via Twitter, Facebook, LinkedIn, etc.

Hope this helps !!!

Parikshit Savjani
Premier Field Engineer