27 Aug 2009

Unit Testing Reports within Reporting Services: My Theory

I have been working with Microsoft SQL Server Reporting Services since its first release, across many of my projects.  One of the biggest tasks I find with writing a report is testing it and making sure the data displayed is correct.  My method of developing and testing reports is as follows:

  • Write the queries for the report data set(s) outside Reporting Services, with the parameters defined as they will be configured within the report.
    • Define the parameters that the report data set will use and assign each a valid value.
    • Execute the queries.
    • Inspect the results to make sure they meet the expected results.
    • Repeat with a new valid value, or fix any issues.
  • After one, two or even three successful passes, I take the queries and put them into report data sets.
  • Then I define the report layout.
  • Then I preview the report to make sure it’s still correct.
  • Then I release it to our testers for testing.

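To make the “execute and inspect” steps concrete, here is a minimal sketch of that loop scripted in VB.Net.  The connection string, query, parameter value and expected row count are all hypothetical stand-ins rather than the actual project queries:

    ' Rough sketch of the "execute and inspect" step, scripted in VB.Net.
    ' All names and values below are hypothetical.
    Imports System.Data.SqlClient

    Module DataSetQuerySmokeTest
        Sub Main()
            Dim connectionString As String = _
                "Data Source=.;Initial Catalog=ReportDb;Integrated Security=SSPI"
            ' The same query that will become the report data set, with its parameter.
            Dim query As String = _
                "SELECT TaskName, HoursRemaining FROM dbo.SprintBurndown WHERE SprintId = @SprintId"

            Using connection As New SqlConnection(connectionString)
                Using command As New SqlCommand(query, connection)
                    command.Parameters.AddWithValue("@SprintId", 42) ' a known, valid value
                    connection.Open()
                    Using reader As SqlDataReader = command.ExecuteReader()
                        Dim rows As Integer = 0
                        While reader.Read()
                            rows += 1
                        End While
                        ' Inspect against the expected result for this parameter value.
                        Console.WriteLine(If(rows = 10, "PASS", "FAIL: got " & rows & " rows"))
                    End Using
                End Using
            End Using
        End Sub
    End Module
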
With this amount of manual testing there is the potential for mistakes to creep through, for several reasons: the test data often isn’t diverse enough to test all the logic of the queries; the report data has a lot of rows, which aren’t all inspected; and time runs out because the testing was underestimated, so the amount of testing is reduced.

At the beginning of last year (2008), I looked into a way of testing the reports with an automated tool.  I was looking for a way to take the information that the report was displaying and compare it to some expected results.  The need to test the report itself, and not just the underlying queries, was for the following reasons:

General Reasons:

  • The report could be doing more data manipulation outside the SQL queries that have been unit tested:
    • Embedded reporting calculations
    • Embedded VB.Net code
    • External class libraries code

(All of the above could have been unit tested separately but could still produce incorrect information if wired up incorrectly.)

  • Test visibility settings for sections of the reports.
  • Test any embedded report calculations.
  • The report could be using MDX, and there isn’t an easy unit testing framework for MDX.

My Project Reasons:

  • All the reports used embedded SQL and MDX.
  • The reports were written against a 3rd-party data warehouse and its processes.
  • There was no database project in which to use a unit testing framework for SQL.

Searching the internet, I came across this article: Using Visual Studio 2005 to Perform Load Testing on a SQL Server 2005 Reporting Services Report Server, which gives a guide on how to stress-test reports.  This is helpful for ensuring that the reporting solution/architecture can handle the demands placed upon it.  I also thought that this method of testing could be used for two other forms of testing:

  • Simple unit testing: ensure that the report can render. Placing this within a build process that adds data to the report database and then runs the web test to check that the report renders can help highlight changes to the database that break the build.  This can happen when the reports are based on an application database for simple reporting and that database is maintained by an ORM tool (which at least one of our projects is doing).
  • Build verification testing: does the report render? Running a web test after deployment would highlight any error in the deployment of the solution or of the infrastructure. (A rough sketch of such a render check follows this list.)
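
In its simplest form, such a render check doesn’t even need the Visual Studio web test infrastructure.  Here is a minimal sketch using Reporting Services URL access from VB.Net; the server name, report path and parameter are placeholders:

    ' Minimal build-verification check: does the report render at all?
    ' The server name, folder, report name and parameter are placeholders.
    Imports System.Net

    Module ReportRenderCheck
        Sub Main()
            ' SSRS URL access: render the report server-side and return the output.
            Dim url As String = "http://reportserver/ReportServer?/MyFolder/MyReport" & _
                                "&rs:Command=Render&rs:Format=XML&SprintId=42"
            Dim request As HttpWebRequest = CType(WebRequest.Create(url), HttpWebRequest)
            request.Credentials = CredentialCache.DefaultCredentials ' run as the build account

            Try
                Using response As HttpWebResponse = CType(request.GetResponse(), HttpWebResponse)
                    Console.WriteLine("PASS: report rendered (HTTP " & CInt(response.StatusCode) & ")")
                End Using
            Catch ex As WebException
                ' A broken data set, parameter or deployment surfaces as an HTTP error here.
                Console.WriteLine("FAIL: " & ex.Message)
            End Try
        End Sub
    End Module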

I helped to get this method of testing implemented, for the reasons mentioned above, on an application project, where it did help to capture and highlight builds that broke the reports. However, this didn’t fully meet my requirements, as the method uses web tests and so only inspects the HTTP responses, not the report content.

It wasn’t until recently that I started looking at unit testing reports again, because on my current project testing the reports is one of the hardest jobs, for the following reason: the reports are developed on top of a platform, so I have no control over the flow of the data.  I looked again at the beginning of this year (2009) and again didn’t find much that would help me.  So I went back to the drawing board to design my own approach, and remembered that in June 2008 I had used Reporting Services in an interesting way: I wrote a report whose only purpose was to generate a dynamic XML document for a web page to consume and bind to another control (more information can be found here: Could HTTP Soap / T-SQL endpoint be replaced by Reporting Services XML Reports?).  So I took a further look into the XML render options to see how reports that use charts, which my current project heavily utilises, would render into XML, and found that all the values of the data points within the charts were exposed.  This meant that I could test the reports in the way that I intended.
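
To illustrate why the exposed data points matter, the rendered XML can be interrogated directly for chart values.  In this sketch the file name and element names are purely illustrative; the real names come from the data element settings in the report definition:

    ' Pull chart data point values out of a report rendered to XML.
    ' "Chart1" and "Value" are hypothetical; actual element names depend on
    ' the DataElementName settings in the report definition.
    Imports System.Xml

    Module InspectChartValues
        Sub Main()
            Dim doc As New XmlDocument()
            doc.Load("rendered-report.xml")
            For Each point As XmlNode In doc.SelectNodes("//Chart1//Value")
                Console.WriteLine(point.InnerText)
            Next
        End Sub
    End Module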

My next challenge was to design the process that would render the report into XML and then perform some sort of test.  The test itself was easy: if the report was going to produce an XML document, then I should be able to determine how the report would render that XML, based on the expected results for a known set of data in the data store the report queries, the known parameter values passed to the report, and the XML settings of the report.  So I decided to do a comparison of the report XML against a prepared XML document.  The next problem was how to do the comparison, as there wasn’t a tool that I could find.  I asked a few of my more .NET-savvy colleagues as well, with no joy.

Then I remembered that SQL Server Integration Services (SSIS) has an XML task that can compare two XML documents, checking the elements’ attributes and, more importantly, their values.  So I set about creating an SSIS package to do the following:

  • Loop through a control file that contains the location of each report to test, the required parameters and their values, and the location of the expected XML output.
  • Download the reports in XML format from the Report Server.
  • Use the XML task to compare the results from the Report Server against the expected results. (A sketch of this flow follows below.)
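
The comparison itself is handled by the SSIS XML task, but to make the flow concrete, here is roughly the same loop sketched in VB.Net.  The control-file layout is hypothetical, and the plain string equality below is far cruder than the XML task’s element-by-element comparison:

    ' Sketch of the package flow: loop a control file, fetch each report as
    ' XML, and diff it against the expected output. The control-file schema
    ' is hypothetical, and SSIS's XML task does a proper element-level
    ' comparison rather than the crude string equality used here.
    Imports System.IO
    Imports System.Net
    Imports System.Xml

    Module ReportXmlCompare
        Sub Main()
            Dim control As New XmlDocument()
            control.Load("ReportTests.xml") ' hypothetical control file

            ' e.g. <Test Url="http://server/ReportServer?/Folder/Report&amp;rs:Format=XML&amp;SprintId=42"
            '            Expected="Expected\Report_Sprint42.xml" />
            For Each test As XmlElement In control.SelectNodes("//Test")
                Using client As New WebClient()
                    client.Credentials = CredentialCache.DefaultCredentials
                    Dim actual As String = client.DownloadString(test.GetAttribute("Url"))
                    Dim expected As String = File.ReadAllText(test.GetAttribute("Expected"))

                    Console.WriteLine(test.GetAttribute("Expected") & ": " & _
                                      If(actual.Trim() = expected.Trim(), "PASS", "FAIL"))
                End Using
            Next
        End Sub
    End Module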

Based on some initial testing of this approach against my reports, the results were good.  However, I haven’t been able to fully implement the approach for the following reasons:

  • Need to work on creating the test data for the project.
  • Need to work on getting the test data through the 3rd-party process.
  • Need to work out how to deal with time, as many of my reports display time differently.
  • Need to work out how to make this part of the build/test process.

I have also recently found that my approach is similar to a tool that is available to buy from Innosphere.

3 Aug 2009

SfTS V3: The Beta 1 report newcomers

As mentioned in my previous blog post, there are some new reports available in the Scrum for Team System V3 Beta 1 template. I would like to give a high-level overview of the new reports, as full report guidance explaining in more detail how to use them will be available with the template.

The new Diagnostic report isn’t really a new report; it’s more of a spin-off from another report. Version 2.x included a Version report that provided details about the status of the Team Foundation Server warehouse process. This covered information like:
  • When the process last ran
  • When the process finished loading data into the warehouse
  • How often the warehouse process would run
  • When the cube was last changed

[Screenshot: Version 2.x report]

Team Foundation Server 2010 provides more information about the Team Foundation Server warehouse process, including:
  • Is the warehouse blocked from running
  • Is the OLAP processing blocked from running
  • When the last full process started and ended
  • When the last incremental process started and ended

[Screenshot: Diagnostic report]

As there was more information available, we split the report out so it was less cluttered and easier to read. The Diagnostic report is really useful for investigating reporting issues, such as reports showing out-of-date data.

The Sprint Team Aggregate report, which is new to version 3, is like a classic Sprint burndown in that it shows the burndown of the Sprint Backlog Tasks; however, it plots each selected team within the selected Sprint separately. There is also the option to show an aggregated total of the selected teams. This gives a comparison of how well the teams are working.

[Screenshot: Sprint Team Aggregate report]

The final new report, saving the best till last, is the “Sprint Burndown – Tasks and Stories” report, which provides a comparative view of story versus task work burndown at a Sprint or team level.

[Screenshot: Sprint Burndown – Tasks and Stories report]

This is based upon some of the experiences of our agile coaches and other Scrum teams about what the Sprint burndown should be focusing on: getting Product Backlog Items “Done”. So this report shows how many story points have been burnt down, as well as the traditional view of the task burndown.

One of the things this report can show is how healthy the task burndown actually is within the Sprint. The team(s) could be completing tasks that are not helping towards the completion of stories, which can highlight quality issues and too much work in progress. Another use for this report is to highlight whether Product Backlog Items have been correctly sized, or whether Sprint Backlog Tasks have been broken down correctly according to the size of the associated Product Backlog Item.

This report is also our first to make use of the Reporting Services 2008 enhanced charting features.